Retarding Potential? Concerning the photoelectric effect, my textbook never defines what retarding potential is, and the internet isn't really clear on it either. I'm getting the sense that retarding potential is just the potential of an EM field, but why is it specifically labeled "retarding"? Is it decreasing with time or something?
| Retarding potential has nothing to do with change in time; it refers to the polarity of the field. A retarding potential repels photoelectrons, preventing them from reaching the receiving electrode, so the receiving electrode is held negative relative to the photoelectrode. If it is made negative enough, it rejects all photoelectrons and the circuit current ceases.
The opposite case, with the voltage drop reversed, is called an accelerating potential.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/381407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If we had enough energy available in particle colliders, what reactions could show up if the quark and electron fields weren't fundamental? Suppose the quark and lepton fields weren't the fundamental fields of Nature, but that a "deeper" Lagrangian connected to a generic model of sub-quarks and
-leptons would take over the conventional ones at certain energy (in other words, the current Lagrangians connected with the quark and lepton fields are approximations). And also suppose we had enough energy available in some super high energy collider (or maybe in the LHC).
What events taking place in this collider (or perhaps the LHC, as said) would (or could) convince us that the quark and lepton fields are not the fundamental fields existing in Nature? At the energies currently reached by the LHC (the most powerful collider in the world, but correct me if I'm wrong) we can only conclude that the quark and lepton fields are the basic fields (they are probed down to distances of about $10^{-17}\,\mathrm{m}$), but what if they are not?
The energy at which the non-fundamental character of the quark and lepton fields would show up varies, of course, with the model, but that is no obstacle, since we assume we have enough energy at our disposal.
| It depends on which particles you are colliding and at which energy. When colliding a fundamental particle (or one that is fundamental-like at the energy scale of your experiment) with a composite particle, you would observe deep inelastic scattering between the two. This has various consequences, probably the most important being Bjorken scaling (or an analogue of it in this context).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/381540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Energy density in string wave The total energy density in a harmonic wave on a stretched string is given by
$$\frac{1}{2}\mu A^2 \omega^2 \sin^2(kx-\omega t).$$
We can see that this energy oscillates between a maximum and a minimum. So the energy density is maximum at zero displacement, when the string element is stretched and at its maximum speed (both the KE and PE densities are maximum at the same time), and minimum at maximum displacement, where it is unstretched and has no velocity.
This makes sense, but I am having trouble reconciling it with SHM oscillations. In SHM the KE and PE are not in phase. If we consider each particle of the wave as an SHM oscillator, would the PE not be maximum at maximum displacement?
| Since net energy (potential and kinetic) in a stretched string is a constant in space and time for a uniformly travelling wave, the total energy density must also be a constant.
However, the expression you have written is for kinetic energy density. Working from $y= A \cos(kx-\omega t)$ with $\mu$ as the mass density, we can write $$\mathrm{d}K = \frac{1}{2}\mu v^2 \mathrm{d}x \\\frac{\mathrm{d}K}{\mathrm{d}x} = \frac{1}{2}\mu \omega^2 A^2 \sin^2(kx-\omega t)$$
Similarly, the potential energy density can be derived by applying our knowledge of springs and finding the effective spring constant for a stretched string. You will find $$\frac{\mathrm{d}U}{\mathrm{d}x} = \frac{1}{2}k^2FA^2\cos^2(kx-\omega t)$$
Carrying out the calculation, you will find $k^2F = \mu \omega^2$.
The total energy density is a constant, since $$\frac{\mathrm{d}E}{\mathrm{d}x} = \frac{\mathrm{d}U}{\mathrm{d}x} + \frac{\mathrm{d}K}{\mathrm{d}x} = \frac{1}{2}\mu\omega^2A^2$$
It just switches back and forth between potential and kinetic energy twice every cycle. Since the average of either $\cos^2 \theta$ or $\sin^2 \theta$ is $1/2$, the energy density is on average shared equally between kinetic and potential energy.
You can see how this relates to SHM. As you expect, the potential energy density is maximum at maximum displacement.
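The cancellation between the two densities is easy to verify numerically (a quick sketch; the parameter values below are illustrative, not from the question):

```python
import numpy as np

# Parameters for a wave y = A*cos(kx - wt) on a string (illustrative values)
mu = 0.01      # linear mass density (kg/m)
F = 4.0        # string tension (N)
A = 0.002      # amplitude (m)
k = 5.0        # wavenumber (rad/m)
omega = k * np.sqrt(F / mu)   # dispersion relation for a stretched string

# k^2 * F equals mu * omega^2, so both densities share one prefactor
assert np.isclose(k**2 * F, mu * omega**2)

x = np.linspace(0, 2 * np.pi / k, 500)   # one wavelength
t = 0.3
dK_dx = 0.5 * mu * omega**2 * A**2 * np.sin(k * x - omega * t)**2
dU_dx = 0.5 * k**2 * F * A**2 * np.cos(k * x - omega * t)**2
dE_dx = dK_dx + dU_dx

# The total density is constant in x (and t): 0.5 * mu * omega^2 * A^2
print(np.allclose(dE_dx, 0.5 * mu * omega**2 * A**2))  # True
```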
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/381641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Ratio of specific heats of mixture of gases Suppose I have $1$ mole of a monoatomic gas and $1$ mole of a diatomic gas. If I mix them, the ratio of their specific heats at constant pressure to that at constant volume becomes:$$\gamma = \frac{3}{2}$$
I came up with this result by averaging the $C_p$ and $C_v$ of both gases over $1+1=2$ moles. My question is: why does this work? Why does averaging give the right result? How would I go about calculating the $\gamma$ of a mixture of gases if there is more than $1$ mole of each gas?
| The change in internal energy and enthalpy of mixing ideal gases is zero. According to Gibbs' Theorem, the individual contribution of each species in an ideal gas mixture to the extensive thermodynamic properties of the mixture is the same as that of the pure species at the same temperature and at the partial pressure of the species in the mixture.
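Because $U$ and $H$ of ideal gases are additive, the mole-weighted average is exactly the right recipe for any amounts of any number of gases. A minimal sketch (`gamma_mixture` is a hypothetical helper name, not from the answer):

```python
R = 8.314  # gas constant, J/(mol K)

def gamma_mixture(moles_cv):
    """moles_cv: list of (n_i, Cv_i) pairs; returns Cp/Cv of the ideal-gas mixture.

    Mole-weighted averaging works because internal energy and enthalpy of
    ideal gases are additive: Cv_mix = sum(n_i * Cv_i) / n_total, and for
    each ideal gas (and the mixture) Cp = Cv + R per mole.
    """
    n_total = sum(n for n, _ in moles_cv)
    cv_mix = sum(n * cv for n, cv in moles_cv) / n_total
    return (cv_mix + R) / cv_mix

# 1 mol monoatomic (Cv = 3R/2) + 1 mol diatomic (Cv = 5R/2)
print(gamma_mixture([(1, 1.5 * R), (1, 2.5 * R)]))  # 1.5, i.e. gamma = 3/2
```

The same function handles unequal amounts, e.g. `gamma_mixture([(2, 1.5 * R), (3, 2.5 * R)])`.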
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/381819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Where is humidity? During hot and humid weather, we sweat incessantly due to high humidity. But when we sit under a fan, we feel cold and comfortable. Why do we feel cold and chilled? Why don’t we feel the humidity?
| The reason you are sweating is not high humidity directly. It is high temperature. But high humidity prevents the body from cooling down and thus indirectly causes sweating.
*Temperature: The body starts sweating when it becomes too hot. The evaporation of sweat into the air absorbs energy, so by sweating the body has a mechanism for cooling itself down.
*Humidity: Now, the drier the air, the more water can evaporate into it. At 0 % humidity, with no water present in the air, sweat evaporation is fast and cooling very effective. At 100 % humidity, where the air is "stuffed" and can hold no more water, the evaporation process stops. Then suddenly sweating doesn't work and the body cannot cool effectively anymore, which causes it to sweat even more.
When sitting under a fan, the air is constantly circulated. Convection increases. The air close to your body absorbs the sweat through evaporation, and quickly it is replaced by new air that can start a new evaporation and so on. This is the function of a ceiling fan.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/381923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Supersymmetry beyond $D=11$ spacetime dimensions Taking into account the higher spin theories, from which string theory is an effective field theory, I just wondering if there is something to do to extend supersymmetry to any dimension without any mathematical consistency failure. Could string-theory/M-theory be formulated in such a way fields of ANY spin naturally occur? After all, we have F-theory or S-theory in 12D and 13D.
| The papers https://arxiv.org/abs/1409.2476 and https://arxiv.org/abs/1504.00602 by Choi discuss supergravity in twelve dimensions.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/382264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why do gravitational mass and inertial mass appear to be indistinguishable? I have learnt that the heavier an object is (the more gravitational mass it has), the more it resists changes in its motion (the more inertial mass it has).
I can accept this fact but I can't find out the reason behind it. What dynamic, what phenomena could cause this? Does it have something to do with the atomic structure of the object?
| Picture it like this. Imagine you have 2 crates. In each crate there is pure iron, but one box has 2 times the number of iron atoms, so it has twice the mass. The weight of the box itself should be negligible.
Now imagine that both are moving at the same speed. When you apply the same force to both boxes, the box with twice the number of iron atoms is slowed less (it has the bigger mass), because you have to stop each iron atom, and each atom carries the same kinetic energy as the individual iron atoms in the other box. This means you will not be able to slow them all as much as you could slow the fewer iron atoms in the other box. This is the reason behind inertia.
Each iron atom at a given speed has the same kinetic energy as every other iron atom (if you don't take thermal motion into account).
This means that in the bigger box you have to add up all the iron atoms and multiply by their energy of motion. Because there are twice as many as in the other box (twice the mass), it will be harder to stop them all.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/382345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 1
} |
Question about the velocity and acceleration in tensor notation When computing the velocity of a particle moving along a curve parametrized by $Z^i(t)$ for each component $i$, the components of the velocity $V^i$ are given by $$V^i = (d/dt)Z^i$$ and the components of the acceleration are given by $$A^i=(d/dt)V^i + \Gamma^i_{jk} V^j V^k.$$
My question is: why doesn't the derivative of the basis vectors appear in the expression for the velocity? For the Christoffel symbol to appear in the acceleration expression, there has to be a derivative with respect to the basis vectors. What am I missing here? Any help will be appreciated.
| “When computing the velocity of a particle moving along a curve.“ I think the confusion arises from an ambiguity in the question: is it the particle itself that moves along the given curve, or does the curve represent the particle’s velocity (the change in its position)? Does $Z$ represent a scalar field or a vector field?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/382440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Are those equations in error propagation related? I am trying to prove that the uncertainty of the equation $$Q = xy$$ is equal to $$\frac{\Delta Q}{Q_0} = \frac{\Delta x}{x} + \frac{\Delta y}{y}.$$
However, what I am getting is $$\Delta Q =\sqrt{(y\Delta x)^2 + (x \Delta y)^2} $$ and I am stuck there. How do I continue, to show that it is equal to the second equation that I have written?
| *The formula is
$$\left( \frac{\Delta Q}{Q_0} \right)^2 = \left( \frac{\Delta x}{x} \right)^2 + \left( \frac{\Delta y}{y} \right)^2. $$
*Your other, more general formula is correct.
In your last step, divide by $Q_0 = xy$.
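The quadrature formula for independent errors can be checked against a quick Monte Carlo experiment (the measured values and uncertainties below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x0, dx = 10.0, 0.2   # x = x0 ± dx (illustrative values)
y0, dy = 5.0, 0.1    # y = y0 ± dy

# Quadrature formula for Q = x*y with uncorrelated errors
dQ_quad = np.sqrt((y0 * dx)**2 + (x0 * dy)**2)

# Monte Carlo check: sample x and y, look at the spread of Q = x*y
x = rng.normal(x0, dx, 1_000_000)
y = rng.normal(y0, dy, 1_000_000)
dQ_mc = np.std(x * y)

print(dQ_quad, dQ_mc)  # the two agree to well within 1 %
```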
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/382624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why is small work done always taken as $dW=F \cdot dx$ and not $dW=x \cdot dF$? I was reading the first law of thermodynamics when it struck me. We haven't been taught differentiation but still, we find it in our chemistry books. Why is small work done always taken as $dW=F \cdot dx$ and not $dW=x \cdot dF$?
| Because work $W$ is a force $F$ causing a change in position $\Delta x$.
Not just a force $F$ causing a position $x$. Or a change in a force $\Delta F$ causing a position $x$. Neither makes much sense. We are talking about a change in position - that is how work is defined.
And such a change $\Delta x$ is simply symbolized $dx$ when it is very, very (infinitely) tiny.
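A quick numerical way to see that the two integrals are genuinely different quantities, sketched with a made-up nonlinear force (for a linear force like a Hooke spring the two happen to coincide, which hides the distinction):

```python
import numpy as np

def trap(y, x):
    """Plain trapezoid rule for the integral of y over the sampled x values."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Hypothetical nonlinear force F(x) = x^2 over the displacement 0 <= x <= 1
x = np.linspace(0.0, 1.0, 100_001)
F = x**2

W = trap(F, x)      # integral of F dx = 1/3 : the work done by the force
not_W = trap(x, F)  # integral of x dF = 2/3 : a different quantity, not work

print(W, not_W)  # ≈ 0.3333  ≈ 0.6667
```

The two results differ, and by integration by parts they sum to $xF\big|_0^1 = 1$; only $\int F\,dx$ matches the energy transferred by the force.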
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/382726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 8,
"answer_id": 0
} |
Probability of measuring eigenvalue of non-normalised eigenstate This came up while working on a question about measuring the angular momentum of a particle in a superposition of angular momentum eigenstates:
Given that:
$$\langle\theta,\phi|\psi\rangle \propto \sqrt{2} \cos(\theta) + \sin(\theta)e^{-i\phi} - \sin(\theta)e^{i\phi}$$
What are the possible results and the corresponding probabilities for measurements of $\hat{L}^2$ and $\hat{L}_z$?
The result for $\hat{L}^2$ is simply $2\hbar^2$, as all three terms are eigenstates of $\hat{L}^2$ with eigenvalue $2\hbar^2$.
However, the three terms are eigenfunctions of $\hat{L}_z$ with different eigenvalues, namely $0$, $\hbar$ and $-\hbar$. Now my question is whether I first have to normalise the eigenfunctions and then take the modulus squared of the coefficients to find the probabilities of measuring the corresponding eigenvalue, or whether it is possible to straight away write down: $$p(L_z=0)=\frac{|\sqrt{2}|^2}{|\sqrt{2}|^2+|1|^2+|-1|^2}$$
So basically my question is:
Given a wave function $|\psi\rangle$ and an operator $\hat{A}$, with eigenvalues $\lambda_i$ and non-normalised eigenfunction $|a_i\rangle$, and: $$|\psi\rangle = \sum_i{c_i|a_i\rangle}$$
Is it still true that the probability of obtaining a measurement $\lambda_i$ is given by $p_i=|c_i|^2$?
| Given a generic vector $\psi$, the expectation value of an observable $A$ over it is given by
$$\operatorname{EXP}_\psi[A] =\frac{\langle\psi|A|\psi\rangle}{\langle\psi|\psi\rangle}.$$
You can check the normalisation by using the identity operator in place of $A$. Hence you get a well-defined state on the C*-algebra of observables.
The probability of obtaining $\lambda_i$ can be obtained by evaluating the expectation value of the projection $E_i=|a_i\rangle\langle a_i|$, and this shows you that you are missing a "corrective" factor of $\frac1{\langle\psi|\psi\rangle}$. If your vector state is not normalised, then $\langle\psi|\psi\rangle\neq1$ and therefore it must be taken into account.
The point is that, even though you are not forced to use normalised states, sooner or later you will have to normalise anyway, else you'd end up with a total probability that differs from 1.
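A small numeric sketch of this point, using generic (hypothetical) two-component vectors rather than the spherical harmonics of the question: when the eigenvectors $|a_i\rangle$ are not unit vectors, you must both normalise each projection and divide by $\langle\psi|\psi\rangle$, so the naive $|c_i|^2$ prescription can give the wrong answer:

```python
import numpy as np

# An operator with orthogonal but NOT normalised eigenvectors a_i,
# and an unnormalised state psi = sum_i c_i a_i  (all values made up).
a = [np.array([2.0, 0.0]), np.array([0.0, 3.0])]
c = np.array([1.0, 1.0])                 # expansion coefficients
psi = c[0] * a[0] + c[1] * a[1]          # psi = (2, 3)

# Correct probabilities: project onto *normalised* eigenvectors,
# then divide by <psi|psi>.  Naive |c_i|^2 would give 1/2 each here.
probs = []
for ai in a:
    ai_hat = ai / np.linalg.norm(ai)
    probs.append(abs(ai_hat @ psi)**2 / (psi @ psi))

print(probs)  # [4/13, 9/13], not [1/2, 1/2]; they sum to 1 as required
```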
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/382838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Recognizing speech at 1-bit quantization depth? I found on German Wikipedia an audio example of 1-bit-depth quantization in which the speech can still be recognized. How is this possible if at 1-bit depth we have just two values, "signal" and "no signal"? Here is the example: https://upload.wikimedia.org/wikipedia/commons/4/43/Ampl1rp.ogg
| 1-bit quantization involves sign-only sampling, which can be done at fantastic rates, well over 10^9 samples per second. For purposes of speech recognition, frequencies over 6 kHz are irrelevant. If a low-frequency signal plus wideband noise is subjected to 1-bit quantization, nonlinear phenomena can sometimes interfere with extraction, but they don’t always do that. Consider the following cases:
(1) A low-frequency signal of high amplitude plus wideband (think white) noise of low amplitude. The noise can throw off the recorded sign of the signal whenever the signal crosses zero, but this sort of error averages to zero. A bigger problem is that sinusoidal signals get converted to square waves, so there are nasty intermods -- beats and harmonics. In the simple case of square waves, the harmonics contain only 20% of the power. Ratios of amplitudes will be distorted in mixtures of overtones, possibly preventing the identification of vowels distinguished by ratios of overtones in the formant regions of the spectrum.
(2) A low-frequency signal of low amplitude plus wideband gaussian noise of high amplitude. The weak signal will register only when the noise happens to be smaller, but that happens often enough to allow extraction with minimal distortion by intermods. There will be 2 dB of nonlinear suppression relative to many-bit quantization. Ultimately, a high sampling rate saves the day. Averaging N uncorrelated samples will enhance the signal-to-noise ratio by a factor of N.
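Case (2) can be sketched in code (the tone frequency, amplitudes and sampling rate are made up for illustration): a weak tone buried in strong Gaussian noise still survives sign-only quantization, because averaging the 1-bit stream against the reference tone picks out the correlation:

```python
import numpy as np

fs = 48_000                      # sampling rate (Hz), illustrative
t = np.arange(fs) / fs           # one second of samples
# A weak "speech-like" low-frequency tone plus strong wideband noise
tone = np.sin(2 * np.pi * 300 * t)
noisy = 0.3 * tone + np.random.default_rng(1).normal(0.0, 1.0, fs)

# 1-bit quantization: keep only the sign of each sample
one_bit = np.sign(noisy)

# Averaging many samples recovers the tone: the correlation of the
# 1-bit stream with the reference tone is clearly nonzero.
corr = np.mean(one_bit * tone)
print(corr)   # ~0.12, clearly positive: the tone survives 1-bit depth
```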
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/382997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How does a distant observer see matter that "forms" an initial black hole? I did see Can black holes form in a finite amount of time? but it does not seem to discuss how a distant observer would see the evolution of the collapsing matter that forms a black hole. Does the observer see the matter disappear under the horizon, or see it radiated back as Hawking radiation, never actually seeing the matter fall through the horizon?
| Strictly speaking, the far away observer never sees the black hole forming, and after a while it will start receiving Hawking radiation, until nothing is left.
The evaporation process can be thought of as starting slightly outside the horizon (at a stretched horizon). This is pictured in the Penrose diagram of an evaporating black hole (remember that light travels at $45°$ in such a diagram).
Have a look also at:
From where (in space-time) does Hawking radiation originate?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/383194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is so special about the factor $\sqrt{1-{v^2/c^2}}$ in special relativity? I am studying a book about relativistic equations and special relativity, and I keep seeing $\sqrt{1-{v^2/c^2}}$ everywhere. It is not, as with most of the concepts in special relativity, simply a mathematical construct; it is a logical consequence of accepting the experimental fact that the speed of light is the same in every inertial reference frame. Why, then, is this expression so significant?
| The reciprocal Lorentz factor
$$\gamma^{-1}~=~\sqrt{1-{v^2/c^2}}~=~\frac{d\tau}{dt}~<~1$$ is, e.g., the ratio between proper time $d\tau$ and coordinate time $dt$.
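A minimal sketch of this ratio as code (the speeds are illustrative): at everyday speeds $d\tau/dt$ is indistinguishable from 1, while at relativistic speeds it deviates strongly.

```python
import math

def reciprocal_gamma(v, c=299_792_458.0):
    """d(tau)/dt = sqrt(1 - v^2/c^2) for a clock moving at speed v."""
    return math.sqrt(1.0 - (v / c) ** 2)

# Airliner-like speed: the ratio differs from 1 only in the 13th decimal
print(reciprocal_gamma(250.0))
# At 0.6c the moving clock ticks at 80% of the coordinate-time rate
print(reciprocal_gamma(0.6 * 299_792_458.0))  # 0.8
```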
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/383290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
} |
Does special relativity imply that time dilation is affected by an orientation of clocks? Many texts about STR time dilation use as an example a thought experiment with a two-mirror photon clock.
The conclusion of this experiment is: In a frame moving relative to the clock, they will appear to be running more slowly.
As I understand it, this is just a visual effect; it doesn't mean that processes in the system containing the clock are affected by someone observing it from a moving frame.
I can't imagine any other interpretation, because that would result in all sorts of paradoxes. For example, take 3 clocks oriented parallel, perpendicular and at 45 degrees relative to the direction of motion of the frame. If you visualise the light path from the moving frame's perspective, as is done in the wiki link above, and interpret it analogously, this would imply that some of the 3 clocks in the same frame run slower and some faster, depending on orientation.
According to the same wiki page, this time dilation is not just a visual effect and does change the behavior of objects; here is a quote from the 2nd paragraph:
Such time dilation has been repeatedly demonstrated, for instance by small disparities in a pair of atomic clocks after one of them is sent on a space trip, or by clocks on the Space Shuttle running slightly slower than reference clocks on Earth, or clocks on GPS and Galileo satellites running slightly faster.
So if we continue our analogy, we could take 4 pairs of atomic clocks and send 3 of them on a space trip oriented differently; we would then get different time readings on them.
We can even continue this absurdity, recall the "twin paradox", and conclude that the twin whose clock was perpendicular to the direction of motion would end up older...
| Let me solve this problem for an arbitrary inclination angle of the light clock, to show that the time dilation is independent of the light clock's orientation. If the clock is inclined at an angle $\theta^\prime$ in its rest frame, this angle changes into $\theta$ from the viewpoint of the lab observer, with respect to whom the light clock moves at $v$, so that we have: [See the attached Figure.]
$$\cos\theta'=\frac{x'}{L'}\quad\text{and}\quad\cos\theta=\frac{\alpha x'}{L} \tag{1&2}$$
Recall that $\alpha$ is the reciprocal of the Lorentz factor. Moreover, we have:
$$\tan\theta'=\frac{y'}{x'}\quad\text{and}\quad\tan\theta=\frac{y'}{\alpha x'} \tag{3&4}$$
Eqs. (1&2) and Eqs. (3&4) respectively imply:
$$\frac{\cos\theta'}{\cos\theta}=\frac{L}{\alpha L'}\quad\text{and}\quad\frac{\tan\theta'}{\tan\theta}=\alpha \tag{5&6}$$
Eqs. (5&6) yield:
$$\frac{L}{L'}=\frac{\alpha/\cos\theta}{\sqrt{1+\alpha^2\tan^2\theta}}=\frac{c\alpha}{\sqrt{c^2-v^2\sin^2\theta}} \tag{7}$$
Now, using the law of cosines for $\Delta ABC$, we get:
$$c^2t_1^2=v^2t_1^2+L^2-2vt_1L\cos(\pi-\theta) \rightarrow $$
$$t_1=\frac{v\cos\theta+\sqrt{c^2-v^2\sin^2\theta}}{c^2-v^2}L \tag{8}$$
Using the law of cosines for $\Delta BCD$, we finally get:
$$c^2t_2^2=v^2t_2^2+L^2-2vt_2L\cos\theta \rightarrow $$
$$t_2=\frac{-v\cos\theta+\sqrt{c^2-v^2\sin^2\theta}}{c^2-v^2}L \tag{9}$$
For $t=t_1+t_2$, we have:
$$t=\frac{2L\sqrt{c^2-v^2\sin^2\theta}}{c^2-v^2}\tag{10}$$
As we know the time measured by the observer in the light clock's rest frame is $t^\prime=2L'/c$, thus we can write:
$$\frac{t}{t'}=\frac{c\sqrt{c^2-v^2\sin^2\theta}}{c^2-v^2}\frac{L}{L'} \tag{11}$$
Substituting Eq. (7) into Eq. (11), we get:
$$\frac{t}{t'}=\frac{1}{\alpha} \tag{12}$$
Therefore, the time dilation is independent of the light clock's orientation.
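The orientation-independence of Eqs. (7) and (10) can also be checked numerically (a sketch in units with $c=1$ and an illustrative $v=0.6$):

```python
import numpy as np

c = 1.0

def lab_length(Lp, v, theta):
    """Eq. (7): lab-frame length L of a clock of rest length L' at lab angle theta."""
    alpha = np.sqrt(1.0 - v**2 / c**2)
    return Lp * c * alpha / np.sqrt(c**2 - v**2 * np.sin(theta)**2)

def round_trip_time(L, v, theta):
    """Eq. (10): lab-frame round-trip time t = t1 + t2 for the inclined clock."""
    return 2.0 * L * np.sqrt(c**2 - v**2 * np.sin(theta)**2) / (c**2 - v**2)

Lp, v = 1.0, 0.6
alpha = np.sqrt(1.0 - v**2)
tp = 2.0 * Lp / c                       # rest-frame round-trip time t' = 2L'/c
for theta in np.linspace(0.0, np.pi / 2, 7):
    t = round_trip_time(lab_length(Lp, v, theta), v, theta)
    assert np.isclose(t / tp, 1.0 / alpha)   # Eq. (12), independent of theta

print("t/t' =", 1.0 / alpha)   # 1.25 for v = 0.6c, for every orientation
```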
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/383461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 2
} |
What should the scale show?
If $g=10\ \mathrm{N/kg}$, what will the scales show?
Yes, I know this is a very simple problem, but I am stuck.
$$P=mg \Rightarrow m=\frac{100\,\mathrm{N}+100\,\mathrm{N}}{10\,\mathrm{N/kg}}=20\ \mathrm{kg}.$$
But I'm worried about this answer. What's the difference between a hand holding the scales and putting them on a table? I think that if the scales are held by hand, the force applied from above should not count, and the answer must be $m=10\ \mathrm{kg}$. Is that correct?
| The answer is $10 kg$. When you apply Hooke's law to one end of a spring: $F=-kx$, it is implicit that the other end is fixed in place by a force $-F$. This force may be applied by the wall, hand, another mass, etc...
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/383646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Magnetic force direction Good day, all!
While trying to solve this question,
I used the right-hand rule, and according to it the force should be directed outwards (pointing toward me),
but here is the answer that puzzled me.
I really don't get why it is down, and I would be very grateful if someone could explain the reason.
Thanks in advance!
| The easiest way to find the force on the coil is probably the magnetic Lorentz force law $$ \vec F=q\vec v \times \vec B$$ The current at any point in the loop corresponds to positive charges $q$ moving in the current direction with velocity $\vec v$. Thus, at any element of the coil, you use the right-hand rule for the cross product to find the direction of the magnetic force. Since the magnetic field is inhomogeneous, and at the wire $\vec B$ is not parallel to the axis but inclined towards it, there is a force component pointing downwards, so that in the sum over all elements there is a resultant magnetic force downwards on the coil.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/384200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does an aerial respond to any frequency? What is the range of frequencies that will produce a signal in an aerial?
Would a frequency of 1 Hz be effective with appropriate power? And what about an upper limit: do frequencies in the region of visible light or X-rays produce an oscillation of charges in an aerial?
| Antennas receive electromagnetic radiation by the electrons in the antenna interacting with the electric field of the incoming wave and generating a detectable current.
This is the table of electromagnetic radiation:
As long as the wavelength of the radiation is large, the antenna will respond to the fields, so some signal will be there. For visible light, the wavelengths are small, of the order of atomic distances, and as we know, light scatters off metals or is absorbed and heats them. No antenna function, i.e. no current, can build up, and the same is true for smaller wavelengths.
At longer wavelengths, where waves can be modulated and carry a signal, there is frequency dependence.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/384359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why do electrons emit phonons instead of photons? Why do electrons emit phonons when they "relax" into the minimum energy level of the conduction band after getting into it from the valence band by absorbing a photon with an energy higher than their bandgap? Why don't they simply emit a photon with an energy equivalent to the energy of the phonon emitted? In other words, why a phonon and not a photon?
| "To conserve the k-vector." To occupy a place in the band diagram, an electron must have the right $k$ and the right energy $E$; see any $E$-$k$ diagram of the conduction band. Emitting a photon would only lower the energy of the electron, leaving its $k$ value unaltered. But if it emits a phonon, both $k$ and $E$ are reduced, so that it finds a suitable place in the $E$-$k$ plot.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/384558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Am I checking incompressibility of a velocity flow correctly? My velocity flow is defined by $u_r$, $u_{\theta}$, $u_x$. This makes the strain rate tensor of the velocity flow equal to:
$J_{ij} = \begin{bmatrix} u_{rr} & u_{r\theta} & u_{rx} \\
u_{\theta r} & u_{\theta \theta} & u_{\theta x} \\
u_{xr} & u_{x\theta} & u_{xx} \end{bmatrix}$
Where $J$ can be split into a symmetric part $\mathcal{D}$ and an antisymmetric part $\Omega$, which are defined as:
$\mathcal{D_{ij}} = \frac{1}{2} (u_{ij} + u_{ji}), \quad \mathcal{D}^T = \mathcal{D}\\
\Omega_{ij} = \frac{1}{2} (u_{ij} - u_{ji}), \quad \Omega^T = - \Omega$
I now have to check whether or not the flow field is incompressible. It seems to me that compressibility is described by the diagonal terms of $J$, and vorticity by the off-diagonal terms. Am I right when I say that the relevant quantity is the trace of the symmetric part,
$$\mathrm{tr}\,\mathcal{D} = \frac{1}{2}(2\, u_{rr} + 2\, u_{\theta \theta} + 2\, u_{xx}) = u_{rr} + u_{\theta\theta} + u_{xx},$$
and that if I show this is equal to zero, the fluid flow is incompressible?
| The continuity equation in cylindrical coordinates is $$\frac{1}{r}\frac{\partial (ur)}{\partial r}+\frac{1}{r}\frac{\partial v}{\partial\theta}+\frac{\partial w}{\partial z}=0$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/385033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why doesn't my kitchen clock violate thermodynamics? My kitchen clock has a pendulum, which is just for decoration and does not power the clock. The pendulum's arm has a magnet that is repelled by a second magnet fixed to the clock's body. The repelling magnets are at their closest when the pendulum is at its lowest point.
We all (hopefully) agree that a regular pendulum would eventually slow down due to friction. But I honestly cannot recall ever seeing the clock's pendulum at rest.
By my calculations the magnet would slow the pendulum as it falls but accelerate it as it swings up the other side. So how would a magnet actually create any net benefit to the pendulum?
Will the pendulum eventually stop, or if not, how is it not violating the laws of thermodynamics?
| The pendulum is being driven by the magnet: the fixed magnet in the clock is actually the pole of an electromagnet which the clock is using to drive the pendulum: the clock is putting energy into the pendulum via the electromagnet. Almost certainly the clock 'listens' for the pendulum by watching the induced current in the electromagnet, and then gives it a kick as it has just passed (or alternatively pulls it as it approaches).
People have used techniques like this to actually drive a time-keeping pendulum (I presume this pendulum is not keeping time but just decorative) but I believe they are not as good as you would expect them to be, because the pendulum is effectively not very 'free'. 'Free' is a term of art in pendulum clock design which refers to, essentially, how much the pendulum is perturbed by the mechanism which drives it and/or counts swings, the aim being to make pendulums which are perturbed as little as possible. The ultimate limit of this is clocks where there are two pendulums: one which keeps time and the other which counts seconds to decide when to kick the good pendulum (and the kicking mechanism also synchronises the secondary pendulum), which are called 'free pendulum' clocks.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/385298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 3,
"answer_id": 0
} |
Incompressible 2D Navier-Stokes equation I am trying to solve for and simulate the vorticity numerically (finite difference method); however, there's one part I was hoping to get some help with.
I need to find the fluid velocity $\mathbf{u}$ from the vorticity $\omega$. I can write
$$\mathbf{u} = (\nabla \phi) \times \mathbf{\hat{z}} + \mathbf{u_0} ,$$
where $\mathbf{u_0} $ is known and $ \phi $ is the fluid field potential and to find $ \phi $ we solve
$$ \mathbf{\hat{z }} \cdot (\nabla \times \mathbf{u}) \ = \nabla^2 \phi \ = \ \omega.$$
This is a problem with periodic boundary conditions and I know that the velocity won't change if $ \phi $ is changed by a constant so I could choose a point in the plane such that $ \phi = 0 $. And this is where I don't know how to proceed. I'd really appreciate some help.
How could I go about choosing this point?
| If you want to solve for $\phi$, you could add in a condition,
$$
\int_\Omega \phi\,{\rm d}V =0,
$$
which would make the solution unique. To do this numerically, you would probably need to use a Lagrange multiplier.
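As a minimal numerical sketch of this idea (my own illustration, not the OP's code): on a periodic box the common alternative to an explicit Lagrange multiplier is a spectral solve, where leaving the $k=0$ Fourier mode of $\phi$ at zero is exactly the zero-mean condition above.

```python
import numpy as np

def solve_poisson_periodic(omega, L=2 * np.pi):
    """Solve laplacian(phi) = omega on a periodic square of side L,
    fixing the free constant by forcing the mean of phi to zero
    (the spectral analogue of the integral constraint above)."""
    n = omega.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    omega_hat = np.fft.fft2(omega)
    phi_hat = np.zeros_like(omega_hat)
    nonzero = k2 > 0
    phi_hat[nonzero] = -omega_hat[nonzero] / k2[nonzero]
    # k = 0 mode left at zero  <=>  integral of phi over the domain is 0
    return np.real(np.fft.ifft2(phi_hat))

# check on a manufactured solution: phi = sin(x)cos(y)  =>  omega = -2*phi
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
phi_exact = np.sin(X) * np.cos(Y)          # already has zero mean
phi_num = solve_poisson_periodic(-2.0 * phi_exact)
print(np.max(np.abs(phi_num - phi_exact)))  # machine precision
```

The same zero-mean normalization carries over to a finite-difference solve: append the discrete constraint as an extra row (Lagrange multiplier) to the singular Laplacian system.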
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/385601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If fluids have zero shear modulus, how do I make sense of graphs like strain rate vs shear stress (to classify fluids as Newtonian or non-Newtonian)? Following the definition on the wiki: Fluids are substances that have zero shear modulus, or, in simpler terms, a fluid is a substance which cannot resist any shear force applied to it.
If fluids have zero shear modulus, shouldn't the shear stress be zero regardless of the strain rate?
Thanks
| Solids have a shear modulus that relates the shear stress to the shear strain. Liquids have a viscosity that relates the shear stress to the shear strain rate. Apply a shear stress to a solid and it deforms a bit, reaching a new equilibrium shape that remains motionless until the stress is removed. Apply a shear stress to a liquid and it continues to deform at a constant rate until the stress is removed. No matter how small the stress, the strain will become arbitrarily large given enough time: shear modulus of liquids is zero.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/385803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Discontinuity of metric derivatives in the Israel junction formalism It is often said that given the metrics $g^+$, $g^-$ on two sides of a hypersurface $\Sigma$, then, with a level-set function $\phi$ such that $\Sigma = \phi^{-1}(0)$, we can describe the metric on the whole manifold by
\begin{equation}
g = \theta(\phi) g^+ + (1 - \theta(\phi)) g^- \tag{1}
\end{equation}
And then, the derivatives of the components are simply
\begin{equation}
g_{ab,c} = \partial_c \theta(\phi) (g^+ - g^-) + \theta(\phi) g^+_{ab,c} + (1 - \theta(\phi)) g^-_{ab,c}\tag{2}
\end{equation}
and since it is assumed that $g$ is continuous,
\begin{equation}
g_{ab,c} = \theta(\phi) g^+_{ab,c} + (1 - \theta(\phi)) g^-_{ab,c}\tag{3}
\end{equation}
The discontinuity in the derivatives is then said to be
\begin{equation}
[g_{ab,c}] = \gamma_{ab} n_c\tag{4}
\end{equation}
for $n$ a normal form to $\Sigma$ and $\gamma_{ab}$ some tensor, and the notation corresponds to
\begin{equation}
[F] = \lim_{p \in M^+ \to \Sigma} F(p) - \lim_{p \in M^- \to \Sigma} F(p)\tag{5}
\end{equation}
The proof for this seems rather elusive, but according to Clarke and Dray, this stems from the fact that for $v$ some vector field such that $g(v, n) = 0$, with $n$ some extension of the normal form (I'm guessing via the normal bundle of the surface), we have
\begin{equation}
v^c[g_{ab,c}] = v^c [g_{ab}]_{,c} = 0\tag{6}
\end{equation}
which then implies that $[g_{ab,c}] = \gamma_{ab} n_c$. I'm not quite sure how to show this. Expanding everything, I get
\begin{equation}
(\lim_{p \in M^+ \to \Sigma} \theta v^c g^+_{ab,c} - \lim_{p \in M^- \to \Sigma} (1 - \theta) v^c g^-_{ab,c})\tag{7}
\end{equation}
given coordinates with tangent vectors $(n, \partial_\alpha)$, we can decompose this as
\begin{equation}
v^c g^\pm_{ab,c} = v^\alpha g^\pm_{ab,\alpha}\tag{8}
\end{equation}
since $v$ has no $n$ component. How to show that this quantity is then continuous upon crossing the boundary? Do I need to define the first fundamental form for every hypersurface $\Sigma_\varepsilon$ along the normal bundle of coordinate $\varepsilon$ and show that this is continuous?
| FWIW, interestingly, the Israel junction conditions are born out of mathematical necessity to avoid ill-defined products$^1$ of distributions rather than actual physical considerations. See e.g. Refs. 1 & 2 for details.
References:
*
*Eric Poisson, A Relativist's Toolkit, 2004; Section 3.7.
*Eric Poisson, An Advanced course in GR; Section 3.7.
--
$^1$ We ignore Colombeau theory. See also this Phys.SE post.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/385924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
References for examples of $\ast$-algebra approach to QM and QFT In studying QFT on curved spacetime I've found the $\ast$-algebra approach as one viable approach to the subject on the paper Quantum Fields in Curved Spacetime by Wald.
The $\ast$-algebra approach seems like a quite nice and general approach to both QM and QFT, but it is quite abstract, so it seems hard at the beginning of studying this approach to see how it is used in practice and how, in the end, it is just a generalization of usual QM.
The point is that in QM books, like Cohen's, after presenting the postulates, examples are given to emphasize what is the physical meaning of everything and how one works with it, like spin 1/2 systems, the harmonic oscillator and so forth.
Although Wald shows some examples, I believe more examples and more details would be nice to get started.
What I'm looking for here are references showing examples of the $\ast$-algebra approach in practice for both QM and QFT. In other words: some simple examples showing how to connect the abstractness of the approach with the underlying physics and the usual approaches.
Any kind of reference is good: books, papers, lecture notes, video lectures, etc.
| Lecture notes exposing standard perturbative quantum field theory this way are on PF-Insights A first Idea of Quantum Field Theory. The star algebra perspective ("quantum probability theory") comes alive with the introduction of the free field vacuum state in section 4 of chapter 14. Free quantum fields.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/386186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How do acoustic enclosures for AC condensers work, without stopping airflow? How do acoustic enclosures for AC condensers like THESE work so they allow A) airflow to / from the condensers, B) but also reduce sound output from the condensers ?
Aren't (A) and (B) at odds with each other?
| No, they aren't, and here's why: an acoustic enclosure is a low-pass filter which allows free movement of extremely low frequencies (i.e., steady flow of air) into or out of the enclosure while blocking the escape of higher frequencies (motor and fan noise). This is the same job performed by the muffler on your car: it is designed so that a steady stream of exhaust gas can easily flow out of it, but the sharp impulses that contain lots of high-frequency content are blocked and internally dissipated inside the muffler.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/386301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Sounds in Space, vibration of virtual particles Sound can only travel through a transmission medium. My question: since space is not truly empty (more specifically, there are virtual particles in a vacuum), can sound be propagated through space? Further, could the speed of sound in this medium be an indicator of the density of virtual particles and therefore the vacuum energy?
| @AccidentalFourierTransform's link, which he references above, furnishes a mathematical description of how virtual particles enter the picture of particle-particle interactions. @shai horowitz, the important takeaway for you here is that virtual particles are in principle undetectable in any experiment, which means they cannot transmit sound impulses through space.
To transmit sonic waves through space requires that the space be populated with real particles at a density sufficient to support acoustic waves. These acoustic waves have actually been detected- see Caleb Scharf's recent book, Gravity's Engines, for an accessible and engaging description of them.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/386413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How many watts of electricity can the human body withstand without being killed? I'm talking about DC and/or AC. I've read about people surviving extremely high voltage shocks (300 kV), but that could be explained by extremely high resistance in the circuit that resulted in insufficient current to cause death.
| The main driver of the effect of electricity on the human body is current, not voltage or power (watts).
The interaction is complicated, so you can't easily apply a single number to "safe" or "deadly". Effects can be neurological, chemical, and thermal. See https://en.wikipedia.org/wiki/Electrical_injury
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/386582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is "Symmetry of Infinity" in electricity and magnetism? I have this problem from my E&M textbook:
Two infinitely long wires running parallel to the x axis carry uniform charge densities $+\lambda$ and $-\lambda$ (see photo). Find the potential at any point $(x,y,z)$, using the origin as your reference.
The solution to this uses a random point and solves the problem there:
It's stated that "due to the symmetry of infinity, we need only consider the z-y-plane. We plot an arbitrarily located point, without symmetry."
Once here I could do the math of this just fine, but I don't understand what "due to the symmetry of infinity" means. I tried to look it up online (including stack exchange) and all I could find were journals that were related to this. I could not access them, and even if I could I probably wouldn't understand what was going on anyway.
What is "the symmetry of infinity?" And how is it related to this problem?
| In essence this is a two-dimensional problem in a yz-plane, because you cannot reference an absolute x-position relative to a featureless (infinite) line of charge.
The electric field looks the same at every possible value of x and this is possibly where the term “symmetry of infinity” comes from.
physicspages.com is a good source of text book solutions including for this problem where the x-independence is explained in a different way.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/386670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Tension in the string of a pulley
In the diagram above, why is the tension of the string attached to the pulley at "A" (the string attached to the roof) equal to 2T?
Why is it not Mg+(M+m)g? (considering that the pulley is massless)
I have trouble understanding this.
| The way I always think of it is that as soon as an object is accelerating, it is "using up" some of the force for acceleration. In this case the heavier object that is falling is "using up" some of the $(m+M)g$ force, so it can not use that full force anymore to pull on the string.
For the smaller weight it is the other way around, since it is accelerating up the string needs to pull harder than $Mg$.
You can see it is not going to be the simple sum of the two weights.
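To put numbers on this, here is a small sketch (my own illustration; the masses $M$ and $M+m$ are taken from the question, the rest is the standard ideal-Atwood result with a massless pulley and string) showing that the force $2T$ on the support at A is strictly less than the naive sum of the weights whenever the system accelerates.

```python
def atwood_tension(m1, m2, g=9.81):
    """Ideal (massless pulley, massless string) Atwood machine:
    common acceleration and string tension for hanging masses m1, m2."""
    a = abs(m2 - m1) * g / (m1 + m2)     # common acceleration
    T = 2.0 * m1 * m2 * g / (m1 + m2)    # T = m1*(g + a) = m2*(g - a)
    return a, T

M, m, g = 1.0, 0.5, 9.81            # lighter side M, heavier side M + m
a, T = atwood_tension(M, M + m, g)
support = 2.0 * T                    # actual force on the hook at A
weight = (2.0 * M + m) * g           # the naive guess Mg + (M + m)g
print(support, weight)               # support < weight while accelerating
```

The two forces only coincide when $m = 0$, i.e. when nothing accelerates, which matches the verbal argument above.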
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/386777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If the Bohr model is outdated and we know that there is no such thing as an "electron orbital circumference", then how is $2\pi r=n\lambda$ still valid? We know that the Bohr model is outdated and we know that there is no such thing as an "electron orbital circumference", so how is $2\pi r=n\lambda$ still valid?
Edit :
If the electrons in higher orbitals are not moving in a circular path, then how do we write $2\pi r=n\lambda$?
| I must add some considerations.
*
*The fact that a model is outdated does not mean it must be discarded. Besides being really useful for understanding the development and the history of what we do, there are still some processes that can be explained with older models. Thomson's model can still be used to derive some models of matter-radiation interaction that work surprisingly well. Do not throw models in the bin; you shouldn't use quantum mechanics for the movement of a car, for example.
*Yes, Bohr's model is outdated, but if it once was a model, that's because it somehow worked. Remember that Bohr's model was only a model for hydrogen, not the rest.
*It was just a fortunate coincidence that, for a spinless particle (only orbital angular momentum) and for a Coulomb potential (electromagnetic force only), it happens that angular momenta can only be multiples of $\hbar$.
$$ 2\pi r=n\lambda \ \ \Rightarrow \ \ \frac{2\pi}{\lambda} r = n \ \Rightarrow \ kr=n \ \Rightarrow \ \hbar k r=n\hbar \ \Rightarrow \ rp=n\hbar \ \Rightarrow \ L=n\hbar$$
And this is why Bohr's model worked, although it is "incorrect" (for sure any theory we develop will always be inaccurate). Of course, this is just an approximation. Spin changes it all, plus there are some more things, like the nucleus, and so on. This is a very idealized model.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/386927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Euler's Equations for Isentropic Flow Derivation I'm reading a book "A Mathematical Introduction to Fluid Mechanics" by Alexandre J. Chorin, and I came across the derivation of Euler's equations for isentropic flow. Page 15, the author goes from
$$\frac{d}{dt}\int_{W_t} (\frac{1}{2} \rho ||\vec{u}^2|| + \rho \epsilon ) dV = -\int_{\partial W_t} p \vec{u}\cdot \vec{n} dA + \int_{W_t}\rho \vec{u}\cdot \vec{b}dV
$$
to
$$\frac{\partial \vec{u}}{\partial t} + (\vec{u} \cdot \nabla)\vec{u} = -\nabla \omega + \vec{b}$$
Now, this is supposed to be a compressible flow, so $\nabla \cdot \vec{u}$ is not necessarily equal to 0, and change in internal energy $\epsilon$ is not necessarily zero either.
The author writes
This follows from the balance of momentum using our earlier expressions for $(d/dt)E_{kinetic}$, the transport theorem, and $p = \rho^2 \frac{\partial \epsilon}{\partial \rho}$
This is what I believe to be the earlier expressions for the $(d/dt)E_{kinetic}$
$$d/dt E_{kinetic} = \frac{d}{dt}\int_{W_t} (\frac{1}{2} \rho ||\vec{u}^2||)dV = \int _{W_t} \rho ( \vec{u}\cdot(\frac{\partial \vec{u}}{\partial t} + (\vec{u} \cdot \nabla)\vec{u}))dV$$
When I tried to reach the result myself, I get stuck at:
$$
\frac{d}{dt}\int_{W_t} (\frac{1}{2} \rho ||\vec{u}^2|| + \rho \epsilon ) dV = -\int_{\partial W_t} p \vec{u}\cdot \vec{n} dA + \int_{W_t}\rho \vec{u}\cdot \vec{b}dV\\
\int_{W_t} (\rho(\vec{u}\cdot \frac{\partial \vec{u}}{\partial t} + \vec{u} \cdot ((\vec{u} \cdot \nabla)\vec{u})) + \rho \frac{D}{Dt}\epsilon ) dV = \int_{W_t} (- \nabla \cdot (p \vec{u}) + \rho\vec{u}\cdot \vec{b}) dV\\
\rho(\vec{u}\cdot \frac{\partial \vec{u}}{\partial t} + \vec{u} \cdot ((\vec{u} \cdot \nabla)\vec{u})) + \rho \frac{D}{Dt}\epsilon = - (\vec{u}\cdot(\nabla p) + p\nabla\cdot \vec{u}) + \rho\vec{u}\cdot \vec{b} \\
\rho\vec{u}\cdot(\frac{\partial \vec{u}}{\partial t} + (\vec{u} \cdot \nabla)\vec{u}) + \rho \frac{\partial \epsilon}{\partial t} + \rho \nabla \cdot (\epsilon \vec{u})= - \vec{u}\cdot(\rho \nabla \omega) - p\nabla\cdot \vec{u} + \rho\vec{u}\cdot \vec{b} \\
$$
which doesn't seem to be reducible any further. UNLESS I presume it's incompressible, that is; $(D/Dt) \epsilon = 0$ and $\nabla \cdot \vec{u} = 0$. When I do, I can then do:
$$\rho\vec{u}\cdot(\frac{\partial \vec{u}}{\partial t} + (\vec{u} \cdot \nabla)\vec{u}) + \rho \frac{\partial \epsilon}{\partial t} + \rho \nabla \cdot (\epsilon \vec{u})= - \vec{u}\cdot(\rho \nabla \omega) - p\nabla\cdot \vec{u} + \rho\vec{u}\cdot \vec{b} \\
\rho\vec{u}\cdot(\frac{\partial \vec{u}}{\partial t} + (\vec{u} \cdot \nabla)\vec{u})= - \vec{u}\cdot(\rho \nabla \omega) + \rho\vec{u}\cdot \vec{b} \\
\frac{\partial \vec{u}}{\partial t} + (\vec{u} \cdot \nabla)\vec{u} = -\nabla \omega + \vec{b}$$
which is exactly the answer the book claims. But this equation is supposed to describe (together with equation of conservation of mass and boundary condition for trapped volume $\vec{u}\cdot \vec{n} = 0$) compressible isentropic flow. How do I get there?
| Let's start from
$$
\frac{\mathrm{d}}{\mathrm{d}t} \int_{W_t}\left(\tfrac{1}{2}\rho\rvert\rvert\mathbf{u}\rvert\rvert^2+\rho\epsilon\right)\mathrm{d}V = -\int_{W_t}p\mathbf{u}\cdot\mathbf{n}\mathrm{d}A + \int_{W_t}\rho\mathbf{u}\cdot\mathbf{b}\mathrm{d}V,
$$
and use the transport theorem and divergence theorem to obtain
$$
\int_{W_t}\left(\rho\mathbf{u}\cdot\frac{D\mathbf{u}}{Dt} + \rho\frac{D\epsilon}{Dt}\right)\mathrm{d}V = \int_{W_t}\left(-\nabla\cdot(p\mathbf{u}) + \rho\mathbf{u}\cdot\mathbf{b}\right)\mathrm{d}V
$$
$$
\Longrightarrow\;\,\rho\mathbf{u}\cdot\frac{D\mathbf{u}}{Dt} + \rho\frac{D\epsilon}{Dt} = -\nabla\cdot(p\mathbf{u}) + \rho\mathbf{u}\cdot\mathbf{b}.
$$
Now, we divide through by $\rho$ and use $\nabla w = \nabla p/\rho$ to obtain
$$
\mathbf{u}\cdot\frac{D\mathbf{u}}{Dt} + \frac{\partial\epsilon}{\partial\rho}\frac{D\rho}{Dt} = -\mathbf{u}\cdot\nabla w - \frac{p}{\rho}\nabla\cdot\mathbf{u} + \mathbf{u}\cdot\mathbf{b}.
$$
Finally, using $D\rho/Dt=-\rho\nabla\cdot\mathbf{u}$ and $p/\rho = \rho\partial\epsilon/\partial\rho$, we find
$$
\mathbf{u}\cdot\frac{D\mathbf{u}}{Dt} - \frac{p}{\rho}\nabla\cdot\mathbf{u} = -\mathbf{u}\cdot\nabla w - \frac{p}{\rho}\nabla\cdot\mathbf{u} + \mathbf{u}\cdot\mathbf{b}
$$
$$
\Longrightarrow\;\, \frac{D\mathbf{u}}{Dt} = -\nabla w + \mathbf{b}.
$$
P.S. Note that $D\epsilon/Dt = \partial\epsilon/\partial t + \mathbf{u}\cdot\nabla\epsilon$, which is different from $\partial\epsilon/\partial t + \nabla(\epsilon\mathbf{u})$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/387069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is the speed of sound constant? I was looking at lightning, and started to wonder if the speed of the thunder slowed down as it lost energy traveling far distances. I know the amplitude of sound decreases, perceived as volume. I'm not certain, however, how to actually calculate the distance of a lightning strike based on the interval of time between observing the flash and hearing the thunder. Would this relation be linear (is the speed of sound constant?), or non-linear (does the speed of sound decrease over distance?)
If I were to determine this by comparing two audio recordings of the same lightning strikes' thunder, and seeing if the further one was lower in frequency, would that accurately indicate a deceleration of the sound?
| Strictly speaking, the thunder propagation velocity does decrease with distance: initially lightning generates a shock wave in air, whose propagation velocity is higher than the velocity of sound; however, such shock waves get weaker with distance and become ordinary sound waves at a distance of just about 10 m from the lightning (http://lightningsafety.com/nlsi_info/thunder2.html). For a nuclear blast, this effect of shock wave deceleration with distance is much more significant. See the relevant formulas at https://www.metabunk.org/attachments/blast-effect-calculation-1-pdf.2578/
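As a practical footnote on the question's distance calculation (my own sketch, not part of the answer above): once the shock has relaxed to an ordinary sound wave, roughly 10 m out, the speed is constant in distance, so the flash-to-thunder delay maps linearly to distance; the only real correction is the weak temperature dependence of the sound speed.

```python
def lightning_distance(delay_s, temp_c=20.0):
    """Distance to a lightning strike from the flash-to-thunder delay.
    Beyond ~10 m the wave travels at the ordinary sound speed, which
    depends (weakly) on air temperature but not on distance, so the
    relation is linear in the delay."""
    v = 331.3 + 0.606 * temp_c   # m/s, common linear approximation in T
    return v * delay_s

print(lightning_distance(3.0))   # ~1030 m: the familiar "3 s per km" rule
```

Doubling the delay exactly doubles the inferred distance, which is the "linear" behavior the question asks about.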
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/387328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Casimir Operators and the Poincare Group Following along in QFT (Kaku) he introduces the Casimir Operators (Momentum squared and Pauli-Lubanski) and claims that the eigenvalues of the operators characterize the irreducible representations of the Poincare Group. How exactly does this correspondence between eigenvalues and irreducible representations work?
| Maybe the best way to proceed is by analogy. For angular momentum the Casimir operator is
$$
L^2=\sum_i L_i L_i
$$
with eigenvalue $L(L+1)$. Thus, it is possible to recover the representation label $L$ from the eigenvalue of the Casimir. Since the Casimir is diagonal and proportional to the unit operator in the representation, one can use any state to recover the eigenvalue, i.e. the eigenvalue $L(L+1)$ can be obtained from the action of $L^2$ on any $\vert LM\rangle$ state.
More generally, there is a quadratic Casimir invariant given by
$$
C^{2}=f^i_{jk}f^j_{i\ell} X^k X^\ell
$$
where $f^i_{jk}$ are structure constants and $X^k$ the generators.
The number of Casimir operators is the same as the rank of the Lie algebra and thus of the Lie group. Hence, for SU(3) there are two Casimir operators. If we take $F_i=\frac{1}{2}\lambda_i$ with $\lambda_i$ a Gell-Mann matrix, then
$$
C^2=\sum_k F_kF_k\, , \qquad
C^3=\sum_{jk\ell} d_{jk\ell} F_jF_k F_\ell
$$
where the $d_{jk\ell}$ coefficients are defined by
$$
\{\lambda_j,\lambda_k\}=\frac{4}{3}\delta_{jk}+2d_{jk\ell} \lambda_\ell
$$
For any state in the irrep $(\lambda,\mu)$, the eigenvalues of $C^2$ and $C^3$ are, respectively,
$c^2=\frac{1}{3}(\lambda^2+\mu^2+3(\lambda+\mu)+\lambda\mu)$ and
$c^3=\frac{1}{18}(\lambda-\mu)(3+\lambda+2\mu)(3+\mu+2\lambda)$. Hence given these eigenvalues one can in principle recover the labels $\lambda$ and $\mu$.
It's the same idea for Poincaré. It has two Casimir operators and their eigenvalues will usually be some functions of the two irrep labels. Given these two eigenvalues one can then in principle recover the labels. The precise form of the eigenvalues depends on the definition (and normalization) of the Casimir, but once this is fixed it's a problem of two (polynomial) equations in two unknowns.
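A concrete numerical illustration of the angular-momentum analogy (my own sketch, in units $\hbar = 1$): build the generators in the spin-$l$ irrep and check that the Casimir $L^2$ is $l(l+1)$ times the identity, so the label $l$ is recoverable from the Casimir eigenvalue on any single state.

```python
import numpy as np

def casimir_L2(l):
    """L^2 = Lx^2 + Ly^2 + Lz^2 in the spin-l irrep (hbar = 1),
    built from the standard ladder-operator matrix elements
    <l,m+1| L_+ |l,m> = sqrt(l(l+1) - m(m+1))."""
    m = np.arange(l, -l - 1, -1.0)             # basis ordered m = l .. -l
    Lz = np.diag(m)
    Lp = np.diag(np.sqrt(l * (l + 1) - m[1:] * (m[1:] + 1)), k=1)
    Lm = Lp.T                                   # real entries, so T = dagger
    Lx, Ly = (Lp + Lm) / 2, (Lp - Lm) / 2j
    return Lx @ Lx + Ly @ Ly + Lz @ Lz

for l in (0.5, 1, 1.5, 2):
    L2 = casimir_L2(l)
    dim = int(2 * l + 1)
    # the Casimir is proportional to the identity on the whole irrep
    print(l, np.allclose(L2, l * (l + 1) * np.eye(dim)))
```

The Poincaré case works the same way in spirit: the eigenvalues of $P^2$ and $W^2$ (mass and spin Casimirs) are constant on an irrep and invert to give its labels.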
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/387481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Electromagnet would emit light? Light is an electromagnetic wave. When I create an electromagnet by passing electricity through wire wound around a core and keep changing the electric field, does it emit photons?
Is the frequency of the electromagnetic radiation equal to the frequency of the change in the electric field? If yes, can it emit visible light if the frequency is in that range?
| Your question seems to be partly about light and photons. Light, of course, consists of photons. But any electromagnetic wave consists of photons; it's just that some photons carry very little energy (at low frequency and long wavelength) and some carry a lot of energy (at high frequency and short wavelength). When the current through an electromagnet is changing, the electromagnetic field is changing, so waves - photons - are emitted; but the emitted photons are very low frequency and certainly not visible light photons.
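To put numbers on the answer's point (my own sketch): the energy per photon is $E = hf$, so photons radiated by a coil driven at power-line frequency carry roughly thirteen orders of magnitude less energy than visible-light photons.

```python
H_PLANCK = 6.62607015e-34    # Planck constant, J s (exact since 2019 SI)
EV = 1.602176634e-19         # joules per electronvolt (exact since 2019 SI)

def photon_energy_ev(freq_hz):
    """Energy E = h*f of a single photon at the given frequency, in eV."""
    return H_PLANCK * freq_hz / EV

print(photon_energy_ev(50.0))      # 50 Hz mains drive: ~2e-13 eV per photon
print(photon_energy_ev(5.5e14))    # green light: ~2.3 eV per photon
```

So an electromagnet driven at ordinary circuit frequencies does emit photons, but at those frequencies, never visible-light ones.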
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/387569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
A confusion about the naming of orbitals and the values of magnetic quantum number From physics courses, I'm told that (for the hydrogen atom only), for a given $\vec L$ of the electron,
$$|\vec L| \cos(\theta) = m_l \hbar,$$
where $\theta$ is the angle between the magnetic field and the orbital angular momentum $\vec L$.
Therefore, for example, if $\vec L$ lies in the $x$-$y$ plane, $L_z$ should be zero. However, when we see the values of $m_l$ for, say, the $p$ subshell, they are $1, 0, -1$, and generally the orbitals are named $p_x, p_y, p_z$, and this confuses me. I mean, if $\vec L$ is in the $z$ direction (i.e. in the direction of $\vec B$), then $m_l = \pm 1$, and if it is in the $x$-$y$ plane, then $m_l = 0$, but the naming implies that the values of $m_l$ are matched one-to-one to the names $p_x, p_y, p_z$. So am I confusing anything here, or is the problem just a silly naming of the orbitals?
| The $Y_{\ell m}$ are the spherical harmonics, with $Y_{1, -1}$ and $Y_{1, 1}$ rotating in opposite directions, eigenfunctions in spherical symmetry. Below is a figure of one of these hydrogen $2p$ orbitals, with complex phase coded as color. In an animation (when multiplied with the time dependent complex phase), the colors would circulate in one direction. For the other $Y_{\ell m}$ it would go in the other direction.
Now if these two spherical harmonics are added, this results in a standing wave, with static nodes and with static lobes of electron density, for example in the $x$ direction. Or when one subtracts them (adding with the opposite phase) an orbital with static lobes in the $y$ direction. The two linear combinations are appropriate to use in molecules where the spherical symmetry is broken.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/387701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why exactly do atomic bombs explode? In atomic bombs, nuclear reactions provide the energy of the explosion. In every reaction, a thermal neutron reaches a plutonium or a uranium nucleus, a fission reaction takes place, and two or three neutrons and $\gamma$ radiation are produced. I know that it happens in a very short time, and an extreme amount of energy is released which can be calculated from the mass difference between $m_\mathrm{starting}$ and $m_\mathrm{reaction\ products}$.
So my question is: Why exactly does it explode? What causes the shockwave and why is it so powerful? (Here I mean the pure shockwave which is not reflected from a surface yet) I understand the reactions which are taking place in nuclear bombs but I don't understand why exactly it leads to a powerful explosion instead of just a burst of ionising radiation.
|
I don't understand why exactly it leads to a powerful explosion instead of just a burst of ionising radiation.
This radiation, representing most of the initial energy output by a nuclear weapon, is swiftly absorbed by the surrounding matter. The latter in turn heats almost instantly to extremely high temperature, so you have the almost instantaneous creation of a ball of extremely high kinetic energy plasma. This in turn means a prodigious rise in pressure, and it is this pressure that gives rise the blast wave.
The same argument applies to the neutrons and other fission fragments / fusion products immediately produced by the reaction. But it is the initial burst of radiation that overwhelmingly creates the fireball in an atmospheric detonation, and the fireball that expands to produce most of the blast wave.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/388164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39",
"answer_count": 4,
"answer_id": 0
} |
Projectile motion explanation I’m studying projectiles at the moment and I am sure this is a very simple question, but can someone explain if I have a light object (a tennis ball) and a heavier object (a similar size solid steel ball) and launch them at the same initial velocity and the same angle, will the range be the same? Any why? (Neglecting air resistance etc)
| It seems you haven't yet learnt about forces. No need to worry!
In projectile kinematics the motion of a particle is based on certain parameters. And acceleration is one of them. In vertical projectiles the gravitational pull by the earth $mg$ acts on the body.
'Favourite Man' Newton now comes onto the scene and states in his second law that
The acceleration of an object as produced by a net force is directly proportional to the magnitude of the net force, in the same direction as the net force, and inversely proportional to the mass of the object (considering mass is constant).
So we get
$ \vec {a}=\frac {\vec {F}}{m} $
In your case we will have $F=mg$. This leaves us with the value of the acceleration as $g$ irrespective of the mass (since the masses cancel out). So both the lighter and the heavier particle will have the same range.
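A quick check (my own sketch, using the standard ideal-projectile range formula with no air resistance): the mass cancels before it ever enters the trajectory, so the range depends only on launch speed and angle.

```python
import math

def projectile_range(v0, angle_deg, g=9.81):
    """Range R = v0^2 * sin(2*theta) / g of an ideal projectile.
    Note there is no mass argument: a = F/m = mg/m = g for any body."""
    theta = math.radians(angle_deg)
    return v0 * v0 * math.sin(2.0 * theta) / g

# tennis ball vs steel ball: same v0 and angle -> identical range
r = projectile_range(20.0, 30.0)
print(r)   # ~35.3 m, whatever the mass
```

With air resistance included the masses would no longer cancel, which is why the hedge "neglecting air resistance" matters.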
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/388458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Isothermal compression without a heat reservoir I have devised a method to isothermally compress a gas without the use of a heat reservoir.
Consider a container of gas. To compress the gas normally, one would simply move one of the walls of the container inwards, which will do work on the gas when the gas particles collide with the moving wall, increasing its temperature.
However, consider this. Whenever I move the side of the container, I do it when none of the particles are touching that wall, then I move it to right next to the nearest particle. Thus, none of the particles collide when the wall is moving. I can continue doing this until I achieve the volume I want to compress to. This doesn't violate the ideal gas law as the pressure still increases due to increased frequency of collision, but the temperature of the gas should remain constant because there is no work done on the gas! Thus, I have achieved an isothermal compression of the gas without the use of a heat reservoir.
Is this method valid? What are the implications? If it's invalid, why?
| Assuming there is only one particle in the container and you can wisely move the piston without colliding with the particle, you then claim that there is no work done.
But don't miss the other side. Macroscopically, with the space reduced, the frequency with which the particle collides with the piston increases. There is more pressure, or force, pushing the piston back. So you need to increase the external force in order to maintain the piston's position. Therefore, in your next maneuver, there is a force (the external force) applied to the piston, so the work is not zero.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/388552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Question about Charge and Gauge Transformation Does gauge invariance imply charge neutrality? I understand that all physical observables must be gauge invariant. Does this mean that physical observables must be neutral?
If a quark is in red, a gauge transformation can transform it into blue. But gauge transformation cannot change any observable. Thus, colour of the quarks cannot be an observable.
Is the electric charge of an electron an observable in QED? Is it correct that all observables in QED must be neutral? Are magnetic monopoles observables?
| You are right of course! Physical observables must be gauge invariant. But this does not mean that they must be neutral. They could be charged under the global symmetry and be neutral under the local gauge symmetry.
In particular, a local gauge symmetry is generated by a function $\alpha(x)$ where $\alpha(x) \to 0$ as $|x| \to \infty$. A global symmetry of course has $\alpha(x) = $ constant which does not satisfy the above property. One way to have a charged gauge invariant operator is to connect it to a Wilson line that joins the operator to a point at infinity.
To add a bit more detail, a Wilson line $W_{{\cal P},q}(x_1,x_2)$ is a line operator (defined along a path ${\cal P}$) that under a gauge symmetry transforms as (assuming abelian symmetry for simplicity)
$$
W_{{\cal P},q}(x_1,x_2) \to e^{- i q \alpha(x_1) } W_{{\cal P},q}(x_1,x_2) e^{ i q \alpha(x_2) } .
$$
A charged local operator transforms under gauge symmetry as
$$
{\cal O}(x) \to e^{ - i q \alpha(x) } {\cal O}(x) .
$$
where $q$ is the charge of the state. We now construct the operator
$$
{\tilde {\cal O}}(x) = W_{{\cal P},q}(\infty,x){\cal O}(x)
$$
This transforms as
$$
{\tilde {\cal O}}(x) \to e^{ - i q \alpha(\infty)} {\tilde {\cal O}}(x) .
$$
Then, ${\tilde {\cal O}}(x)$ is invariant under local gauge transformations but not invariant under global symmetry transformations.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/388693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Why not quarter-life? The number of nuclei left after time $t$ in radioactive decay is given by:
$$N(t) = N_0 e^{-t/ \tau}$$
Now if we put $N(t)$ as $\dfrac{N_0}2$, we get half-life. But, if we had put $\dfrac{N_0}4$, we would have quarter-life, which is also independent of $N_0$.
Is there anything special about half-life as opposed to quarter-life?
| Radioactive decay is an example of an exponential random process. Two key statistics for any exponential random process are the median and the mean. (The standard deviation is equal to the mean for an exponential random process.)
The half life $t_{1/2}$ is the median. The time constant $\tau$ is the mean. There's nothing particularly meaningful about the quarter life.
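As a quick numerical sanity check (a numpy sketch with an illustrative time constant), the median of simulated exponential decay times recovers $t_{1/2}=\tau\ln 2$ while the mean recovers $\tau$:

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 2.0                                  # mean lifetime (illustrative)
t = rng.exponential(tau, 1_000_000)        # simulated decay times

half_life = np.median(t)                   # median of the distribution
mean_life = np.mean(t)                     # mean of the distribution

print(half_life, tau * np.log(2))          # median -> t_1/2 = tau ln 2
print(mean_life, tau)                      # mean   -> time constant tau
```

So the half-life is singled out as the median decay time, while the time constant is the mean; a "quarter-life" corresponds to no standard statistic.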
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/388793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Does a heat stone heated in microwave oven emits microwave? I have a heat stone which functions just as heating pads.
One concern is that it is heated in a microwave oven.
Sometimes I give it to my kids.
One day a thought came to my mind that what kind of wave does this stone emit after being heated in a MWO?
If it emits waves of 2.45 GHz or something because it was heated in a MWO, is it still safe for human cells or organs?
(of course the guide book says it emits infrared waves after being heated...how could I check it?)
| Objects which are heated in microwave ovens do not subsequently emit microwaves.
The only thing a microwave oven does is make the molecules of a substance jiggle faster, which makes the substance hotter. It is particularly good at heating water molecules because they have a large permanent electric dipole moment - so foods with high moisture content are heated more efficiently than foods which are dry.
We cannot see infrared radiation, but we can perceive it as radiated heat. It's how things are cooked in a broiler. You yourself are currently emitting infrared radiation - it's how pit vipers see their prey in the dark.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/389128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How does a receiving antenna work given that the electric field is 0 in conductors? The question of how a receiving antenna works has been asked on this site before, such as here How does a receiving antenna get an induced electric current? and here How does a receiving antenna work?. I understand the basic principle that the external EM field from the transmitting antenna causes electrons to move in the receiving antenna, creating a current.
My question though is in the title. How can an external electric field, as in the form of a radio or other EM wave, induce a current in a wire when the $E$ field is always zero in there? In terms of physical laws and math how can I calculate the current as a function of time if I know the external fields as a function of time?
I wanted to try to calculate the potential difference between two points of a wire from the external field using
$$\varepsilon=\int_{\text{start point}}^{\text{end point}}\mathbf{E}\cdot\text{d}\mathbf{l}$$
but that assumes that the electric field in the wire is as it would be if the wire weren't there and the waves were propagating through vacuum...
| There are multiple ways in which this is not a contradiction.
*
*By “inside a conductor”, we are referring not to the conductor as a whole but rather the interior volume as opposed to the surface. If the wire segment has a net charge, that charge will be found on the surface.
*The reason the field is said to be zero is that charges move as needed to bring it back to zero. Motion of charges is current. Current is what we hope to get from an antenna.
(The field is exactly zero only in the electrostatic, equilibrium case, where the electrons have all settled down and stopped moving.)
A source for both claims
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/389214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Hubble Parameter as a function of the scale factor in Lambda CDM Model Basically I am trying to plot $H/H_0$ versus $a(t)$ for the Lambda CDM Model. In a paper I am referring to $H/H_0$ decreases with increasing $a(t)$ until a point ($a[t]\sim 0.7$) and then it starts increasing again until today ($a[t]=1$).
When I try doing the same plot using Wikipedia's page on the Lambda CDM Model:
https://en.wikipedia.org/wiki/Lambda-CDM_model
(using the expression for $H[a]$ as a function of $a$ in the minimal 6 parameter model); all I get is a decreasing function of time, there is no point where the function $H/H_0$ begins to increase again.
What am I doing wrong? Any help or link to such a graph would be greatly appreciated. Thank you!
| Below I plot the quantities $H$, $\mathcal{H} = aH$ and $q$
$$
q = -\frac{\ddot{a}a}{\dot{a}^2}
$$
$q$ is known as the deceleration parameter and gives you information about the concavity of $a$: the acceleration
Note that around $z \approx 0.7$ the sign of $q$ changes, meaning that at this redshift the universe starts to accelerate
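One likely resolution of the original confusion (this is an assumption about which quantity the paper actually plotted): $H/H_0$ itself decreases monotonically in $\Lambda$CDM, but the conformal rate $\mathcal{H}=aH$ reaches a minimum near $a\approx(\Omega_m/2\Omega_\Lambda)^{1/3}\approx 0.6$ and then grows. A minimal numpy sketch with illustrative parameters $\Omega_m=0.3$, $\Omega_\Lambda=0.7$:

```python
import numpy as np

Om, OL = 0.3, 0.7                 # flat LambdaCDM (illustrative values)
a = np.linspace(0.05, 1.0, 500)

H = np.sqrt(Om * a ** -3 + OL)    # H/H0: decreases monotonically with a
cH = a * H                        # conformal rate aH/H0: has a minimum

i_min = np.argmin(cH)
print(np.all(np.diff(H) < 0))     # True: H/H0 never turns around
print(a[i_min])                   # ~ (Om / (2*OL))**(1/3) ~ 0.60
```

The turnaround of $aH$ is the same statement as the sign change of the deceleration parameter $q$ above.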
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/389334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the difference between positive/negative potential and positive/negative work done? If work is done along the direction of the force, the work is regarded as positive, and if work is done in a direction opposite to the force it is regarded as negative. Whereas in electrostatics, if a positive charge is brought near a positive charge (which produces an opposing force) the work (electric potential) is regarded as positive, and if the same positive charge is brought near a negative charge then it is regarded as negative work (potential). Just the opposite.
|
Whereas in electrostatics, if a positive charge is brought near a positive charge(which produces an opposing force) the work(electric potential) is regarded as positive.
Here, the work done by the external force (which is you pushing the charge) is positive, because you displace the charge along the direction in which you apply force. On the contrary, the work done by the electrostatic repulsive force is negative ...
and if the same positive charge is brought near a negative charge then it is regarded as negative work (potential)
Here, the force is attractive. When an external force (you) is included, it acts opposite to the electrostatic force. So the positive charge eventually displaces in the direction of the negative charge, while you pull back against it ... You need not push it towards the negative source, as the force here is already attractive ...
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/389660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What is the difference between $T^\mu{}_\nu$ and $T_\nu{}^\mu$? I do understand why the horizontal order matters for indices on the same vertical position, e.g.:
$$T\left(V_{(1)},V_{(2)}\right) = T_\color{red}{\mu\nu}V^\mu_{(1)}V^\nu_{(2)} \neq T_\color{red}{\nu\mu}V^\mu_{(1)}V^\nu_{(2)} = T\left(V_{(2)},V_{(1)}\right)$$
But I don't understand why $T^\mu{}_\nu \neq T_\nu{}^\mu$ in general. The way I see it, both are linear maps from a vector and a dual vector to $\mathbb{R}$. The horizontal order of the indices shouldn't matter because the vertical position already specifies whether it refers to the vector index or the dual vector index:
$$T(\omega,V) = T^\color{red}\mu{}_\color{red}\nu \omega_\mu V^\nu = T_\color{red}\nu{}^\color{red}\mu \omega_\mu V^\nu = T(\omega,V)$$
| The difference is $(T^{\mu\rho}-T^{\rho\mu})g_{\rho\nu}$.
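A small numerical illustration of this one-line answer (a numpy sketch using the Minkowski metric and a generic non-symmetric $T^{\mu\nu}$):

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric (illustrative choice)
rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4))      # generic T^{mu nu}, not symmetric

Tud = T @ g          # T^mu_nu = T^{mu rho} g_{rho nu}, stored as [mu, nu]
Tdu = (g @ T).T      # T_nu^mu = g_{nu rho} T^{rho mu}, re-stored as [mu, nu]

diff = (T - T.T) @ g                 # (T^{mu rho} - T^{rho mu}) g_{rho nu}
print(np.allclose(Tud - Tdu, diff))  # True: that difference is exactly the gap
print(np.allclose(Tud, Tdu))         # False unless T^{mu nu} is symmetric
```

So horizontal index order matters precisely because raising/lowering with the metric remembers which slot was contracted.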
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/389775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Why is the singlet state for two spin 1/2 particles anti-symmetric? For two spin 1/2 particles I understand that the triplet states ($S = 1$) are:
$\newcommand\ket[1]{\left|{#1}\right>}
\newcommand\up\uparrow
\newcommand\dn\downarrow
\newcommand\lf\leftarrow
\newcommand\rt\rightarrow
$
\begin{align}
\ket{1,1} &= \ket{\up\up}
\\
\ket{1,0} &= \frac{\ket{\up\dn} + \ket{\dn\up}}{\sqrt2}
\\
\ket{1,-1} &= \ket{\dn\dn}
\end{align}
And that the singlet state ($S = 0$) is:
$$
\ket{0,0} = \frac{\ket{\up\dn} - \ket{\dn\up}}{\sqrt2}
$$
What I'm not too sure about is why the singlet state cannot be $\ket{0,0}=(\ket{↑↓} + \ket{↓↑})/\sqrt2$ while one of the triplet states can then be $(\ket{↑↓} - \ket{↓↑})/\sqrt2$. I know they must be orthogonal, but why are they defined the way they are?
| According to your last question, the singlet state
$\newcommand\ket[1]{\left|{#1}\right>}
\newcommand\up\uparrow
\newcommand\dn\downarrow
\newcommand\lf\leftarrow
\newcommand\rt\rightarrow
$
$
\ket{0,0} = \frac{\ket{\up \dn} + \ket{\dn\up}}{\sqrt2}
$
cannot be valid, while one of the triplet states (assume it the $\ket{1,0}$) could be written as
$
\ket{1,0} = \frac{\ket{\up\dn} + \ket{\dn\up}}{\sqrt2}
$
as it can be shown below.
Defining the spin-exchange operator as
$P\mid \chi_{\uparrow\downarrow} \rangle = \mid\chi_{\downarrow\uparrow} \rangle , P\mid \chi_{\downarrow\uparrow} \rangle =\mid \chi_{\uparrow\downarrow} \rangle $
which implies
$P\mid \chi_{\text{sym.}} \rangle = \mid \chi_{\text{sym.}} \rangle , P \mid \chi_{\text{asym.}} \rangle = -\mid \chi_{\text{asym.}} \rangle$
The above singlet state becomes,
$
P \ket{0,0} = \frac{\ket{\dn\up} + \ket{\up \dn} }{\sqrt2} \neq -\ket{0,0}.
$
whereas, for the second triplet state we write,
$
P \ket{1,0} = \frac{ \ket{\dn\up} + \ket{\up\dn} }{\sqrt2} = \ket{1,0}.
$
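The exchange-operator argument above can be checked directly with a few lines of numpy (a sketch; the basis ordering $|ab\rangle \mapsto$ index $2a+b$ is a convention chosen here):

```python
import numpy as np

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])

singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
triplet0 = (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2)

# Spin-exchange (swap) operator P|ab> = |ba> on C^2 (x) C^2,
# with basis index |ab> -> 2a + b
P = np.zeros((4, 4))
for a in range(2):
    for b in range(2):
        P[2 * b + a, 2 * a + b] = 1.0

print(np.allclose(P @ singlet, -singlet))    # True: antisymmetric
print(np.allclose(P @ triplet0, triplet0))   # True: symmetric
```

Only the minus combination picks up a sign under exchange, so it is the unique antisymmetric (singlet) state.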
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/389946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 3
} |
Is there an RMS value for power delivered to an inductor? My textbook defines power delivered to an inductor as:
$$P= V_{L\rm\ peak}I_{\rm peak} \cos ( \omega t) \sin( \omega t)$$
where $\omega$ is angular frequency.
but makes no mention of $P_{RMS}$. It simply says that $P_{av}$ is zero (which makes sense since it's defined as the product of two circular functions).
However, when we covered power, current, and voltage delivered to a resistor in an AC circuit, we used RMS values for current and voltage, and an average value for power. This made sense since power delivered to a resistor is a function of a squared sinusoidal function, so average was adequate.
In this section (inductors in AC circuits), only instantaneous power was discussed. This seemed odd to me. In previous sections the book discussed how taking an average of a sinusoidal function just returns zero, which is why we use RMS values instead. That makes perfect sense, so why not apply that approach here? Do we not care about RMS power? if so, why not?
They did say that the average power is given by $I_{rms}^2\,r$ where $r$ is internal resistance, assuming internal resistance is substantial. I'm curious about cases where internal resistance is negligible.
| With circuit components, we're typically interested in the amount of energy that's dissipated over time. If we have an average power $\langle P\rangle$, then the energy dissipated over time $t$ is simply $\langle P\rangle t$. RMS power is useless here: there's no way to go directly from RMS power to figure out how much energy is dissipated.
RMS voltage and current are useful because the average power dissipated in a resistive component is given by: $$ \langle P\rangle={V_{RMS}^2\over R}=I_{RMS}^2R$$
Inductive components have $\langle P\rangle=0$, and so the energy dissipated over any time interval is zero. This doesn't mean that the average is "insufficient," but instead that inductive components don't dissipate energy. The power put into a perfect inductor is stored in its magnetic field, and can be recovered without losses.
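A quick numerical illustration (a numpy sketch with illustrative peak values): the instantaneous power $V_p I_p\cos\omega t\sin\omega t$ averages to zero over a cycle, even though its RMS is nonzero, which is exactly why the RMS of power tells you nothing about dissipated energy:

```python
import numpy as np

w = 2 * np.pi * 50.0                 # angular frequency (50 Hz, illustrative)
Vp, Ip = 10.0, 2.0                   # peak voltage and current (illustrative)
t = np.linspace(0.0, 2 * np.pi / w, 100_001)  # one full period, inclusive

P = Vp * Ip * np.cos(w * t) * np.sin(w * t)   # instantaneous power

avg = P[:-1].mean()                  # drop duplicate endpoint -> exact period
rms = np.sqrt((P[:-1] ** 2).mean())

print(avg)   # ~ 0: no net energy flows into the inductor per cycle
print(rms)   # = Vp*Ip/(2*sqrt(2)): nonzero, but says nothing about dissipation
```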
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/390171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Potential Difference due to a infinite line of charge When a line of charge has a charge density $\lambda$, we know that the electric field points perpendicular to the vector pointing along the line of charge.
When calculating the difference in electric potential due with the following equations.
$$\nabla V=-\vec{E}$$
Therefore
$$\Delta V = -\int_{\vec{r_o}}^\vec{r_f}E\cdot \vec{dr}$$
knowing that
$$\vec{E} = \frac{\lambda}{2\pi\epsilon_or}\hat{r}$$
and that
$$\left\lVert\vec{r_f}\right\lVert < \left\lVert\vec{r_o}\right\lVert $$
Carrying out the integration (Hopefully correctly) I got
$$\Delta V = \frac{\lambda}{2\pi \epsilon_o} \ln(\frac{r_f}{r_o})$$
What confuses me is that the $\ln()$ is negative. I assume that the value should be positive since we move closer towards the line of charge should give us a positive change in electric potential. My best guess for my problem is that I missed a negative somewhere, but looking at online solutions they've got the same answer that I got.
| To elaborate a bit on Bill's comment, you might consider a curve defined as follows in some cylindrical $(r,\theta,z)$ coordinate system:
$$\gamma(t) = \big(r(t),\theta(t),z(t)\big) = (t, 0, 0)$$
$$ t \in [r_0,r_f]$$
The tangent vector to this curve is
$$\frac{d\vec r}{dt} = \hat r $$
so
$$\Delta V = -\int_\gamma \vec E \cdot d\vec r = -\int_{r_0}^{r_f} \vec E \cdot \frac{d\vec r}{dt} dt = -\frac{\lambda}{2\pi\epsilon_0}\int_{r_0}^{r_f} \frac{dt}{t} = -\frac{\lambda}{2\pi\epsilon_0}\ln\left(\frac{r_f}{r_0}\right) $$
$$ =\frac{\lambda}{2\pi\epsilon_0}\ln\left(\frac{r_0}{r_f}\right) $$
Whenever things like this happen, I find it useful to introduce an explicit, unambiguous parameterization of my curve, which usually resolves the issue.
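The sign bookkeeping can also be verified numerically (a sketch with an illustrative value of $\lambda$; the path parameter runs from $r_0$ down to $r_f$, so $d\vec r$ points inward):

```python
import numpy as np

lam = 1e-9                        # line charge density in C/m (illustrative)
eps0 = 8.854e-12
kE = lam / (2 * np.pi * eps0)     # field prefactor: E = kE / r

r0, rf = 2.0, 1.0                 # moving INWARD, so rf < r0
r = np.linspace(r0, rf, 100_001)  # path parameter runs from r0 down to rf
E = kE / r

# Delta V = -int E . dr along the path (trapezoid rule; dr is negative here)
dV = -np.sum(0.5 * (E[1:] + E[:-1]) * np.diff(r))
print(dV)                         # positive: potential rises moving inward
print(kE * np.log(r0 / rf))       # matches the analytic result
```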
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/390345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Can electric field be negative? According to the equation ,
$$E = kQ/r^2$$
If the source charge is negative, the electric field produced by the charge must also be negative. My teacher said the electric field can never be negative; it'll either be positive or zero. Online sources pointed out that since the electric field is a vector, when doing calculations we only report the magnitude.
Another doubt is with the electrostatic force: why is it always positive? According to Coulomb's law, the force is directly proportional to the modulus of the product of the charges. Can't it be negative, like attractive and repulsive forces?
| Try to ask yourself the question: what does it mean that anything is "negative"? The term "negative" has no physical meaning in itself before we define it to mean something.
*
*How does a negative number (scalar) make physical sense? What does $-2\;\mathrm{kg}$ or $-10\;\mathrm{apples}$ mean? We can choose to understand it as the loss of an amount when it fits the context.
*How does a negative arrow (vector) make physical sense? What does
$-\vec F$ or $-\vec v$ or $-\vec E$ mean? We can choose to define it as the opposite of the vector, meaning the same vector in the opposite direction.
And so, a negative vector - or more precisely: the negative of a vector - has been defined to mean: The same vector in the opposite direction.
Now that we have a chosen definition, we can use any vector quantity with signs. Forces, velocities and also fields, including electric fields, are represented by vectors. A negative electric field just means: a field pointing/pushing opposite to what a positive field would do.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/390461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 8,
"answer_id": 1
} |
Why does oil float on water? This might be a silly question but I want to know why oil actually floats on water. I tried to explain it to myself using Archimedes' principle but that didn't help.
Archimedes’ principle, physical law of buoyancy, states that any
body completely or partially submerged in a fluid (gas or liquid) at
rest is acted upon by an upward, or buoyant, force the magnitude of
which is equal to the weight of the fluid displaced by the body.
I don't get how Archimedes' law is valid in oil-water case, because oil and water don't even mix so there's no displacement of water hence no byouant force is exerted. So what keeps substances like oil which are less dense than water floating atop it?
| Water is heavier than oil for the same unit volume due to its higher density. Due to its larger mass, it settles at the lowest level to have the smallest potential energy, and it is able to do so because water is a fluid. So the water body is positioned below the oil body.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/390547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
How to code Tensor Networks? I'm interested in learning tensor networks, I've been reading some introductory articles about this. The problem is that these articles mostly discuss the theoretical definitions for tensor networks such as MPS, PEPS, etc.. The problem is that discussions regarding how to program these for obtaining ground states in condensed matter physics are rather concise. Even for a simple wave function I'm lost on how to calculate the SVD (the index juggling confuses me a bit). I'm also interested in learning MERA and TEBD, but again, the discussions are mainly theoretical.
Are there any articles or maybe blogs discussing implementations of Tensor networks in code? maybe in python so these codes are easily accesible?
What would be a good way to learn how to program these algorithms?
| An online platform where you can learn about tensor networks, their definitions, index juggling, Python/Matlab/Julia codes describing MERA, TRG, TNR, Exact Diagonalization is -- https://www.tensors.net/.
A very useful routine (which the above website uses) for handling and contracting indices is known as "NCON", mentioned in https://arxiv.org/abs/1402.0939
You must be patient and follow the missing links in your understanding using review articles such as -- https://arxiv.org/abs/1306.2164
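As a concrete starting point for the "index juggling", here is a minimal numpy sketch (not from the references above) of the basic MPS move: group indices, SVD, and split off one site; iterating this left to right yields the full MPS:

```python
import numpy as np

# Random 4-site spin-1/2 wavefunction psi[s1, s2, s3, s4] (illustrative)
rng = np.random.default_rng(0)
psi = rng.standard_normal((2, 2, 2, 2))
psi /= np.linalg.norm(psi)

# Split site 1 from sites 2-4: group the right indices, then SVD
M = psi.reshape(2, 8)                       # rows: s1, cols: (s2, s3, s4)
U, S, Vh = np.linalg.svd(M, full_matrices=False)

A1 = U                                      # first MPS tensor, shape (2, chi)
rest = (np.diag(S) @ Vh).reshape(2, 2, 2, 2)  # bond index, then s2, s3, s4

# Contract back along the bond index and check psi is recovered exactly
psi_back = np.einsum('sa,atuv->stuv', A1, rest)
print(np.allclose(psi_back, psi))           # True
```

Truncating small singular values in `S` at each split is what turns this exact rewriting into a compressed MPS.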
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/390646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Sound an amplifier makes when you plug / unplug a cable When you plug, unplug or even touch a jack cable of an amplifying system with speakers, one can hear a low-pitched sound of roughly the same frequency every time, which does not seem to depend on the device (Hi-Fi chain, guitar amplifier...)
*
*How is this sound produced ?
*Is there a particular reason the frequency of the signal is always the same ?
| Your body acts as an antenna, and depending on your location your surroundings are covered by 50 or 60 Hz EM waves. That could be what you are hearing. There are also other signals, but either they are too weak or outside the hearing spectrum. You can also confirm this by touching an oscilloscope probe.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/391348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How come the following equation produce a straight line? Time period for bar pendulum= T;
$$T=2\pi\sqrt\frac{\frac{k^{2}}{l}+l}{g}$$
where,
l=distance of center of gravity(C.G.) from point of suspension
k=radius of gyration about an axis passing through the CG of the body
upon solving,
$$lT^{2}=\frac{4\pi^{2}}{g}l^{2}+\frac{4\pi^{2}}{g}k^{2}$$
and this equation produces the graph of a straight line. But it was supposed to be quadratic, I guess. I checked whether it is a homogeneous equation of second degree, but it did not pass the test.
| Check the x-axis variable - it's not linear, instead it's $T^2$ and y-axis reflects this by its notation $l(T^2)$. The x-axis variable is chosen like this in order to make the dependence linear, so the coefficient of $l$ can be solved by linear fit.
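A short numpy sketch (with an illustrative $k$ and a range of $l$) showing that plotting $lT^2$ against $l^2$ gives an exact straight line whose slope and intercept recover $4\pi^2/g$ and $4\pi^2k^2/g$:

```python
import numpy as np

g, k = 9.81, 0.12                 # gravity; radius of gyration (illustrative)
l = np.linspace(0.05, 0.5, 50)    # distances of C.G. from suspension point
T = 2 * np.pi * np.sqrt((k ** 2 / l + l) / g)

x, y = l ** 2, l * T ** 2         # the variables that linearize the relation
slope, intercept = np.polyfit(x, y, 1)

print(slope, 4 * np.pi ** 2 / g)              # both ~ 4.024
print(intercept, 4 * np.pi ** 2 * k ** 2 / g) # recover 4 pi^2 k^2 / g
```

The relation is quadratic in $l$, but linear in the chosen pair of variables $(l^2,\ lT^2)$, which is the whole point of the plot.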
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/391480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Notation of Maxwell relations The Maxwell relations are often given as for example
$$\left(\frac{\partial T}{\partial V}\right)_S = -\left(\frac{\partial P}{\partial S}\right)_V.$$
What does the $S$ and the $V$ in the index of the parantheses mean? I guess that $S$ and $V$ should stay constant for the derivation, but is this not already in the definition of the partial derivative?
| Your system has two degrees of freedom. So any of your quantities $V$, $E$, $P$, $T$, $S$ can be viewed as a function of any two of the others. The expression
$$\left(\frac{\partial T}{\partial V}\right)_S$$
means "the derivative of $T$ with respect to $V$ when viewing it as a function of $V$ and $S$ (i.e. $T(V,S)$)". Likewise
$$\left(\frac{\partial P}{\partial S}\right)_V$$
is the derivative of $P(S,V)$ with respect to $S$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/391732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
In 1D wave mechanics, is there a counterexample to the relation $m \frac{d \langle x\rangle}{dt} = \langle p \rangle$? The standard physicists' proof of the identity $m \frac{d\langle x\rangle}{dt} = \langle p \rangle$ involves integration by parts. For example, in Griffiths's "Introduction to Quantum Mechanics", the derivation goes as follows:
\begin{equation}
\begin{split}
m\frac{d\langle x \rangle}{dt} &= m\int x \frac{\partial|\psi|^2}{\partial t} dx\\ &= \frac{i\hbar}{2}\int x\frac{\partial}{\partial x}\left(\psi^\ast\frac{\partial\psi}{\partial x}-\frac{\partial\psi^\ast}{\partial x}\psi\right) dx\\
&= -\frac{i\hbar}{2}\int \left(\psi^\ast\frac{\partial\psi}{\partial x}-\frac{\partial\psi^\ast}{\partial x}\psi\right) dx\\
&= -i\hbar \int \psi^\ast\frac{\partial\psi}{\partial x} dx\\
& = \langle p \rangle,
\end{split}
\tag{1}
\label{a}
\end{equation}
Here, (among other things) one should integrate by parts to obtain the third line, where the associated boundary term is assumed to vanish, i.e.,
\begin{equation}
x\left(\psi^\ast\frac{\partial\psi}{\partial x}-\frac{\partial\psi^\ast}{\partial x}\psi\right) \Bigg|_{x=-\infty}^{\infty} = 0.
\tag{2}
\label{b}
\end{equation}
But is it really OK to make such an assumption? In fact, for the normalizable wave function
\begin{equation}
\psi_1(x) = \frac{e^{ix^4}}{x^2 + 1},
\tag{3}
\end{equation}
the boundary term [Eq. $(\ref{b})$] does not vanish, making the whole derivation in Eq. $(\ref{a})$ invalid. Still, it is easy to see that $\langle p \rangle$ itself is ill-defined for the above wave function (i.e., the integral $\langle\psi_1|p|\psi_1\rangle$ is not convergent), so this counterexample is not very interesting.
Hence, my question is the following:
Is it possible to construct a counterexample to the relation $m \frac{d\langle x\rangle}{dt} = \langle p \rangle$, where both $\langle x\rangle$ and $\langle p\rangle$ are well-defined?
| I would say the usual proof of this statement comes from Ehrenfest's theorem:
$\frac{d\langle Q\rangle}{dt} = -\frac{i}{\hbar} \langle[Q,H]\rangle$
Then with the usual single-particle Hamiltonian one has $H=\frac{p^2}{2m} +V(x)$ and so $\frac{d\langle x\rangle}{dt}=-\frac{i}{2m\hbar}\langle[x,p^2]\rangle$. Since $[x,p^2]=2i\hbar p$, this evaluates via standard commutation rules to your identity $m\frac{d\langle x\rangle}{dt}=\langle p\rangle$.
At no point here did we invoke integration by parts. The proof of Ehrenfest's theorem, which does involves inner products (and so carries the risk of IBP), simply requires that all the inner products exist and this is equivalent to the statement that our wavefunctions are normalisable.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/391856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Is this interpretation of quantum fluctuation in eternal inflation in Wikipedia correct? Wikipedia's article on inflation says
Although new inflation is classically rolling down the potential, quantum fluctuations can sometimes lift it to previous levels. These regions in which the inflaton fluctuates upwards expand much faster than regions in which the inflaton has a lower potential energy, and tend to dominate in terms of physical volume.
But from Sean Carroll’s article,
Eternal inflation is a different story. The idea there is that the inflaton field slowly rolls down its potential during inflation, except that quantum fluctuations will occasionally poke the field to go higher rather than lower. When that happens, space expands faster and inflation continues forever. This story relies on the idea that the “fluctuations” are actual events happening in real time, even in the absence of measurement and decoherence. And we’re saying that none of that is true. The field is essentially in a pure state, and simply rolls down its potential
So, I asked a friend of mine who knows QFT and he said
I never liked the concept of quantum fluctuations, especially when it comes to cosmology. In QFT, the fields always roll down to the exact minimum of the potential. It doesn't fluctuate in any meaningful sense. But the potential is the quantum mechanical one, not the classical one. The quote in the wikipedia may be a vague way to say that the classical potential acquires quantum corrections. Whether that picture is useful or not is beyond me.”
Is the interpretation of quantum fluctuation in Wikipedia correct from the point of view of QFT?
Or does the inflaton field just simply roll down its potential without any effects from quantum fluctuation?
| Classically, a particle rolls down a potential viz. $\frac{d}{dt}\mathbf{p}=-\boldsymbol{\nabla}V$. The equivalent for a classical field is $\frac{d}{dt}\pi=-\frac{\delta V}{\delta\phi}$. The first of these equations is quantum-corrected to $\frac{d}{dt}\langle\mathbf{p}\rangle=-\langle\boldsymbol{\nabla}V\rangle$, which allows for quantum tunnelling. Similarly, the second equation becomes $\frac{d}{dt}\langle\pi\rangle=-\langle\frac{\delta V}{\delta\phi}\rangle$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/391946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Calculating the matter-energy density of the Universe Reading about the value $\Omega$ which is the ratio between the mass-energy density of the universe and the required mass - energy density of the universe to ensure linear expansion. I understand that if the value of $\Omega$ < 1 then space-time will warp into a saddle shape and vice-versa.
My question is, how is the mass-energy density of the universe measured / predicted / calculated if we are restricted to the observable universe and its CMB?
| The CMB has a lot of information in it; that's the reason missions such as COBE, WMAP or Planck are so important. Simply put, the CMB is the result of the interaction among the various components in the universe before decoupling. This implies that a universe slightly different from ours would generate a wholly different power spectrum of fluctuations.
The image below shows the behavior of the various peaks in the CMB for different cosmological models
The idea is then to select the best cosmological model based on the best observations available for the fit.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/392052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Information from four point correlation functions in Ising model For a one-dimensional classical Ising model with the Hamiltonian $$H=-J \sum_{i}\sigma_{i} \, \sigma_{i+1}$$ where $\sigma=\left\{+1,-1\right\}$ one can calculate two point correlation for the spins $$\left<\sigma_{i} \, \sigma_{j}\right>.$$ I understand the meaning for this is that how two spins at different positions are correlated or in other words how fluctuations at the ${i}^{\text{th}}$ position affects the the spin at the position $j$.
Now, what is the physical meaning of four point correlation function $$\left<\sigma_{l} \, \sigma_{m} \, \sigma_{n} \, \sigma_{p}\right>.$$ What extra piece of information does it give? Can some explain intuitively?
| Let me answer a more general question (which might not be what you are after...): what information is encoded in general correlation functions $\langle\sigma_A\rangle$, where $A$ is a finite set of vertices and $\sigma_A=\prod_{i\in A} \sigma_i$?
It turns out that one can prove (it's actually easy) that, for any local function $f$ (that is, any function depending only on finitely many spins), one can find (explicit) coefficients $(\hat{f}_A)_{A\subset\mathrm{supp}(f)}$ such that
$$
f(\sigma) = \sum_{A\subset\mathrm{supp}(f)} \hat{f}_A \sigma_A
$$
where $\mathrm{supp}(f)$ is the (finite) set of spins on which $f$ depends.
This means that knowing the correlation functions $\langle\sigma_A\rangle$ for every finite set $A$ allows you to compute the expectation of any local function $f$:
$$
\langle f\rangle = \sum_{A\subset\mathrm{supp}(f)} \hat{f}_A \langle\sigma_A\rangle .
$$
In this sense, the correlation functions $\langle\sigma_A\rangle$ contain all the information on the Gibbs measure.
(Let me emphasize that everything I said is completely general and not restricted to the one-dimensional model.)
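For the 1D open chain specifically, these correlators can be checked by brute-force enumeration (a numpy sketch with an illustrative $\beta J$): the exact results are $\langle\sigma_i\sigma_j\rangle=\tanh(\beta J)^{|i-j|}$ and, for $l<m<n<p$, $\langle\sigma_l\sigma_m\sigma_n\sigma_p\rangle=\tanh(\beta J)^{(m-l)+(p-n)}$, so the four-point function probes how two separated pairs are jointly correlated:

```python
import numpy as np
from itertools import product

J, beta, N = 1.0, 0.7, 8     # coupling, inverse temperature, chain length

# Brute-force Gibbs weights over all 2^N configurations (open chain)
configs = np.array(list(product([-1, 1], repeat=N)))
E = -J * np.sum(configs[:, :-1] * configs[:, 1:], axis=1)
w = np.exp(-beta * E)
w /= w.sum()

def corr(i, j):
    """Two-point function <sigma_i sigma_j>."""
    return np.sum(w * configs[:, i] * configs[:, j])

t = np.tanh(beta * J)
print(corr(1, 4), t ** 3)    # exact open-chain result: t^{|i-j|}

# Four-point function for l<m<n<p factorizes as t^{(m-l)+(p-n)}
c4 = np.sum(w * configs[:, 0] * configs[:, 2] * configs[:, 4] * configs[:, 6])
print(c4, t ** 4)
```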
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/392177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Relativistic velocity addition from time dilation I'm trying to derive the relativistic velocity addition equation using the time dilation equations but I get a wrong result.
Assume the widely used scenario in which there are 3 clocks: O, A and B.
A is moving relative to O at velocity $v$.
B is moving relative to A at velocity $w$.
Find the velocity of B relative to O (in O's frame of reference).
The time dilation equations says:
$$t_A = t_O \sqrt{1-v^2/c^2}$$
$$t_B = t_A \sqrt{1-w^2/c^2}$$
$$t_B = t_O \sqrt{1-u^2/c^2}$$
From these 3 equations we want to derive $u$. Then we get:
$$t_O \sqrt{1-u^2/c^2} = t_O \sqrt{1-v^2/c^2} \sqrt{1-w^2/c^2}$$
$$1-u^2/c^2 = (1-v^2/c^2) (1-w^2/c^2)$$
$$1-u^2/c^2 = 1-v^2/c^2-w^2/c^2+v^2 w^2/c^4$$
$$u^2 = v^2 + w^2 - v^2 w^2/c^2$$
This is different from the equation
$$u = \frac{v + w}{1+v w/c^2}$$
Why doesn't the derivation of velocity using the time dilation equations alone work?
| The clocks on ships A and B are not just running slower compared to clock O. They are also offset from each other due to their different locations. If you look at the Lorentz transform for the time coordinate
$$t' = \gamma\left(t - \frac{xv}{c^2}\right),$$
you can see that the distance between the clocks is also a factor in how much the clocks differ in their readings. Since clocks A and B are moving at different speeds, they must be at different locations, so the difference in time is not just due to a different running speed, but a difference in starting time (when the clocks would read zero). This is related to the relativity of simultaneity.
In short, relativity means you can't treat space and time separately, as a change in motion through one affects your motion through the other.
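For completeness, the correct composition law comes from composing two Lorentz boosts (which mix $t$ and $x$) rather than multiplying time-dilation factors. A small sympy sketch:

```python
import sympy as sp

v, w, c = sp.symbols('v w c', positive=True)

def boost(u):
    # Lorentz boost acting on the column (t, x)
    g = 1 / sp.sqrt(1 - u**2 / c**2)
    return sp.Matrix([[g, -g * u / c**2],
                      [-g * u, g]])

B = boost(w) * boost(v)                        # boost by v, then by w
u_composed = sp.simplify(-B[1, 0] / B[0, 0])   # velocity of the composed boost
# algebraically equal to (v + w)/(1 + v*w/c**2)
```

The off-diagonal terms $-\gamma v x/c^2$ are exactly the relativity-of-simultaneity pieces that the time-dilation-only derivation drops.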
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/392515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why doesn't a new ball-point pen write as smoothly as one that has been written with for a little while? Why doesn't a new ball-point pen write as smoothly as one that has been written with for a little while? You will say that the friction is greater at first. Then why is that so?
| A ballpoint pen consists of an extremely hard ball (tool steel or tungsten carbide) seated in a soft metal shank tip (usually brass). This will form a low-friction bearing but to do so, some initial wear between them is required for the ball to "seat" properly in the shank. In the process of seating in, tiny asperities on the surface of the shank and the ball get embedded into the surface of the shank, which gradually reduces the amount of friction created as the ball rolls in the shank. Most ballpoint manufacturers strive to seat in the balls by writing with the pens in a machine briefly before packaging them. But if that process did not go to completion on the machine, then it will occur when you begin writing with the pen.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/392636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
What do quantum spin hamiltonians describe? I've learned all particles are either fermions or bosons, obeying their respective operator algebras, and then I've seen Hamiltonians describing models carrying one of these two types of particles. So far it made sense.
But then I started seeing Spin Hamiltonians describing, for example, a chain of spins or something like that... I learned how to do the math by example but didn't really understand what I was doing. Like, how to think about these objects, and what really are these objects? If all there is are either fermions or bosons, what are spins in these Hamiltonians? Also, what are spinless fermions and other variants like that? I'm looking to clarify some concepts in my mind... If you can help with that I'll be glad.
| Concerning the spinless fermions, it should be considered that Pauli's spin-statistics connection (fermions have half-odd-integer spins, bosons have integer spins) applies to Lorentz-invariant systems. So, it is possible to have fermions with $S=0$ in non-relativistic systems. Usually, this kind of fermion appears as an auxiliary particle in the treatment of many-body quantum systems.
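To make the notion of a spin Hamiltonian concrete, here is a pure-Python sketch (no external libraries) building the two-site Heisenberg coupling $H = \vec S_1\cdot\vec S_2$ from spin-$\tfrac12$ matrices via tensor products, and checking its known eigenvalues ($-3/4$ on the singlet, $+1/4$ on the triplet, in units with $\hbar = J = 1$):

```python
# spin-1/2 operators (units of hbar)
Sx = [[0, 0.5], [0.5, 0]]
Sy = [[0, -0.5j], [0.5j, 0]]
Sz = [[0.5, 0], [0, -0.5]]

def kron(A, B):
    # Kronecker (tensor) product of two square matrices given as nested lists
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# two-site Heisenberg Hamiltonian: H = sum_a S_a (x) S_a
terms = [kron(S, S) for S in (Sx, Sy, Sz)]
H = [[sum(T[i][j] for T in terms) for j in range(4)] for i in range(4)]

s = 2 ** -0.5
singlet = [0, s, -s, 0]   # (|up,down> - |down,up>)/sqrt(2)
triplet = [1, 0, 0, 0]    # |up,up>

assert all(abs(h + 0.75 * p) < 1e-12 for h, p in zip(matvec(H, singlet), singlet))
assert all(abs(h - 0.25 * p) < 1e-12 for h, p in zip(matvec(H, triplet), triplet))
```

The "spins" here are just two-level quantum systems; a spin-chain Hamiltonian is the same construction repeated over many sites, with identities on the sites not being coupled.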
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/393875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Confusion on quantum numbers So, I've known for a long time the famous quantum numbers $n, l, m, s$ and I thought these were all of the quantum numbers, and then when applying the Schrödinger equation to orbital angular momentum and getting the spherical harmonics, with their numbers $l$ and $m$, I thought, okay here they are that's all. But recently, I've been taught that angular momentum is not only composed of the orbital angular momentum, but also the intrinsic angular momentum, the spin, $\vec J = \vec L \otimes 1\!\!1 + 1\!\!1 \otimes \vec S$. And with this, I'm introduced too to the quantum numbers $j$ and another $m$, which I think can be specified by $m_j$, and also $s$ and $m_s$.
I'm confused by so many $m$'s. Are the quantum numbers that I initially wrote the only ones, and can the other ones be derived from them? Are all of $m, m_j, m_l, m_s$ different from each other, or is there one that encompasses them all? Is the $s$ that you get from $\vec S$ the same $s$ as my initial one? What's the physical meaning of all these quantum numbers? Are there any other quantum numbers that I haven't encountered yet?
| Brief answer.
for each electron we assign $n$, $l$, $s$
e.g. 2p electron $n=2$, $l=1$, $s={1 \over 2}$
In the presence of a magnetic or electric field we need to think about $m_l$ and $m_s$ the projections of $l$ and $s$ in the direction of the magnetic or electric field
e.g. 2p electron $n=2$, $l=1$, $m_l = +1,0,-1$, $s={1 \over 2}$, $m_s=+{1 \over 2}, -{1 \over 2}$ - there are several possible $m_l$ and $m_s$ values.
If we consider all the electrons of an atom then we need to combine the individual $l$ contributions of each electron to give $L$ and combine all the $s$ to give $S$... The total orbital ang. momentum and total spin ang. momentum combine to give the total ang. momentum $J$. - and there are, of course, $m_L$, $m_S$ and $m_J$ values... For an example see this calculation of the term states for carbon in the ground state
Note this is not as bad as it appears as each closed subshell contributes overall zero to $L$, $S$ and $J$. eg for carbon 1s$^2$, 2s$^2$, 2p$^2$ we only need to consider the two 2p electrons.
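As a small illustration of how these quantum numbers combine, here is a Python sketch enumerating the Pauli-allowed microstates of the carbon 2p$^2$ configuration mentioned above (each microstate is a pair of distinct single-electron $(m_l, m_s)$ states), and their total projections $M_L$ and $M_S$:

```python
from itertools import combinations

# single-electron 2p states: m_l in {-1, 0, +1}, m_s in {+1/2, -1/2}
single = [(ml, ms) for ml in (-1, 0, 1) for ms in (0.5, -0.5)]

# Pauli principle: the two electrons must occupy distinct states
micro = list(combinations(single, 2))
print(len(micro))   # 15 microstates for p^2

# total projections M_L = sum of m_l, M_S = sum of m_s
ML = [a[0] + b[0] for a, b in micro]
MS = [a[1] + b[1] for a, b in micro]
```

Sorting these 15 microstates by $(M_L, M_S)$ is exactly how one extracts the $^1D$, $^3P$ and $^1S$ terms of carbon: e.g. $M_L = 2$ forces both electrons into $m_l = 1$, hence $M_S = 0$.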
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/394244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does reversing time give parity reversed antimatter or just antimatter? Feynman's idea states that matter going backwards in time seems like antimatter.
But, since nature is $CPT$ symmetric, reversing time ($T$) is equivalent to $CP$ operation. So, reversing time gives parity reversed antimatter, not just antimatter.
What is happening here? Why does nobody mention this parity thing when talking about reversing time? What am I missing?
| The statement that antimatter is matter going back in time is usually associated with Feynman diagrams in QED, so we're talking about electrons, and electrons have parity +1, so:
$$ CPT = C1T = CT = 1 $$
So the parity part doesn't come into play, but it is required in general.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/394367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Very basic question on AdS/CFT I was going through the introductory material by Horatiu in Ads-CFT.
It says that $N+1$ D-branes are split into $N$ D-branes and a probe D-brane. The Wilson loop is located on the probe D-brane, which is at the Minkowski boundary of the AdS space.
The AdS space is given by $f^{-1/2}dx_{||}^2 + f^{1/2}(du^2 + d\Omega^2)$, where $f$ is the harmonic function $f = \frac{R^4}{u^4}$.
My question is: what is causing this AdS metric (what is the source of the AdS space)? Is it the $N$ D3-branes, or something else? If there were no source, the spacetime would be flat.
Is there an assumption that the probe D3-brane is not modifying the metric of the AdS space at all?
Appreciate any clarification on this.
| Here is an answer to the question of why the $AdS_5\times S^5$ metric is appearing. This is taken almost directly from the TASI lectures I cite at the end.
If you consider N coincident Dp-branes, the background solution has a metric and dilaton which we can write as
$$ds^2 = H^{-1/2}(r)\left[-f(r)dt^2 +\sum_{i=1}^p(dx^i)^2\right]+H^{1/2}(r)\left[f^{-1}(r) dr^2+r^2 d\Omega_{8-p}^2\right]$$
$$e^{\Phi}=H^{(3-p)/4}(r)$$
with the warp-factors
$$H(r)=1+\frac{L^{7-p}}{r^{7-p}}, \quad f(r)=1-\frac{r_0^{7-p}}{r^{7-p}}$$
If you take $p=3$, such that you are considering now a stack of D3-branes and additionally take the so-called extremal limit ($r_0\rightarrow 0$), then this metric becomes identical to the one you are asking about. This isn't quite $AdS_5\times S^5$ yet. All you need to do now is to take the limit $\frac{r}{L}\rightarrow 0$ and you will be left with none other than
$$ds^2=\frac{L^2}{z^2}(-dt^2+d\vec{x}^2+dz^2)+L^2 d\Omega_5^2$$
which is the usual metric for $AdS_5\times S^5$.
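This last coordinate change can be checked with a quick sympy sketch: in the near-horizon limit $H \to L^4/r^4$, substituting $r = L^2/z$ turns both the brane-direction warp factor and the $dr^2$ term into the $L^2/z^2$ form above, and leaves the sphere with constant radius $L$:

```python
import sympy as sp

r, L, z = sp.symbols('r L z', positive=True)

H = L**4 / r**4                      # near-horizon limit of H = 1 + L^4/r^4
g_par = H**sp.Rational(-1, 2)        # warp factor on -dt^2 + dx^2
g_rr = H**sp.Rational(1, 2)          # warp factor on dr^2
g_sphere = g_rr * r**2               # warp factor on dOmega_5^2

# change coordinates r = L^2/z, so dr^2 = (L^2/z^2)^2 dz^2
sub = {r: L**2 / z}
assert sp.simplify(g_par.subs(sub) - L**2 / z**2) == 0
assert sp.simplify(g_rr.subs(sub) * (L**2 / z**2)**2 - L**2 / z**2) == 0
assert sp.simplify(g_sphere.subs(sub) - L**2) == 0
```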
References: "TASI Lectures: Introduction to the AdS/CFT Correspondence", https://arxiv.org/abs/hep-th/0009139
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/394643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
What is the precise definition of a 4-vector? In Minkowski space, I know that there are some vectors such as the ordinary velocity that are not proper 4-vectors.
But what is the exact definition of a 4-vector? For any fixed numbers, say 1,2,3,4, does $(1,2,3,4)$ become a 4-vector in Minkowski space with the invariant inner product 28? I am confused.
| In Euclidean space, we can define a vector as an object which transforms in a specific way under rotation.
To define a vector in special relativity, we use the Lorentz transformation instead of a rotation. (Actually, a Lorentz transformation is a kind of rotation in 4-dimensional spacetime.)
Suppose that the events of stationary observer $O$ are given by $(t,x,y,z)$. Consider another frame $O'$ which moves along the x-axis with velocity $v$ and whose events are given by $(t',x',y',z')$. The Lorentz transformation between the two observers is:
$$t'=\gamma(t-vx/c^2),\ x'=\gamma(x-vt),\ y'=y,\ z'=z $$
From this, we can conclude that $(t,x,y,z)$ is a 4-vector.
Here is another example : Electromagnetic four potential is given by
$$ A_\mu=(\phi/c,A_x,A_y,A_z) $$
If this is a 4-vector, it must obey the Lorentz transformation rule, so that
$$ \phi'/c=\gamma(\phi/c-vA_x/c^2),\ A_x'=\gamma(A_x-v\phi/c),\ A_y'=A_y,\ A_z'=A_z $$
This conclusion can be derived by classical Electrodynamics.
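To address the example in the question directly: the fixed tuple $(1,2,3,4)$ is a 4-vector only if its components are defined to transform this way under boosts. What is then invariant is not the tuple itself but its Minkowski inner product, which is 28 here in the mostly-plus signature $-t^2+x^2+y^2+z^2$ (with $c=1$). A quick numerical check:

```python
import math

v = 0.6                                  # boost speed (c = 1)
g = 1 / math.sqrt(1 - v * v)

def boost(event):
    t, x, y, z = event
    return (g * (t - v * x), g * (x - v * t), y, z)

def interval(event):
    # mostly-plus signature, matching the question's value of 28
    t, x, y, z = event
    return -t * t + x * x + y * y + z * z

e = (1.0, 2.0, 3.0, 4.0)
print(interval(e), interval(boost(e)))   # both equal 28 (up to rounding)
```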
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/394985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Minimum Thickness of Insulation of a Pipe We know that there is a critical radius of insulation at which the heat transfer is maximum. (Figure: heat transfer vs. insulation thickness.)
From the graph, we can see that as long as the radius of the insulation is between $r_1$ (the radius of the pipe) and $r^*$, the insulation increases the heat transfer; thus, for the insulation to be effective, the radius must be greater than $r^*$. I cannot figure out a way to calculate $r^*$. I have tried looking for the answer online, but it seems that every website focuses only on the critical value. They mention the minimum insulation thickness ($r^*$) but do not include any calculations. Is there a way to represent $r^*$ mathematically?
| While the full equation for heat transfer through insulation as $r_2$ changes is:
$$q_r = {T- T_\infty\over{{ln\big({r_2\over r_1}\big)}\over 2\pi Lk}+{1\over {h(2\pi r_2L)}}}$$
(ref). Differentiating $q_r$ with respect to $r_2$ and setting $dq_r/dr_2 = 0$ gives the following equation, also found here (under 'Insulation of cylinders'); the critical insulation radius is:
$$r_{cr, cylinder} = {k\over h}$$
This reference calculates a reasonable maximum for the critical thickness: with $k = 0.05\,\mathrm{W/mK}$ and $h = 5\,\mathrm{W/m^2K}$, it is $10\,\mathrm{mm}$. The effects of radiation and forced convection both decrease the critical thickness value even further.
The critical thickness value is a good guideline for minimum insulation outer radius.
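To answer the question directly: $r^*$ is the nontrivial root of $q_r(r^*) = q_r(r_1)$ (the bare-pipe heat loss), i.e. of $\ln(r/r_1) + k/(hr) - k/(hr_1) = 0$. This has no closed form, but it is easy to find numerically. A sketch using the illustrative values above ($k = 0.05$, $h = 5$) and a hypothetical pipe radius $r_1 = 5\,\mathrm{mm}$:

```python
import math

# illustrative values from the reference above, plus a hypothetical pipe radius
k = 0.05      # insulation conductivity, W/mK
h = 5.0       # convection coefficient, W/m^2K
r1 = 0.005    # pipe outer radius, m (chosen r1 < k/h so that r* exists)
r_cr = k / h  # critical radius = 10 mm

def f(r):
    # q(r) = q(bare pipe)  <=>  ln(r/r1) + k/(h r) - k/(h r1) = 0
    # (r = r1 is the trivial root; r* is the larger one)
    return math.log(r / r1) + k / (h * r) - k / (h * r1)

# bisection for r*, bracketed just above the critical radius
lo, hi = r_cr * 1.001, 0.2
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
r_star = 0.5 * (lo + hi)
print(round(r_star * 1000, 1), "mm")   # roughly 25 mm for these values
```

Note that $r^*$ only exists when $r_1 < r_{cr} = k/h$; for a pipe already thicker than the critical radius, any insulation reduces the heat transfer.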
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/395667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Geometric optics question (from 2001 GRE)
In the diagram above, two lenses with focal lengths $f_1 = 20$ cm and $f_2 = 10$ cm are placed $40$ cm and $70$ cm from an object $O$, respectively. Where is the final image formed by the two-lens system, in relation to the second lens?
(a) 5 cm to its right
(b) 13.3 cm to its right
(c) infinitely far to its right
(d) 13.3 cm to its left
(e) 100 cm to its left
I can use the thin lens equation $$\frac{1}{s} + \frac{1}{s'} = \frac{1}{f}$$ (where $s$ is the object distance, $s'$ is the image distance, and $f$ is the focal length, up to sign, which is what makes this business confusing) to determine that in the absence of a second lens there would be an image $40$ cm to the right of the first lens.
According to this online solution we must take the image of the first lens to be the object of the second lens and use the above equation again to get choice (a).
How is that allowed? Doesn't the second lens totally interfere with/ obstruct the formation of the first image, making this a complicated problem?
| It works because the thin lens equation works for virtual images (i.e. $s'<0$) and virtual objects (i.e. $s<0$). To see why that is true, lets derive the thin lens equation with the following two assumptions:
* A light ray passing through the center of a lens does not bend.
* A light ray entering normal to the lens bends and passes through the focal point.
The rays originate from a point on the object, pass through the lens, then converge at a point that generates an image, as shown for a convex lens:
The similar triangles with angle $\alpha$ have legs that are in proportion:
$$ \frac{y_o + y_i}{s + s'} = \frac {y_o}{s} $$
And the similar triangles with angle $\beta$ have legs that are in proportion:
$$ \frac{y_o + y_i}{s'} = \frac{y_o}{f} $$
Dividing these two equations and rearranging gives the thin lens equation:
$$ { 1 \over s } + {1 \over s'} = {1 \over f} $$
Note that $s,s',y_o,f$ are all positive while $y_i$ is negative in this example. What would happen if we made $s$ negative? That would amount to placing $s$ on the same side as $s'$:
The object at $s$ is called virtual because it is located where the light rays would converge to if the lens wasn't there. The above rules for (1) and (2) rays still apply however, so we can generate a real image. And so long as you treat $s$ as negative (i.e. let $s - s' = -(s+s')$ ) you will get the thin lens equation by analyzing the two triangles.
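Numerically, the virtual-object bookkeeping for this specific problem looks like this (a small Python sketch reproducing choice (a)):

```python
def image_distance(s, f):
    # thin-lens equation 1/s + 1/s' = 1/f, solved for the image distance s'
    return 1.0 / (1.0 / f - 1.0 / s)

s1_img = image_distance(40.0, 20.0)   # lens 1: real image 40 cm to its right
separation = 70.0 - 40.0              # the lenses are 30 cm apart
s2_obj = separation - s1_img          # = -10 cm: a virtual object for lens 2
s2_img = image_distance(s2_obj, 10.0)
print(s2_img)                          # 5 cm to the right of lens 2: choice (a)
```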
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/395805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does Huygens' principle apply in three dimensions? Does Huygens' principle apply in three dimensions?
If a surface wave (for simplicity an ocean wave) is propagating along the x axis, we know that each point of this wave acts as a point source for wavelets along the y axis, but what about the z axis?
If this diagram were 3D, would we see a spherical wavefront expanding from each point?
http://physics.ucdavis.edu/Classes/Physics9B_Animations/ReflRefr.html
| Yes, absolutely, in general. The Huygens' principle is an intuitive picture of the solution of the Helmholtz equation through superposition of Green's functions. The basic solution is $E(\mathbf{r})=\frac{\exp(i\,k\,|\mathbf{r}|)}{|\mathbf{r}|}$ and you're simply building solutions out of sums of this one ("sums" in the broad sense of "linear combination" that includes integrals).
The building of solutions to quantum field theory problems using basic solutions called "propagators" is also often referred to as "Huygens' principle". It's the same basic idea.
The exact Green's function depends on the dimensionality of your problem and also the boundary conditions. However, Huygens' principle is the approximation that for most boundary conditions an approximate solution can be built by assuming sources of waves of the form $\frac{\exp(i\,k\,|\mathbf{r}-\mathbf{r}_0|)}{|\mathbf{r}-\mathbf{r}_0|}$ with centers $\mathbf{r}_0$ on the "primary" wavefront. Moreover, the Green's function changes for two-dimensional problems. If we have a two-dimensional problem wherein there is only variation in $x$ and $y$, the Green's function is no longer $E(\mathbf{r})=\frac{\exp(i\,k\,|\mathbf{r}|)}{|\mathbf{r}|}$ but rather one expressed through the Hankel function:
$$E(\mathbf{r})\propto H_0^\pm(k\,|\mathbf{r}|)\sim \sqrt{\frac{2}{\pi\,|\mathbf{r}|}}\,\exp(\pm i\,k\,|\mathbf{r}|)\;\text{as}\;k\,|\mathbf{r}|\to\infty$$
the asymptotic expression becoming pretty accurate for $k\,|\mathbf{r}|$ greater than about 10.
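As a quick numerical sanity check (pure-Python sketch): the 3D building block $e^{ikr}/r$ indeed satisfies the Helmholtz equation $(\nabla^2 + k^2)E = 0$ away from the source, as a finite-difference Laplacian confirms at an arbitrarily chosen point:

```python
import cmath
import math

k = 2.0

def G(x, y, z):
    # outgoing spherical wavelet exp(i k r)/r
    r = math.sqrt(x * x + y * y + z * z)
    return cmath.exp(1j * k * r) / r

def laplacian(f, x, y, z, h=1e-3):
    # second-order central differences in each direction
    return (f(x + h, y, z) + f(x - h, y, z)
            + f(x, y + h, z) + f(x, y - h, z)
            + f(x, y, z + h) + f(x, y, z - h)
            - 6 * f(x, y, z)) / h**2

x0, y0, z0 = 0.7, -0.4, 1.1            # any point away from the origin
residual = laplacian(G, x0, y0, z0) + k**2 * G(x0, y0, z0)
print(abs(residual))                    # ~0 up to discretization error
```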
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/395931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Solar Sail and GR In Newtonian gravity a theoretical reflective sail could be made such that gravity pulling it down toward the star is compensated by 'light pressure' coming from the same star.
In both cases the force drop like $\frac{1}{r^2}$
therefore the forces will compensate at any distance.
If we introduce GR we have both a correction to the laws of gravity, and a redshift that changes the outward force.
So my question is: in the limit $ \frac{r}{r_s}\rightarrow 0 $, would the sail be attracted or repelled by the star?
| First, if you have an extremely large sail near a very powerful star, the 'photon pressure' on the sail can be greater than the pull due to gravity. Even though both forces decrease as $r^{-2}$ this doesn't necessarily mean that the two forces will always cancel, one of them can be always greater than the other (in the case of a 'light sail' this depends on the size of the sail).
In the $r \to \infty$ limit, far away from the star, you recover normal Newtonian gravity and the redshift becomes negligible. It is interesting to include the effects of redshift and GR on the efficiency of the solar sail - as you point out - but these effects only become very relevant when you would be close to the star. As you move further away from the star, these relativistic corrections become negligible.
That being said, let's consider the case including the relativistic effects: for a particle moving in a Schwarzschild spacetime
$$
c^{2}d\tau^{2} = c^2 \left( 1-\frac{\text{r}_s}{r} \right) dt^{2} - \left( 1-\frac{\text{r}_s}{r} \right)^{-1} dr^{2} - r^{2} d\theta^{2} - r^{2} \sin^{2} \theta d\phi^{2}
$$
The equation of motion for a free massive particle moving along a fixed radial trajectory (ignoring tangential motion along the $\theta$ and $\phi$ directions) in this Schwarzschild spacetime is then given by
$$
0 = \frac{d^2 r}{d\tau^2} -\frac{\text{r}_s}{2 r^2-2 r \text{r}_s} \left( \frac{dr}{d\tau} \right)^2 + \frac{c^2 \text{r}_s (r-\text{r}_s)}{2 r^3} \left( \frac{dt}{d\tau} \right)^2
$$
Now suppose we have an ideal solar sail (with 100% reflection) whose surface area covers $A$ steradians at our star's surface (located at $\text{r}_*$). Let us neglect the internal processes in the star and say that we know its surface luminosity $L_*$ (the amount of energy it emits at its surface). When the solar sail is located at $\text{r}_*$ (close to the surface of the star) it will receive a momentum $p = \frac{L_* A}{4 \pi r_*^2 c}$.
As we get further away from the star the energy the solar sail will receive decreases due to relativistic redshift, giving us that at a radius $r$ the effective luminosity is given by
$$
L(r) = \frac{L_* }{\sqrt{1-\frac{\text{r}_s}{r} }}
$$
The momentum our solar sail then picks up at this distance is
$$
p(r) = \frac{L(r) A}{4 \pi r^2 c} = \frac{L_* A}{4\pi r^2 c \sqrt{1-\frac{\text{r}_s}{r} }}
$$
We can now include the momentum that our solar sail receives as a function of $r$ in the geodesic equation that we wrote down earlier, where we now include the force acting on our solar sail on the left hand side
$$ f^i = m \left( \frac{d^2 x^i}{dt^2} + \Gamma^i_{jk} \frac{dx^j}{dt}\frac{dx^k}{dt} \right)$$
Where now
$$ f^i = \frac{dp}{d\tau} = \frac{dr}{d\tau} \frac{d}{dr} p(r) = \frac{A L_* (3 \text{r}_s-4 r)}{8 \pi c r^3 (r-\text{r}_s) \sqrt{1-\frac{\text{r}_s}{r}}} \frac{dr}{d\tau} $$
The complete equation of motion for our solar sail now becomes
$$
\frac{A L_* (3 \text{r}_s-4 r)}{8 \pi c r^3 (r-\text{r}_s) \sqrt{1-\frac{\text{r}_s}{r}}} \frac{dr}{d\tau} = \frac{d^2 r}{d\tau^2} -\frac{\text{r}_s}{2 r^2-2 r \text{r}_s} \left( \frac{dr}{d\tau} \right)^2 + \frac{c^2 \text{r}_s (r-\text{r}_s)}{2 r^3} \left( \frac{dt}{d\tau} \right)^2
$$
Had we not included the relativistic effects, the differential equation would instead have been of the form
$$ \frac{A L_*}{2 c \pi r^3} = \frac{d^2 r}{d\tau^2}$$
showing that the increase in acceleration would decrease as $r^{-3}$, which is indeed what the above geodesic equation approaches in the limit $r \to \infty$.
The redshift effects cause our solar sail to 'speed up' much faster than $r^{-3}$ when we get closer to the star (as the sail now gets an additional boost from what it sees as blueshifted photons closer to the star).
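As a sanity check on the algebra above, here is a sympy sketch verifying that differentiating $p(r) = \frac{AL_*}{4\pi r^2 c\sqrt{1-\text{r}_s/r}}$ does reproduce the quoted $(3\text{r}_s - 4r)$ expression (checked numerically at sample points outside the horizon):

```python
import sympy as sp

r, rs, A, Lstar, c = sp.symbols('r r_s A L_star c', positive=True)

p = A * Lstar / (4 * sp.pi * r**2 * c * sp.sqrt(1 - rs / r))
claimed = (A * Lstar * (3 * rs - 4 * r)
           / (8 * sp.pi * c * r**3 * (r - rs) * sp.sqrt(1 - rs / r)))

dpdr = sp.diff(p, r)
# compare numerically at sample points with r > r_s
for rv in (3, 10):
    vals = {r: rv, rs: 1, A: 1, Lstar: 1, c: 1}
    assert abs(float(dpdr.subs(vals)) - float(claimed.subs(vals))) < 1e-12
```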
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/396296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Boson or Fermion How do you deduce that an atom is a fermion or a boson? Do you determine it from the number of neutrons because "electrons and protons cancel out each other in a neutral atom"? What does this have to do with spin? Somebody please help.I am really confused here.
| It has to do with the overall spin. Bosons have integer spin $(0, 1, 2, \dots)$ and fermions have half-integer spin $(n+\tfrac{1}{2})$.
They can be either elementary or composite. Fundamental fermions that we discovered so far are the quarks and the leptons of the Standard Model. Fundamental bosons that we discovered so far are the gauge bosons (gluons, photon, $W^{\pm}$, $Z^{0}$) and the Higgs boson.
Composite particles like baryons are fermions because they are made of three quarks. Mesons are bosons because they are made of two quarks. Protons and neutrons are baryons and therefore are fermions. A nucleus composed of an odd number of nucleons is a fermion, and if it is composed of an even number of nucleons it is a boson. For example, Helium-3 is a fermion and Helium-4 is a boson.
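The counting rule for composites can be stated compactly: a bound state of an odd number of fermionic constituents is a fermion, and of an even number a boson (any bosonic constituents don't change this, since integer spins can't flip the total between integer and half-integer). A trivial Python sketch:

```python
def composite_statistics(n_fermion_constituents):
    # an odd number of half-integer spins adds up to a half-integer total
    return "fermion" if n_fermion_constituents % 2 else "boson"

assert composite_statistics(3) == "fermion"   # baryon: 3 quarks
assert composite_statistics(2) == "boson"     # meson: quark + antiquark
assert composite_statistics(3) == "fermion"   # Helium-3 nucleus: 3 nucleons
assert composite_statistics(4) == "boson"     # Helium-4 nucleus: 4 nucleons
```

For a neutral atom one counts protons, neutrons, and electrons together, all of which are fermions.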
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/396603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Is there an "invariant" quantity for the classical Lagrangian? $$
L = \sum_{i=1}^{N} \frac{1}{2} m_i \left| \dot{\vec{x}}_i \right|^2 - \sum_{i<j} V\left( \vec{x}_i - \vec{x}_j \right)
$$
This is just a typical classical Lagrangian for $N$ particles. Since the Lagrangian does not explicitly depend on time, the energy must be conserved. Also, the linear and angular momentum seem to be conserved too.
However, if there is a change in the coordinate by the Galilean transformation $\overrightarrow{x}_i(t) \to \overrightarrow{x}_i(t)
+\overrightarrow{v}t$, then the aforementioned quantities seem clearly "variant". So, my question is whether there exists a quantity that is invariant under this Galilean transformation. Could anyone please present one? Or, if there is no such quantity, could anyone please explain why?
| In general, there is no reason to expect that there exist conserved quantities for a symmetry which is not a symmetry of the action, but merely of the equations of motion.
The case of the non-relativistic Lagrangian and Galilean transformations is a special case. As Qmechanic works out in this answer, the Galilean transformations are quasi-symmetries of the Lagrangian, i.e. only change it by a total time derivative. In this case, Noether's theorem still applies and yields a conserved quantity (for the free Lagrangian)
$$ Q = m(\dot{x}t - x),$$
which is Galilean invariant. Note that Qmechanic's third example shows that a symmetry of the equation of motion does not always imply a quasi-symmetry of the Lagrangian, and therefore there is no conserved quantity associated to it in the general case.
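A quick sympy sketch (free particle, with a hypothetical trajectory $x = x_0 + ut$) confirming that $Q = m(\dot x t - x)$ is both conserved along the motion and unchanged under the boost $x \to x + vt$:

```python
import sympy as sp

t, m, v, u, x0 = sp.symbols('t m v u x_0')

x = x0 + u * t                       # free-particle solution
Q = m * (sp.diff(x, t) * t - x)
assert sp.simplify(sp.diff(Q, t)) == 0        # Q is conserved

x_boosted = x + v * t                # Galilean boost of the trajectory
Q_boosted = m * (sp.diff(x_boosted, t) * t - x_boosted)
assert sp.simplify(Q_boosted - Q) == 0        # Q is boost-invariant
```

On this trajectory $Q = -mx_0$, i.e. (up to sign and a factor of $m$) the initial position, which is exactly the conserved quantity Noether's theorem associates with Galilean boosts.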
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/396881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Euler-Lagrange Equation Proving Maxwell Equation When quantizing the EM Field, we get the Lagrangian density,
$$L=\frac{1}{2}\left(\epsilon \vert E\vert ^2 - \frac{1}{\mu}\vert B\vert^2\right) = \frac{\epsilon}{2}\vert\nabla\phi + \dot{\textbf{A}}\vert^2 - \frac{1}{2\mu}\vert\nabla\times\textbf{A}\vert^2$$
My professor said that the first Maxwell equation, $\nabla \cdot E = 0$, is proved by the Euler-Lagrange equation for the above $L$ w.r.t. $\phi$. I.e.
$$\frac{\partial}{\partial t}\left(\frac{\partial L}{\partial \dot{\phi}}\right) + \sum\limits_{i=1}^{3}\frac{\partial}{\partial x_i}\left(\frac{\partial L}{\partial(\partial\phi / \partial x_i)}\right) - \frac{\partial L}{\partial \phi} = 0 \implies \nabla\cdot E = \nabla^2\phi = 0$$
I don't get exactly that result. I assume the first and last term are 0, since no phi or phi-dot appears in $L$. Using $\phi_x$ as the derivative, I get for the middle term (i=1),
$$\begin{align}
\frac{\partial L}{\partial(\phi_x)} &= 2(\nabla\phi + \dot{\textbf{A}})\cdot\left[\frac{\partial}{\partial\phi_x} (\nabla\phi + \dot{\textbf{A}})\right] \\
&= 2\left(\langle\phi_x + \dot{A}_x,\phi_y + \dot{A}_y,\phi_z + \dot{A}_z\rangle\right)\cdot\left[\frac{\partial}{\partial(\phi_x)}\langle\phi_x + \dot{A}_x, \phi_y + \dot{A}_y, \phi_z + \dot{A}_z\rangle\right] \\
&= 2\left(\langle\phi_x + \dot{A}_x,\phi_y + \dot{A}_y,\phi_z + \dot{A}_z\rangle\right)\cdot\langle 1,0,0\rangle\\
&= 2(\phi_x + \dot{A}_x)
\end{align}
$$
Therefore,
$$\frac{\partial}{\partial x}\frac{\partial L}{\partial(\phi_x)} = 2(\phi_{xx} + \partial_x\dot{A}_x)$$
And then summing i=1 to 3 gives
$$2(\nabla^2\phi + \nabla\cdot\dot{\textbf{A}}) = 0$$
So in order to prove the Maxwell equation, I need to show that $\nabla\cdot\dot{\textbf{A}} = 0$. How do I proceed to do that?
| HINT :
Note that
\begin{equation}
\sum\limits_{i=1}^{3}\frac{\partial}{\partial x_i}\left[\frac{\partial L}{\partial(\partial\phi / \partial x_i)}\right]=\mathrm{div}\left[\frac{\partial L}{\partial(\mathrm{grad}\phi)}\right]=\boldsymbol{\nabla}\boldsymbol{\cdot}\left[\frac{\partial L}{\partial(\boldsymbol{\nabla}\phi)}\right]
\tag{01}
\end{equation}
and
\begin{equation}
\frac{\partial L}{\partial(\boldsymbol{\nabla}\phi)}=\text{??? vector}
\tag{02}
\end{equation}
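Spelling out the hint with sympy: treating $\partial L/\partial(\boldsymbol{\nabla}\phi)$ componentwise, the Euler-Lagrange equation for $\phi$ reduces to $\boldsymbol{\nabla}\boldsymbol{\cdot}\mathbf{E} = 0$. Only the electric term of $L$ depends on $\phi$, so the magnetic term is dropped in this sketch:

```python
import sympy as sp

x, y, z, t, eps = sp.symbols('x y z t epsilon')
coords = (x, y, z)

phi = sp.Function('phi')(x, y, z, t)
A = [sp.Function('A_' + c)(x, y, z, t) for c in ('x', 'y', 'z')]

# E = -grad(phi) - dA/dt; the electric part of the Lagrangian density
E = [-(sp.diff(phi, c) + sp.diff(A[i], t)) for i, c in enumerate(coords)]
Ldens = sp.Rational(1, 2) * eps * sum(Ei**2 for Ei in E)

# Euler-Lagrange for phi: sum_i d/dx_i [ dL/d(d phi/dx_i) ]
# (the phi and phi-dot terms vanish, as noted in the question)
EL = sum(sp.diff(sp.diff(Ldens, sp.diff(phi, c)), c) for c in coords)

divE = sum(sp.diff(Ei, c) for Ei, c in zip(E, coords))
assert sp.expand(EL + eps * divE) == 0   # EL equation <=> div E = 0
```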
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/397096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is the energy of quantum harmonic oscillator independent of its amplitude? The energy of a harmonic oscillator with amplitude $A$, frequency $\omega$, and mass $m$ is
$$E=\frac 12 m \omega^2A^2 \, .$$
It is intuitive that the energy depends on the amplitude: a larger amplitude means the oscillator has more energy, and similarly a higher angular frequency means more energy.
Now let's consider a quantum harmonic oscillator (QHO).
The energy is
$$E=\left( n+ \frac 12 \right ) h\nu \, .$$
No amplitude term is there! This is odd because, even in the microscopic domain, we can all agree that, in general, for any mass oscillating under some force, more energy means the oscillator moves farther from its mean position and therefore has a larger amplitude.
The energy relation for the QHO can't be wrong, but the above conception of the energy of an oscillator doesn't seem to be wrong either.
| You can calculate the variance of the position coordinate, $\sigma_x^2$, for a general eigenstate of the energy $\psi_n$ to be
$$\sigma_x^2=\frac{\hbar}{m\omega}\left(n+\frac{1}{2}\right) \, .$$
We can replace the $n$ dependence with energy dependence using the relation
$$E_n = \hbar\omega\left( n+\frac{1}{2} \right)$$
and we get
$$\sigma_x^2=\frac{\hbar}{m\omega}\frac{E_n}{\hbar\omega} \, .$$
Rearranging we get
$$E_n=m\omega^2\sigma_x^2 \, .$$
Remembering that for the classical case $$\sigma_x^2=\frac{1}{2}A^2$$ we retrieve the original relation in the quantum case as well.
Check the derivation at this link
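This can also be checked explicitly with sympy for the first two eigenstates, using the standard normalized wavefunctions $\psi_0$ and $\psi_1$ written out by hand:

```python
import sympy as sp

x, m, w, hbar = sp.symbols('x m omega hbar', positive=True)
a = m * w / hbar                                  # inverse length^2 scale

psi0 = (a / sp.pi) ** sp.Rational(1, 4) * sp.exp(-a * x**2 / 2)
psi1 = sp.sqrt(2 * a) * x * psi0                  # normalized n = 1 state

for n, psi in [(0, psi0), (1, psi1)]:
    norm = sp.integrate(psi**2, (x, -sp.oo, sp.oo))
    assert sp.simplify(norm - 1) == 0
    var = sp.integrate(x**2 * psi**2, (x, -sp.oo, sp.oo))
    # sigma_x^2 = (n + 1/2) hbar/(m omega)
    assert sp.simplify(var - (n + sp.Rational(1, 2)) * hbar / (m * w)) == 0
    # E_n = m omega^2 sigma_x^2 = (n + 1/2) hbar omega
    assert sp.simplify(m * w**2 * var - (n + sp.Rational(1, 2)) * hbar * w) == 0
```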
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/397474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Bogoliubov Transformation with Complex Hamiltonian Consider the following Hamiltonian:
$$H=\sum_k \begin{pmatrix}a_k^\dagger & b_k \end{pmatrix}
\begin{pmatrix}\omega_0 & \Omega f_k \\ \Omega f_k^* & \pm \omega_0\end{pmatrix} \begin{pmatrix}a_k \\\ b_k^\dagger\end{pmatrix}\tag{1}$$
for bosonic operators ($+$) or fermionic operators ($-$). The standard way to do Bogoliubov transformations is to use the transformations:
$$M_{\text{boson}}=\begin{pmatrix} \cosh(\theta) & \sinh(\theta)\\ \sinh(\theta)&\cosh(\theta)\end{pmatrix},\quad M_{\text{fermion}}=\begin{pmatrix} \cos(\theta) & \sin(\theta)\\ -\sin(\theta)&\cos(\theta)\end{pmatrix}$$
However, in this case these won't work as they will give complex values of $\theta$, and to ensure that our (anti-)commutators remain intact we need $\theta$ to be real.
Thus my question is: How do we generalize the Bogoliubov to solve problems of the form of (1)?
This question is based of this one: Bogoliubov transformation with a slight twist
| There are two methods to tackling this problem:
* As pointed out in Yen-Ta Huang's answer, and also in Everett You's answer (EY16) to this related question, we can split the creation and annihilation operators into a real and an imaginary part.
* As hinted at in (Capri, 2002; pg. 448), we can generalize the Bogoliubov transform to work with complex Hamiltonians.
Here I will do a simple example with the following fermionic Hamiltonian:
$$H=\varepsilon c_1^\dagger c_1+\varepsilon c_2^\dagger c_2+\lambda i(c_1^\dagger c_2^\dagger-c_2c_1)\tag{1}$$
Method 1
We let:
$$c_j=a_j+i b_j\quad \text{for}\quad j=1,2 \tag{2}$$
where $a_j^\dagger=a_j$ and $b_j^\dagger=b_j$. As shown in EY16 for $a_j$ and $b_j$ we have the following commutation relations
$$\{a_j,a_j\}=\{b_j,b_j\}=1$$
$$\{a_1,a_2\}=\{b_1,b_2\}=\{a_i,b_j\}=0$$
Thus subbing (2) into (1) we get that (after some algebra):
$$H=2i (\varepsilon a_1b_1+\varepsilon a_2 b_2+\lambda a_1 a_2-\lambda b_1 b_2)$$
$$=2i\begin{pmatrix} a_1 &b_2 \end{pmatrix}\begin{pmatrix} \varepsilon & \lambda \\ \lambda &\varepsilon \end{pmatrix}\begin{pmatrix} b_1 \\ a_2\end{pmatrix}$$
As explained in EY16 a Bogoliubov transformation of $a_j$ and $b_j$ is an orthogonal transformation in the case of fermions. Thus if we let:
$$\begin{pmatrix} b_1 \\ a_2\end{pmatrix}=\begin{pmatrix} \cos(\theta) & \sin(\theta)\\ -\sin(\theta) & \cos(\theta)\end{pmatrix} \begin{pmatrix} e_1 \\ d_2\end{pmatrix}$$
$$\begin{pmatrix} a_1 \\ b_2\end{pmatrix}=\begin{pmatrix} \cos(\theta) & \sin(\theta)\\ -\sin(\theta) & \cos(\theta)\end{pmatrix} \begin{pmatrix} d_1 \\ e_2\end{pmatrix}$$
with the new fermionic creation and annihilation operators given by $f_j=d_j+ie_j$. With an appropriate choice of $\theta$ this will diagonalize the Hamiltonian.
Method 2
In method 2 we simply generalize the Bogoliubov transformation. Consider the transformation:
$$f_1=u_1c_1+v_1 c_2^\dagger,\qquad f_2=u_2c_2+v_2 c_1^\dagger$$
we are needing to enforce the conditions that:
$$\{f_i,f_j\}=0, \quad \{ f_i,f_j^\dagger\}=\delta_{ij}$$
If we do this we get that we need:
$$u_1v_2+u_2v_1=0\tag{3}$$
and
$$|u_j|^2+|v_j|^2=1\tag{4}$$
(4) implies that we have:
$$u_j=\cos(\theta_j) e^{i\phi_j^u}\quad v_j=\sin(\theta_j) e^{i\phi_j^v}$$
whilst with these (3) implies that:
$$\cos(\theta_1)\sin(\theta_2)=-\cos(\theta_2) \sin(\theta_1),\quad \phi_1^u+\phi_2^v=\phi_2^u+\phi_1^v$$
Putting these together the general Bogoliubov transformation of fermionic operators is:
$$e^{i\tilde \phi_1} \begin{pmatrix}e^{i\tilde \phi_2} \cos(\theta_p) & e^{i\tilde \phi_3}\sin(\theta_p)\\ -e^{-i\tilde \phi_3}\sin(\theta_p) & e^{-i\tilde \phi_2}\cos(\theta_p) \end{pmatrix}$$
The standard method of the Bololiubov transformation can then be followed with this.
For reference the general Bololiubov transformation for bosons is (according to my calcuations:
$$e^{i\tilde \phi_1} \begin{pmatrix}e^{i\tilde \phi_2} \cosh(\theta_p) & e^{i\tilde \phi_3}\sinh(\theta_p)\\ e^{-i\tilde \phi_3}\sinh(\theta_p) & e^{-i\tilde \phi_2}\cosh(\theta_p) \end{pmatrix}$$
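As a quick numerical sanity check, the fermionic matrix above should be unitary for any choice of $\theta_p$ and the phases $\tilde\phi_i$ (unitarity is what preserves the canonical anticommutation relations). A minimal sketch with arbitrary, made-up parameter values:

```python
import numpy as np

# Arbitrary (hypothetical) choices for theta_p and the three phases
theta, p1, p2, p3 = 0.7, 0.3, 1.1, -0.4

# The general fermionic Bogoliubov matrix quoted above
U = np.exp(1j * p1) * np.array([
    [ np.exp(1j * p2) * np.cos(theta),  np.exp(1j * p3) * np.sin(theta)],
    [-np.exp(-1j * p3) * np.sin(theta), np.exp(-1j * p2) * np.cos(theta)],
])

# Unitarity: U^dagger U = I, independent of the parameter values
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True
```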
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/397615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
How fast does an electron jump between orbitals? I'm wondering at what speed electrons jump from level to level. I've been told only that they emit light when doing so, and that energy needs to be input for them to occupy orbitals closer to the nucleus.
I will explain the reasoning for asking this question after I understand the logic behind the answer.
| If you look at the spectral lines emitted by electrons transiting from one energy level to another, you will see that the lines have a width. This width in principle should be intrinsic and calculable if all the possible potentials that would influence it can be included in the solution of the quantum mechanical state.
Experimentally the energy width can be transformed to a time interval using the Heisenberg uncertainty relation
$ΔΕΔt> h/2π$
So an order of magnitude for the time taken for the transition can be estimated.
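As a concrete sketch of that estimate, with a hypothetical linewidth of 10 MHz (a typical order of magnitude for the natural width of an atomic optical transition):

```python
# Order-of-magnitude estimate of a transition time from a spectral linewidth,
# using Delta_E * Delta_t ~ hbar.  The 10 MHz linewidth is an assumed,
# illustrative value.
hbar = 1.054571817e-34  # J*s
h = 6.62607015e-34      # J*s

delta_nu = 10e6                  # assumed linewidth in Hz
delta_E = h * delta_nu           # energy width in J
delta_t = hbar / delta_E         # estimated transition-time scale in s

print(f"Delta_t ~ {delta_t:.2e} s")  # ~1.6e-08 s, i.e. tens of nanoseconds
```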
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/397844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Curl and divergence I am trying to understand curl and divergence in a more intuitive manner, especially the curl. Also, is curl a surface phenomenon? If yes, how?
| A discussion on the intuitive interpretation of the curl from math SE. And a quote from Wikipedia:
If the vector field represents the flow velocity of a moving fluid, then the curl is the circulation density of the fluid.
For divergence, I'd also point you to Wikipedia:
More technically, the divergence represents the volume density of the outward flux of a vector field from an infinitesimal volume around a given point.
As an example, consider air as it is heated or cooled. The velocity of the air at each point defines a vector field. While air is heated in a region, it expands in all directions, and thus the velocity field points outward from that region. The divergence of the velocity field in that region would thus have a positive value. While the air is cooled and thus contracting, the divergence of the velocity has a negative value.
I know, it's a lot of quotes and links, but there's nothing new under the sun ;)
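The heated-air picture can also be checked numerically. For a purely outward velocity field $\vec v = \alpha\,(x, y, z)$ the divergence is the constant $3\alpha > 0$; a central-difference sketch (the field and all numbers are made up for illustration):

```python
import numpy as np

alpha = 0.5
h = 1e-5  # finite-difference step

def v(x, y, z):
    # Outward velocity field of expanding (heated) air
    return np.array([alpha * x, alpha * y, alpha * z])

def divergence(x, y, z):
    # Central differences for dv_x/dx + dv_y/dy + dv_z/dz
    dvx = (v(x + h, y, z)[0] - v(x - h, y, z)[0]) / (2 * h)
    dvy = (v(x, y + h, z)[1] - v(x, y - h, z)[1]) / (2 * h)
    dvz = (v(x, y, z + h)[2] - v(x, y, z - h)[2]) / (2 * h)
    return dvx + dvy + dvz

print(divergence(1.0, 2.0, -0.5))  # ≈ 1.5 = 3 * alpha, positive everywhere
```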
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/398035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why does the Hamiltonian define symmetry/invariance? In Sakurai's Modern Quantum Mechanics, in Chapter 4, he effectively states that the operation of rotation or translation, represented by a unitary operator $U$, is customarily called a symmetry operator regardless of whether the physical system itself possesses the symmetry corresponding to $U$. It's a symmetry or invariance of the system only when $U^\dagger H U=H$.
Why are symmetries defined with respect to invariance of the Hamiltonian?
| I interpret your question as "why would one want to call this the definition of symmetry?"
My imagination of symmetry: different "view" of the states such that the physics looks the same. Specifically, the states evolve in the same way.
In quantum mechanics, the unique role played by Hamiltonian is that it's the operator of time evolution: $e^{-i H t}$. Different views correspond to the change of states, e.g., rotation every state by an angle spatially. To guarantee the states of the new view is equivalent to the original states, at least the inner products between themselves should be the same. Thus, it's realized by unitary operator $U$: $|\psi \rangle \rightarrow U |\psi \rangle$. To make sure the new states in the new "view" evolve in the same way, the evolution of a transformed state $ e^{-i H t} U |\psi \rangle$ should same as the transformed evolved state $U e^{-i H t} |\psi \rangle$ (excluding time-reversal symmetry).
Because the property should be true for every state, $e^{-i H t} =U^{-1} e^{-i H t} U$. Because it should also be true for any length of time evolution, $H=U^{-1} H U$.
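The statement that transformed states evolve in the same way can be illustrated numerically (with $\hbar = 1$): pick any $H$ and unitary $U$ with $U^{-1} H U = H$ and check that evolution and transformation commute. The $3\times 3$ example below is entirely made up — $U$ rotates within a degenerate eigenspace of $H$:

```python
import numpy as np
from scipy.linalg import expm

H = np.diag([0.0, 1.0, 1.0])   # Hamiltonian with a two-fold degeneracy
theta = 0.8
U = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])  # rotation in the degenerate subspace

t = 2.5
evolve = expm(-1j * H * t)     # time-evolution operator e^{-iHt}

psi = np.array([0.3, 0.5, 0.2]) / np.sqrt(0.38)  # arbitrary normalized state

# Evolving then transforming equals transforming then evolving
print(np.allclose(evolve @ (U @ psi), U @ (evolve @ psi)))  # True
```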
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/398144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Is it possible to harvest the energy from the movements of a satellite in orbit? I was thinking about how energy is harvested on Earth from movements of certain forces like wind and ocean currents. Could similar principles be applied in space?
Satellites are virtually in perpetual motion when orbiting the Earth. Is there kinetic energy that can be extracted from this orbital motion and harvested for use on Earth?
| What knzhou and nicael said is absolutely true: you cannot extract from the satellite itself more energy than you put into it when you launched it into orbit.
Maybe you're interested in knowing how that energy could be extracted, and I think that a space tether could be a proposal.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/398397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
Is specific radioactive activity constant? We have been learning a lot on the topic, and my professor introduced a couple of formulas that can help me evaluate specific activity:
$$a = \frac{A}{m};$$
$$a=\frac{\lambda}{m}N_0e^{-\lambda t}$$
Knowing this, it is obvious that $a$ is not constant and that it changes exponentially, just like radioactive activity $A$. However, this formula is in use as well:
$$a = \frac{\lambda N_A}{M};$$ With $M$ being the molar mass of an element
The formula can be derived by setting: $m = \frac{N}{N_A}M$
What seems illogical to me is the fact that both $\lambda$ and $M$ are constant values, which means that $a$ is constant as well... I wasn't able to find an answer online and this isn't really talked about, so I am probably wrong. Of course, I would very much appreciate an explanation. Thanks in advance!
| I think it is easier to understand if you write the variables as functions of time. Activity is defined as
$$A(t)=\lambda N(t) =\lambda \frac{m(t) N_a}{M}$$
where $N(t)$ is the number of radioactive atoms as a function of time, $\lambda$ is the decay constant, $m(t)$ is the mass as a function of time, $N_a$ is Avogadro's number, and $M$ is the molar mass.
Specific activity is defined as:
$$a=\frac{A(t)}{m(t)}$$
Since both $A(t)$ and $m(t)$ decay at the same rate, $a$ is constant in time.
It might be easier to visualize if you write the equation using the solution for a single radioactive isotope with a constant decay rate and no source term:
$$a=\frac{A(t)}{m(t)}=\frac{A_0 e^{-\lambda t}}{m_0 e^{-\lambda t}}$$
Note, however, that the same result still holds if the time dependence is more complicated.
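A numerical illustration that $a = A(t)/m(t)$ stays constant while both $A$ and $m$ decay. The isotope numbers below (roughly Co-60-like) are illustrative assumptions:

```python
import numpy as np

N_a = 6.02214076e23             # Avogadro's number, 1/mol
M = 60e-3                       # molar mass, kg/mol
half_life = 5.27 * 365.25 * 24 * 3600   # s
lam = np.log(2) / half_life     # decay constant, 1/s

t = np.linspace(0, 3 * half_life, 5)
m0 = 1e-3                       # initial mass, kg
m = m0 * np.exp(-lam * t)       # remaining mass at each time
A = lam * m * N_a / M           # activity at each time, Bq

a = A / m                       # specific activity, Bq/kg
print(np.allclose(a, lam * N_a / M))  # True: same value at every t
```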
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/398529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If momentum and kinetic energy are related, how loss in energy doesn't cause loss in momentum? Kinetic energy and momentum are related to each other by the following equation:
$$K.E.=\frac{1}{2}\frac{\textbf{P}^2}{m} $$
In inelastic collisions the momentum is conserved but the energy isn't. How can this be correct in the view of previous equation?
Moreover, if I want to rewrite the previous equation in term of change of momentum and change of kinetic energy, is the following true or not?
$$\Delta K.E.=\frac{1}{2}\frac{(\Delta\textbf{P})^2}{m} $$
If that is wrong, what is the true form?
| The momentum of one system is conserved if no force acts on it.
Your problem is that you look at one system alone, and when force acts on it, you think that momentum isn't conserved.
If, for example, a clay ball A of mass $m$ and velocity $v$ collides with an identical ball B moving in the opposite direction, they both stop by symmetry.
If you look at system A alone, its KE went from $\frac{mv^2}{2}$ to $0$, and so did its momentum. But that's to be expected, since an outside force acts on it.
Look at systems A and B together. Momentum is a vector, so the total momentum of A and B together is 0. Since KE is a scalar, their total KE is $mv^2$. When the 2 balls collide, the momentum is unchanged and the KE decreases to 0.
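The clay-ball example in numbers (illustrative values): the total momentum is zero before and after the collision, while the kinetic energy drops to zero:

```python
# Two identical clay balls, equal speeds, opposite directions: after the
# perfectly inelastic collision both stop.  Momentum (a vector) is conserved;
# kinetic energy (a scalar) is not.  m and v are made-up values.
m, v = 2.0, 3.0

p_before = m * v + m * (-v)          # 0.0
ke_before = 0.5 * m * v**2 * 2       # m * v**2 = 18.0

p_after = 0.0                        # both balls at rest
ke_after = 0.0

print(p_before == p_after)           # True: momentum conserved
print(ke_before - ke_after)          # 18.0 J lost to heat/deformation
```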
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/398685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If I have an electrical point dipole inside a grounded sphericall shell, what is the electric potential outside of the sphere? In particular, I am confused about how the distribution of charge will take place, and how it will affect the outside.
It seems to me that charges induced by the dipole (positive and negative on both extremes of the sphere) will also produce an electrical field on the outside. Is this correct even for a grounded sphere?
| Starting with an ungrounded spherical shell, we can determine the electrical field outside the shell using Gauss's law:
The net electric flux through any hypothetical closed surface is equal
to 1 / ε times the net electric charge within that closed surface.
Source: Wikipedia.
Since the dipole, as a whole, is neutral, we can state that the total flux of the field coming out of the sphere is going to be zero. Moreover, in the absence of other charges in the vicinity, we can state that the field everywhere around the sphere and therefore the potential on the sphere will be zero.
This is because the direction of the field on the surface of the sphere would determine the sign of its potential, but, since the potential has to be the same everywhere on the sphere, the sign of the field would have to be the same as well. But if the sign of the field is the same everywhere and the total flux of the field through the sphere is zero, we have to conclude that the field everywhere has to be zero.
This state is achieved by the redistribution of the charges on the inside surface of the sphere in such a way that they cancel the effect of the dipole charges.
Since the potential on the sphere is zero to start with, grounding it would not change anything. If the net charge inside the sphere was not zero to start with and the sphere had some initial potential and field, grounding it would make its potential zero and kill the field.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/398842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
On the Robertson uncertainty relation when $\big|\dfrac{1}{2} \langle\{A,B\}\rangle - \langle A \rangle \langle B \rangle \big|^2 \neq 0$ The Robertson uncertainty relation is
$\sigma^2_A \sigma^2_B \geq \big|\dfrac{1}{2} \langle\{A,B\}\rangle - \langle A \rangle \langle B \rangle \big|^2 + \big| \dfrac{1}{2i} \langle [A,B] \rangle \big|^2.$
Where $\sigma^2_X$ is the variance of the operator $X$ and $\{A,B\}$, $[A,B]$ are the anti-commutator and the commutator of the Hermitian operators $A$ and $B$, respectively.
The uncertainty relation is more commonly presented in the form
$\sigma^2_A \sigma^2_B \geq \big| \dfrac{1}{2i} \langle [A,B] \rangle \big|^2.$
There are common physical examples which satisfy this, e.g. $\sigma_x \sigma_p \geq \dfrac{\hbar}{2}$ from $[x,p]=i\hbar$, but these examples have $\big|\dfrac{1}{2} \langle\{A,B\}\rangle - \langle A \rangle \langle B \rangle \big|^2=0$.
I am trying to find a quantum system where the term $\big|\dfrac{1}{2} \langle\{A,B\}\rangle - \langle A \rangle \langle B \rangle \big|^2 \neq 0$, so that the lowest limit of the product of the variances of $A$ and $B$ have a dependence on the latter. So, to answer my question, it is necessary to give a possible physical system where $\big|\dfrac{1}{2} \langle\{A,B\}\rangle - \langle A \rangle \langle B \rangle \big|^2 \neq 0$ for $A$ and $B$ Hermitian.
Any help or ideas are welcome.
| You might profit from calculating a few expectation values for, e.g. the oscillator, for which $a^\dagger |n\rangle= \sqrt{n+1} | n+1\rangle$.
Take $A=a$ and $B=a^\dagger$, so that
$$
[a,a^\dagger ]=1, \qquad \{ a,a^\dagger \} =a a^\dagger + a^\dagger a=1+2N.
$$
Look at the first excited state, $|1\rangle$, so $a|1\rangle=|0\rangle$, so your expectation values are
$$
\langle 1| a|1\rangle=\langle 1|a^\dagger| 1\rangle=0,\\
\langle 1| \{ a,a^\dagger \} |1\rangle= 3,\\
\langle 1| [a,a^\dagger ]|1\rangle= 1,
$$
so that, for this state
$$
\sigma_a^2 \sigma^2_{a^\dagger}\geq 9/4 +1/4= 10/4.
$$
The anticommutator, of course, is not a constant, unlike the commutator, nor should you expect it to be.
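The quoted expectation values are easy to verify with truncated oscillator matrices (a cutoff of 5 states is plenty for $|1\rangle$); this is just a numerical sketch of the calculation above:

```python
import numpy as np

dim = 5
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # annihilation operator, a|n> = sqrt(n)|n-1>
ad = a.conj().T                                 # creation operator

n1 = np.zeros(dim)
n1[1] = 1.0                                     # the state |1>

anti = a @ ad + ad @ a                          # {a, a^dagger} = 1 + 2N
comm = a @ ad - ad @ a                          # [a, a^dagger] = 1 (below the cutoff)

print(n1 @ a @ n1, n1 @ ad @ n1)                # 0.0 0.0
print(n1 @ anti @ n1)                           # 3.0
print(n1 @ comm @ n1)                           # 1.0
```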
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/399197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
According to Conservation of Momentum, a gun in a sealed box should not have recoil? According to the law of Conservation of Momentum, there is no way to increase the momentum of a system, except by momentum transfer from interactions with the external. If I fire a rifle while sitting on a go kart, the go kart is going to go backwards but the bullet goes forwards, conserving the momentum.
Now lets say I construct a long 1 inch thick steel box (a few meters long), and I position the gun's butt against the back of it, and fire the gun electronically. Would we not get the box flying backwards still (at least until the bullet gets lodged in the front of the box? Even if the bullet burying in the metal at the end of the box causes another force in the box at the opposite direction of the initial kick, haven't we momentarily broken the conservation of momentum?
| If the gun is somehow not anchored to the inside of the box when it is fired (say the hook that holds it releases it at the right instant), the gun and bullet will travel in opposite directions with opposite momenta. They may strike the box at different times, but the momentum of gun+box+bullet will always be zero. After both the gun and bullet have hit the sides of the box, and everything has come to rest, the center of mass of gun+box+bullet will be where it was in the beginning.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/399642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
How is the centripetal force of a car when turning distributed over the wheels?
The centripetal force can easily be calculated as: $F = (M*v^2)/R = (M*v^2)*sin(\delta)/L$. But how is this force distributed over the (front and rear) wheels? My initial thought was to just divide it by 4 for each wheel, but when you turn your front wheels 90 degrees, there will be no force over the rear wheels. So when simply dividing by 4 is wrong, then how is the distribution in reality?
Is it also safe to assume the forces on the front wheels are equal to each other, and also the same for the rear wheels?
| Try considering this.
Since the car is driven by one engine, let us assume for simplicity that all 4 wheels have the same speed at all times.
Assume also that the weight of the car is divided fairly uniformly over the 4 wheels.
Now that we have made the $m v^2$ part of the equation the same for all the wheels, let us move on to the radius-of-curvature part.
The radius of curvature will be different for the wheels on the inside and on the outside of the turn (in the picture provided, with the car turning clockwise, the left wheels are on the inside and the right wheels on the outside).
For the inner wheels $R$ is smaller, so they take more of the centripetal force and require more friction.
For the outer wheels $R$ is larger, so they take less of the centripetal force and require less friction.
Thus I think the friction force should be divided not on the basis of front and back, but on the basis of inner and outer.
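A numerical sketch of the inner-versus-outer point, with entirely made-up car numbers (even weight split, same speed for all wheels):

```python
# For the same mass share and speed, the inner wheels follow a smaller radius
# and so need a larger centripetal force.  All values are illustrative.
m_per_wheel = 300.0   # kg carried by each wheel (1200 kg car, even split)
v = 10.0              # m/s
R_center = 20.0       # turn radius of the car's centerline, m
track = 1.6           # distance between left and right wheels, m

R_inner = R_center - track / 2
R_outer = R_center + track / 2

F_inner = m_per_wheel * v**2 / R_inner
F_outer = m_per_wheel * v**2 / R_outer

print(round(F_inner, 1), round(F_outer, 1))  # inner > outer
```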
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/399761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Preferred fluid flow As I’ve read in the book “Fluid Dynamics” by Yunus Cengel, The Pressure Drag decreases and the Skin Friction Drag increases when fluid flow over body transitions from laminar to turbulent thus, resulting in overall decrease in Drag Coefficient.
The pressure drag is reduced during the transition, implying that the normal pressure force on the body is reduced; and as the lift force is mostly provided by the normal pressure force, this implies a decrease in lift with the transition from laminar to turbulent flow.
So which flow is preferred in case of aeroplanes?
| Laminar flow is clearly preferred, but a turbulent one has its uses, too.
Not only will a laminar boundary layer result in much less friction drag (the velocity gradient at the wall is much less steep than with turbulent flow), but for the same reason it will extract much less energy from the flow, so its ability to endure the pressure rise later is better preserved. However, once the flow encounters this pressure rise, the boundary layer becomes unstable and will transition into a turbulent one if that has not already happened. Because the energy transfer from the outer flow to the near-wall layers is much greater in a turbulent boundary layer, it can sustain much steeper pressure rises, allowing a higher angle of attack to be reached without separation.
Much depends on the speed and physical size of the craft: Even though a negative pressure gradient is stabilizing the boundary layer, large and fast airplanes have rarely any laminar flow left while slow and small gliders sport it on most of their wing's lower and half of its upper side.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/400399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are random errors necessarily Gaussian? I have seen random errors being defined as those which average to 0 as the number of measurements goes to infinity, and that the error is equally likely to be positive or negative. This only requires a symmetric probability distribution about zero. However typing this question into Google, I did not find a single source that suggested random errors could be anything other than gaussian. Why must random errors be gaussian?
| There are many examples of physical phenomena that seem to be governed by non-Gaussian statistics. For instance, the Levy distribution arises in the multiple scattering of light in turbid media, where the photon path length follows this distribution.
I think any time you have rare, but important events, you will see non-Gaussian statistics, such as with the distribution of sunspots, the time between geomagnetic reversals, etc. The Gaussian is nice since it leads to relatively easy analytic calculations (in addition to the reasons already given). In dynamical systems the level spacings of energy are governed (universally) by Poisson statistics for the case of nonchaotic systems, vs. Wigner-type statistics for chaotic systems.
The whole field of Levy flights is huge. Especially in laser cooling. This book is superb: Lévy Statistics and Laser Cooling: How Rare Events Bring Atoms to Rest
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/400824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "53",
"answer_count": 7,
"answer_id": 0
} |
Can there be general relativity without special relativity? Can General Relativity be correct if Special Relativity is incorrect?
| No, there cannot be a general theory of relativity without its special part. What is relative in the general theory? Lengths and time intervals. And where do relative space and time come from? From relative speed. In the general theory there is no longer a restriction to uniform relative motion: arbitrary motion is allowed, and gravity is treated as equivalent to a choice of reference frame, i.e. of coordinates.
Why is there a need for spacetime? Because physicists long wondered what provides the interaction between distant bodies. Space is not mere vacuum; it is what the general theory of relativity needs to stand on. And the Big Bang provided the force necessary for motion; expansion is part of it.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/400910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 8,
"answer_id": 7
} |
Is momentum perfectly conserved at the particle level given the Heisenberg uncertainty principle? Discussions of conservation of momentum frequently use the metaphor of two billiard balls colliding. My impression is that this is not valid at the quantum scale - an illustration of the particles' trajectories should show the outgoing vectors with some uncertainty. Perhaps the total energy could still be conserved if the two particles were entangled in such a way that the imprecision of one particle's trajectory was balanced by the second particle's trajectory. Even if that was the case, I am not clear that the net vector would be as expected, which would therefore mean the momentum was not conserved.
An alternate way of viewing the problem: at the moment when two particles collide, the position is known very precisely (since they had to hit each other at the same place and time). Since momentum is complementary to position, this means the momentum has maximum uncertainty at that instant. While the momentum in a single collision may be perfectly conserved, perhaps the momentum being conserved is somewhat probabilistic such that over billions of interactions of billions of molecules (as the original force propagates) the original net momentum is not conserved.
I tried to look for answers to this question and here are some relevant ones. They seem to conclude that uncertainty does apply to single particles.
Does the Heisenberg uncertainty principle apply to the free particle?
Uncertainty principle: for an individual particle?
My question is prompted in part by a "tongue in cheek" video which shows a propeller in a closed box appearing to cause movement. The box is flimsy and the experiment is not meant to be definitive but made me wonder.
| If you initially knew the incident momenta exactly, their sum will still be conserved, but the difference of the outgoing momenta will almost certainly (it depends on the kind of interaction, but with probability 1) acquire some uncertainty that was not present before the collision.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/401232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Regarding absorption of light I have a question regarding absorption of light. When looking at the absorption spectrum of, for example, chlorophyll a, two absorption peaks can be seen in the visible part of the spectrum (one at around 425 nm and one at 680 nm). I have been told that if a photon has sufficient energy to excite an electron to a higher energy state (the LUMO), the material absorbs the photon. This makes me a bit confused, since if chlorophyll a can absorb light at 680 nm (red), why can't it absorb light at every other wavelength with a higher photon energy? Surely all the photons at shorter wavelengths (higher energy) than 680 nm must have sufficient energy to excite the electron to the higher energy state if photons at 680 nm can do it.
Clearly I am missing something here.
Thank you in advance.
| In order for the excitation to occur there must be resonance. The energy of the absorbed photon must match the energy of the transition. In this case the transition is between bound states with well defined energies.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/401643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How can a body have two axes of rotation at the same time? I am not concerned with a body rotating about two simultaneous axes, but with how we choose the axis. While studying pure rolling I observed that there are two axes of rotation: one passing through the centre of mass and the other through the point of contact with the ground. My concern is: how can there be an axis of rotation through the point of contact, when I can see very well that the body only rotates about the axis through the centre of mass?
| It isn't so. You can't have two axes at once. You can combine rotation along two axes to give a effective rotation along a single axis and the other way round.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/401773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Stern-Gerlach experiment with a magnetic field inbetween An experiment is set up so that a beam of spin-1/2 is prepared for $S_{z} = \hbar/2$, it then passes a constant magnetic field $\textbf{B} = B_{0}\textbf{e}_{x}$ with the velcity $v_{0}$ for a distance of $L$ before it passes an aditional Stern-Gerlach apparatus in which only beams in $S_{z} = -\hbar/2$ can pass.
I've made a quick sketch of the installation. Now I'm wondering if my thought process is correct.
We're searching for the percentage of the initial beam that passes through the last apparatus.
The first apparatus blocks 50% of the incoming beam. Inside the magnetic field, I get
$$
\textbf{H} = -\gamma B_{0}\textbf{S}_{x}
$$
Now through the Schrödinger equation I get
$$
i\hbar \frac{\partial \chi}{\partial t} = \textbf{H} \chi
$$
$$
\chi(t) = \begin{bmatrix}
a e^{i\gamma B_{0}t/2} \\
b e^{i\gamma B_{0}t/2} \\
\end{bmatrix}
$$
Intuitively $\chi(0) = \chi_{+}^{(z)}$ since that's what we get after we pass the first apparatus, but this becomes a problem since the probability of getting a spin down beam after the magnetic field becomes 0.
$$
\chi(t) =\begin{bmatrix}
e^{i\gamma B_{0}t/2} \\
0 \\
\end{bmatrix}
$$
$$
c_{-}^{(z)} = \chi_{-}^{(z)}\chi(L/v_{0}) =[0 \:\: 1]\begin{bmatrix}
e^{i\gamma B_{0}(L/v_{0})/2} \\
0 \\
\end{bmatrix}
= 0 \implies P = |c_{-}^{(z)}|^{2} = 0
$$
I'm quite certain that there are errors in my calculations since I'm unfamiliar with this field and would find it very helpful if you could point those out for me.
| As pointed out in another answer, your Hamiltonian is wrong. In the $z$-basis, the representation for the $x$-component of a spin $1/2$ can be written as:
$$
S_x = \frac{\hbar}{2}
\begin{bmatrix}
0 &1 \\
1 &0
\end{bmatrix}
$$
This leads to the following Schrödinger equation:
$$
\left\lbrace
\begin{matrix}
\dot{a} = i \frac{\gamma B_0 }{2} b \\
\dot{b} = i \frac{\gamma B_0}{2} a
\end{matrix}
\right. ,
$$
where $\chi(t) =
\begin{bmatrix}
a(t) \\
b(t)
\end{bmatrix}$.
By integrating this equation from $t=0$ to $T$, you should be able to answer what happens to the spin when it comes to the second Stern-Gerlach apparatus.
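A numerical sketch of that integration (with $\hbar = 1$), using a matrix exponential and comparing with the analytic result $P_{-z}(T) = \sin^2(\gamma B_0 T/2)$; the values of $\gamma B_0$ and $T = L/v_0$ are arbitrary:

```python
import numpy as np
from scipy.linalg import expm

gB0 = 1.3          # gamma * B0, arbitrary value
T = 0.9            # flight time L / v0, arbitrary value

Sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
H = -gB0 * Sx

chi0 = np.array([1.0, 0.0])            # spin-up along z after the first SG apparatus
chiT = expm(-1j * H * T) @ chi0        # state when it reaches the second apparatus

P_down = abs(chiT[1])**2               # probability of passing the S_z = -hbar/2 filter
print(np.isclose(P_down, np.sin(gB0 * T / 2)**2))  # True
```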
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/402025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What is the relationship between the integrability of a quantum many-body system and thermalization? If a quantum many-body system is integrable, does it imply the system would always thermalized or many-body localized?
| *
*First of all, I only discuss closed quantum system here.
*Usually integrable systems do not contain disorders (but 1D Kondo model has impurity while being integrable), hence generally not many-body localised.
*Integrable systems do not thermalise in a conventional way (I mean it does not thermalise to a Gibbs ensemble). Be careful about the definition of thermalisation here. Because for any closed quantum system, the dynamics should be unitary, i.e. if one starts with a pure state, it will stay as a pure state. But "thermalisation" in this context means the expectation value of a local operator can be expressed as statistical expectation value of a Gibbs ensemble. (Tracing out the rest of the system, this is possible, similar to what happens to entanglement entropy.)
*Integrable systems will thermalise into a "generalised Gibbs ensemble" (GGE) due to the existence of (at least) extensive many local/quasi-local conserved charges. This is well understood for an integrable system relaxing after a quantum quench. See review such as 1604.03990. A complete description of the GGE in integrable systems is explained here:1603.00440.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/402394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How are real particles created? The textbooks about quantum field theory I have seen so far say
that all talk in popular science literature
about particles being created spontaneously out of
vacuum is wrong. Instead, according to QFT those virtual particles are
unobservable and are just
a mathematical picture of the perturbation expansion of the propagator.
What I have been wondering is, how did the real particles, which are observable, get created? How does QFT describe pair production, in particular starting with vacuum and ending with a real, on-shell particle-antiparticle pair?
Can anybody explain this to me and point me to some textbooks
or articles elaborating on this question (no popular science, please)?
| * Leptons (such as the electron) and quarks, which build up matter, were created by pair creation.
* That means a matter-antimatter pair can be created out of vacuum (and can annihilate back into vacuum). This pair creation (and annihilation) goes on inside every neutron and proton all the time, because neutrons and protons are made up not just of valence quarks but of a sea of quark-antiquark pairs; the net content of that sea is the 3 valence quarks.
* Now, why do we see more matter than antimatter? That is the baryon asymmetry.
* As for your point about virtual particles: virtual particles (whose mass is off shell) are actually a mathematical way to describe the forces (EM, strong, weak, gravity) that act between particles.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/402612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Nuclear Physics Modeling Software I have a nuclear reactor design I would like to model. I would like to show the individual atoms and how they interact with each other in the reactor (specifically, I would like to model decay modes, interactions with photons). I was wondering if there was any software which would help me model this in 3D.
This reactor design is one of my own, so I know all about what happens inside of the reactor. I have used Geant4 and similar software before, but I would just like a simple graphical interface in which I could model nuclear systems.
| A "general" reactor solver and GUI doesn't really exist, especially one where you want to show individual particles. (I'm not sure how you would show individual particles; there are approximately $10^{10}$ to $10^{20}$ particles in a reactor system.)
If you have a new reactor concept that you want to model, you would normally break it down into the following components: geometry, materials, cross sections, particle transport, depletion, thermal-hydraulic modeling and feedback. You would then apply applicable physics to each part. Different reactor concepts require different physics. For example, you wouldn't model a high-power molten salt reactor with the same tools you model a barely critical graphite pile. The physics are different.
If you can provide more information about your concept (specifically the geometry, power, coolant, and materials), we can try to give a better answer.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/402895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
If I pointed a laser directly at Sagittarius A* from Earth, how likely is it to reach the event horizon? Given the extreme low-density of space, is it likely to reach the event horizon without interference from other matter?
| Sagittarius A* is hidden behind dust clouds that block all visible light. The only reason that we can observe it is that we use infra-red wavelengths that can penetrate the dust clouds. So if you shone a visible laser at Sagittarius A* there is absolutely no chance of it reaching the event horizon.
On the other hand, if you use an infra-red laser with a wavelength that can penetrate the dust, the laser will almost certainly reach Sagittarius A*, or at least reach its accretion disk. Stars may seem big when they're close to you, e.g. the Sun, but compared to the average distances between stars they are effectively just points. The chance of your laser hitting a star and being blocked is very small.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/403001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Resistance and resistivity: which one is the intrinsic and which is the geometric property? Why? The electrical resistance $R$ and electrical resistivity $\rho$ of a metal wire are related by $$\rho=\frac{RA}{l}$$ where $l$ is the length and $A$ is the cross-sectional area of the wire. One could also have written $$R=\frac{\rho l}{A}.$$ From the first relation, it implies that resistivity is a geometric property of the conductor while the second relation implies that resistance is a geometric property. However, I know that resistance is a geometric property while resistivity is an intrinsic property. See here. But it's not clear to me why.
| Let me remark, that from microscopic point of view it is more common to talk about the conductance and conductivity, which are inverse to the resistance and resistivity. Thus, I might use below these terms interchangeably.
Resistivity is a property of a material
Within classical electrodynamics (i.e., when averaging over a macroscopic volume is implied), resistivity is determined by intrinsic factors, such as the properties of the material and the temperature. Resistivity can be expressed in terms of the underlying physical processes, such as collisions of electrons with impurities and phonons, electron-electron scattering, etc. The Drude formula famously expresses resistivity/conductivity in terms of the scattering time resulting from all these processes.
As long as we can ignore boundary effects (i.e., the material is macroscopic), none of these depends on the size of the conductor. The total current flowing through the conductor, however, does depend on its geometric properties:
*
*the dependence on the cross-sectional area allows more current to pass through - the analogy with a wider pipe is nearly literal here
*the longer conductor means that the electrons experience more scattering events while travelling from one end to the other.
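The two geometric dependences above combine into the familiar formula $R=\rho l/A$. A minimal numerical sketch (the copper resistivity is a standard room-temperature textbook figure; the wire dimensions are illustrative):

```python
# Resistance of a wire from intrinsic resistivity and geometry: R = rho * l / A
rho_copper = 1.68e-8  # ohm·m, resistivity of copper at room temperature (intrinsic)
length = 2.0          # m, wire length (geometric)
area = 1.0e-6         # m^2, cross-sectional area (geometric)

R = rho_copper * length / area
# Doubling the length doubles R; doubling the area halves R,
# while rho stays fixed for the same material at the same temperature.
assert abs(rho_copper * (2 * length) / area - 2 * R) < 1e-15
assert abs(rho_copper * length / (2 * area) - R / 2) < 1e-15
print(f"R = {R:.3e} ohm")
```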
Resistance and conductance on microscopic scale
On the microscopic scale, e.g., when dealing with nanostructures, one often cannot neglect the fact that the size of the conductor is comparable to the mean free path of the electrons. In this case the simple formulas relating resistance and resistivity no longer apply, and one often has to resort to discussing global quantities such as conductance and resistance. Among the numerous associated effects are ballistic conductance, Anderson localization, weak localization, the quantum Hall effect, etc.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/403100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
What is the physical meaning of the third invariant of the strain deviatoric? In continuum mechanics of materials with zero volumetric change, the material condition can be expressed by the strain deviatoric tensor instead of the strain tensor itself. To express the plasticity of the materials, the plasticity surface is constructed from the second and third strain invariants, i.e.,
$I_2 = \sqrt{-\frac{1}{2}\text{tr}(\varepsilon_{dev}^2) }$,
$I_3 = \det(\varepsilon_{dev})$.
It is obvious that the second invariant is not able to describe the tension-compression asymmetry of the material. Therefore, the third invariant is also included in the plasticity surface. Now the question is why the third invariant can express the tension-compression asymmetry. I mean, how the determinant of the strain deviatoric determines the tensile or compressive state of the material.
Thanks in advance
| For a general $3\times 3$ matrix $\mathbf{A}$, you have:
$$I_3 = \frac{1}{3!}[\mbox{tr}(\mathbf{A})^3 -
3\mbox{tr}(\mathbf{A}^2)\mbox{tr}(\mathbf{A}) + 2\mbox{tr}(\mathbf{A}^3)]$$
If you have $\text{tr}(\mathbf{A}) = 0$ (this is the case for $\mathbf{A} = \boldsymbol{\varepsilon}_{dev}$), then you get:
$$I_3 = \frac{1}{3} \text{tr}(\mathbf{A}^3)$$
So, for deviatoric strain tensor, you have:
$$I_3(\varepsilon_{dev}) =: J_3 = \frac{1}{3} \text{tr}(\varepsilon_{dev}^3) = \det(\varepsilon_{dev})$$
Thus, you need this additional invariant to distinguish tensile from compressive states: being cubic, $J_3$ changes sign under the transformation $\varepsilon_{ij}\mapsto -\varepsilon_{ij}$, whereas $I_2(\varepsilon_{dev})$ does not, since:
$$I_2(\varepsilon_{dev}) =: J_2 = \frac{1}{2} \text{tr}(\varepsilon_{dev}^2)$$
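As a sanity check, the identity $\det(\varepsilon_{dev}) = \frac{1}{3}\mathrm{tr}(\varepsilon_{dev}^3)$ for a traceless tensor, and the opposite sign behavior of $J_2$ and $J_3$, can be verified numerically. A sketch using NumPy with an arbitrary symmetric strain tensor (the numerical values are illustrative):

```python
import numpy as np

# Arbitrary symmetric strain tensor
eps = np.array([[0.02, 0.01, 0.00],
                [0.01, -0.01, 0.03],
                [0.00, 0.03, 0.05]])
# Deviatoric part: subtract the volumetric (trace) part
eps_dev = eps - np.trace(eps) / 3.0 * np.eye(3)

J2 = 0.5 * np.trace(eps_dev @ eps_dev)
J3 = np.linalg.det(eps_dev)

# det = (1/3) tr(A^3) holds only because tr(eps_dev) = 0
assert np.isclose(J3, np.trace(eps_dev @ eps_dev @ eps_dev) / 3.0)
# Under eps -> -eps: J2 is even (blind to tension vs compression), J3 is odd
assert np.isclose(0.5 * np.trace((-eps_dev) @ (-eps_dev)), J2)
assert np.isclose(np.linalg.det(-eps_dev), -J3)
```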
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/403220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Does increasing tension on a string reduce or increase the harmonic wavelength for a standing wave? I had thought that increasing tension on a string increases the frequency and thus decreases the wavelength. My book says otherwise. Which is correct?
| Possibly you are being confused by the $c=f\lambda $ formula. This applies twice, once in the string (where $\lambda$ is fixed and the tension affects $c$) and once in air (where $c$ is fixed and $\lambda$ changes). The frequency is the same in both.
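The point can be made concrete with $c=\sqrt{T/\mu}$ on the string and $c=f\lambda$ in both media. A sketch with illustrative values ($\mu$ is the string's linear mass density; the numbers are made up):

```python
import math

mu = 1.0e-3         # kg/m, linear mass density of the string (illustrative)
L = 0.65            # m, string length; fundamental wavelength on the string is 2L
lam_string = 2 * L  # fixed by the geometry, independent of tension
c_air = 343.0       # m/s, speed of sound in air

for T in (50.0, 100.0):                # N, string tension
    c_string = math.sqrt(T / mu)       # wave speed on the string rises with tension
    f = c_string / lam_string          # so the frequency rises with tension...
    lam_air = c_air / f                # ...and the wavelength in air shrinks
    print(f"T={T:5.1f} N  f={f:7.1f} Hz  wavelength in air={lam_air:.3f} m")
```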
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/403336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Physical processes taking place inside Germanium detectors Reading about differences between Silicon detectors and Germanium detectors, I decided to learn a bit more about the latter, since I've always used Silicon detectors in all the experiments I worked for. While reading about them, I found that Germanium detectors are usually calibrated using $^{60}$Co sources. These sources emit gammas in the ~MeV energy range, but I was wondering what is the process taking place in this measurement, is it photoelectric effect, Compton scattering or pair production? I ask this question because I do not have in mind the energy range of these processes and I know that, for example for gamma-ray satellites like Fermi, that uses a silicon tracker, the dominant effect is pair-production. On the other hand, I also know that gamma-ray satellites of lower energy like Comptel use the Compton effect. There are other gamma-ray satellites proposed as eAstrogam that use both effects. So I was wondering, at MeV energies, of those that I have mention, which one (or which ones) is negligible to measure the total energy of the $^{60}$Co emission?
| The gamma photons produce highly energetic electrons in Ge, which by ionization generate a large number of electron-hole pairs, proportional to the energy of the electron, in the depletion or intrinsic zone of the Ge pn-junction. This is similar to an ionization gas chamber detector. This charge generation leads to a corresponding current pulse in the outer circuit for the detection of the radiation. All three named processes can produce energetic electrons in Ge. For gamma energy spectroscopy, the photoelectric effect is preferred because it generates electrons with the same energy as the absorbed photon.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/403485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |