Is energy $E$ in the Schrödinger equation an observable? Can $E$ be measured? Take this quantum approach to estimating the mean energy of a molecule:
$$\langle\psi|H|\psi\rangle=\overline E$$
Question:
Is $E$ an observable? How can we compare it to an experimental value, i.e. how can it be measured experimentally, and what are the states involved (as energy is all about differences, there must be two states)?
Edit
This is not a question about how an observable is theoretically defined.
Any help?
| You can measure the energy of a molecule in a number of ways.
If what you want is to measure the energy difference between an excited state and the ground state, then you can drive the transition using e.m. waves of suitable frequency. You need a way to determine that the transition has happened, and a way to measure the frequency of the waves. To determine that the transition has happened, you could for example use further transitions in the molecule and ultimately look for fluorescence. This is the way really precise measurements are done in atomic and molecular physics labs. There are also clever techniques involving the motion in molecular beams.
If you want to measure the total energy including kinetic energy and internal energy, then you could drop the molecule into something cold, such as liquid helium. The amount of energy released (from kinetic energy and internal energy of the molecule) can be determined from the amount of helium that boils off. This method is not normally used, but it is the principle that counts. Instruments called bolometers do this kind of total energy measurement using a variety of ingenious strategies.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/361132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Do gravitational sources move along ‘geodesics’? Assume we have a system of say two bodies which are orbiting each other. Now assume that we wish to find an equation of the orbits of the two gravitational sources. Do they follow a ‘geodesical’ path, if we assume that the sources may or may not be singularities, which in this case may require a puncture method or so I’ve heard.
I have also read several articles which suggest that it does move ‘geodesically’ if one does not take into account the self-interaction of the space-time field. If we were to take into account the self-interaction, back-reaction and what nots, will it still move ‘geodesically’ in the metric of the entire manifold?
| First note that in the case of black holes it is not even clear what "following a geodesic of the entire manifold" would mean, as the interior of the black hole lies outside the causal past of the rest of the spacetime. Even for extended bodies (e.g. stars) it is not immediately clear what this means, as you would first have to identify a worldline for the body. (In Newtonian gravity this would be its center of mass, but a priori it is not clear what to take in GR.)
That being said, what can be shown (see https://arxiv.org/abs/1405.5077) is that for extended bodies there always exist an "effective metric" and an "effective worldline" such that the worldline is that of a test particle, with the same multipole moments as the extended body, moving through the effective spacetime. In particular, a spherically symmetric extended body will follow a geodesic of the effective spacetime.
For black holes the same statement has been proven (see https://arxiv.org/abs/1506.06245) perturbatively in the mass ratio of the objects (to any order).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/361301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to find error in trigonometric ratios? How do I find the error in the measurement of $\sin \theta$, if I am given the error in the measurement of $\theta$?
| In general, if you have a function $f$ of a single variable $x$, you can propagate the uncertainty in the following way:
$$ \delta f = \left|\frac{df}{dx}\right| \delta x$$
If you have a function $g$ of several variables $x$ and $y$ with uncorrelated uncertainties, then
$$\delta g = \sqrt{\left(\frac{\partial g}{\partial x}\cdot \delta x\right)^2 + \left(\frac{\partial g}{\partial y}\cdot \delta y\right)^2}$$
If your uncertainties are correlated ... you have more work to do ;)
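As a quick illustration, here is a minimal Python sketch (my own addition, with made-up numbers) of the single-variable rule applied to $f(\theta)=\sin\theta$; note that the angle and its uncertainty must be in radians for the derivative rule to apply.

```python
import numpy as np

# delta_f = |df/dtheta| * delta_theta for f(theta) = sin(theta).
theta = np.radians(30.0)       # measured angle, converted to radians
dtheta = np.radians(0.5)       # uncertainty of the angle, also in radians

f = np.sin(theta)
df = np.abs(np.cos(theta)) * dtheta   # |d(sin theta)/d(theta)| * delta_theta

print(f"sin(theta) = {f:.4f} +/- {df:.4f}")
```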
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/361466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Electric power and resistance dependance According to the equations,
$$P=VI =I^2R\,\text{ and voltage } V=IR$$
it seems clear that at a fixed voltage, a lower resistance gives a higher current and therefore a higher power. But what confused me is that at a fixed current, a higher resistance gives a higher voltage, which in turn leads to a higher power as well. Can anyone pull me out of this confusion?
| Notice that by fixing a constant voltage $V$, the current $I$ is inversely proportional to the resistance by Ohm's law:
$$I\propto\frac{1}{R}$$
So your first statement is right: when we make the resistance lower, the current is higher and the power $P=V^2/R$ is higher.
When you fix a constant current $I$, Ohm's law instead gives $$V\propto R$$
so a higher resistance produces a higher voltage and the power $P=I^2R$ is higher.
There is no contradiction: you can get a higher power either by fixing the voltage and lowering the resistance, or by fixing the current and raising the resistance. The two statements simply refer to different constraints.
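To make the two limits concrete, here is a tiny numerical sketch (assumed values, my own addition) comparing $P=V^2/R$ at fixed voltage with $P=I^2R$ at fixed current:

```python
# Assumed example values: V fixed in one scenario, I fixed in the other.
V, I = 10.0, 2.0   # volts, amperes

for R in (1.0, 2.0, 4.0):
    P_fixed_V = V**2 / R   # fixed voltage: power falls as R grows
    P_fixed_I = I**2 * R   # fixed current: power grows with R
    print(f"R = {R:3.1f} ohm  P(V fixed) = {P_fixed_V:6.1f} W  P(I fixed) = {P_fixed_I:6.1f} W")
```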
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/361617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Doppler shift and speed of rotating objects in space I understand the concept of how we can use the doppler effect to know if an object is spinning, in the sense that the part of the object spinning towards us will exhibit a blueshift, and the part spinning away will exhibit a redshift.
However, how can we determine the rotation rate using doppler effect? My professor said to do so by "measuring the widths of spectral lines," but I would just like to know what I should be looking for. I would assume that the closer the spectral lines are together, the faster the object is spinning but I would like for somebody to either confirm that or help me understand what it actually means.
| If you consider a rotating body, some parts of the object will be moving towards you and some parts away. These additional velocity components give different Doppler shifts, and hence different observed frequencies of light. When the whole visible surface of the rotating object is considered, there is a continuum of different Doppler shifts, and therefore a wider spectral line. So the faster the rotation, the broader the line.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/361887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Very basic question about quantum field operators For a matrix $A$, the notation $A^\dagger$ implies the transpose of the complex conjugate of $A$ i.e., $A^\dagger=(A^*)^T$.
What does the symbol $\hat{\phi}^\dagger$ mean for a quantum operator corresponding to a classical field $\phi(x)$? Is it okay to think of $\hat{\phi}(x)$ as an infinite dimensional column vector and $\hat{\phi}^\dagger$ as a row vector with $\hat{\phi}^\dagger=(\hat{\phi}^*)^T$?
However, there are two problems that I can immediately see.
1. Operators in ordinary quantum mechanics are square matrices while (if my representation is valid) $\hat{\phi},\hat{\phi}^\dagger$ are column and row vectors.
2. For a complex scalar field $$[\hat{\phi}(t,\textbf{x}),\hat{\phi}^\dagger(t,\textbf{y})]=0\implies \hat{\phi}(t,\textbf{x})\hat{\phi}^\dagger(t,\textbf{y})=\hat{\phi}^\dagger(t,\textbf{y})\hat{\phi}(t,\textbf{x}).$$ If my representation is valid, this equation becomes meaningless, because on one side we would have a number and on the other side a matrix.
What is the correct way to visualize a quantum field and interpret the commutation relation?
| That for a matrix the dagger denotes the transpose conjugate is really just a special (namely the finite-dimensional) case of the general definition of the Hermitian adjoint:
For any operator $A$ on a Hilbert space $H$, the adjoint $A^\dagger$ is the operator such that
$$ \langle v, Aw\rangle = \langle A^\dagger v,w\rangle$$
for all $v,w$ in the domain of definition of $A$.
Since for any quantum field $\phi$, $\phi(x)$ is an operator (neglecting the case where we treat the field as an operator-valued distribution rather than a function, replace $\phi(x)$ with $\phi(f)$ for some test function $f$ in that case), there is no problem in applying this definition to a quantum field.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/362063",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
How to identify binary stars in $N$-body simulation? Binary stars constitute a significant portion of the stars of a globular cluster.
I would like to verify that this is true in my $N$-body simulation, but I don't know how to decide whether a star in the system is a binary.
Visually this is easy to do, as binaries are identified as two stars at very close distance orbiting about their center of mass, but I need a mathematical condition which I can then translate to code.
| You'd need to calculate the two-body orbital energy (relative kinetic energy plus gravitational potential energy) of pairs of particles in your simulation. If this energy is negative for a pair, the pair is bound and forms a binary system.
I assume you already have an effective way of calculating the potential, so this should not add much execution time, since you only need to check pairs of particles that are close enough.
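A minimal sketch of such a pairwise check (my own addition; the function name, constants and example values are placeholders, not code from the question's simulation):

```python
import numpy as np

G = 6.674e-11  # SI units; use your simulation's own unit system instead

def is_bound(m1, x1, v1, m2, x2, v2):
    """True if the two-body orbital energy (kinetic + potential) is negative."""
    r = np.linalg.norm(x1 - x2)
    v_rel = np.linalg.norm(v1 - v2)
    mu = m1 * m2 / (m1 + m2)                       # reduced mass
    energy = 0.5 * mu * v_rel**2 - G * m1 * m2 / r
    return energy < 0.0

# Example: a circular orbit (v_rel^2 = G(m1+m2)/r) is bound.
m1 = m2 = 2e30
x1, x2 = np.zeros(3), np.array([1e11, 0.0, 0.0])
v_rel = np.sqrt(G * (m1 + m2) / 1e11)
v1, v2 = np.zeros(3), np.array([0.0, v_rel, 0.0])
print(is_bound(m1, x1, v1, m2, x2, v2))            # True
```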
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/362181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to use a galaxy's redshift to measure its distance I know how we can use the spectrum emitted by a galaxy to measure whether it is redshifted/blueshifted, but out of curiosity, how can the redshift of a galaxy be used to determine its distance from us?
| This is the whole idea behind Hubble's law: more distant objects recede faster (higher redshift) than closer objects. So there's a direct correlation between distance $d$ and redshift $z$
$$
d = d(z)
$$
At small distances/redshifts
$$
z = H_0d/c
$$
So measuring $z$ gives you a direct estimate of the distance.
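A small numerical sketch of this low-redshift estimate (my own addition; $H_0 = 70\ \rm km\,s^{-1}\,Mpc^{-1}$ is an assumed value, and the linear relation only holds for $z \ll 1$):

```python
c = 299792.458   # speed of light in km/s
H0 = 70.0        # Hubble constant in km/s/Mpc (assumed)

for z in (0.001, 0.01, 0.05):
    d = c * z / H0               # distance in Mpc from Hubble's law
    print(f"z = {z:<6}  d ~ {d:8.1f} Mpc")
```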
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/362370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Work done by the piston versus work done by the surroundings Suppose a massless, frictionless piston assembly initially has a higher pressure inside than the external (atmospheric) pressure, and it is pinned so that the piston does not move. Once the pin is removed, the gas expands, pushing the piston out, until the pressure inside equals the atmospheric pressure. During the process, the work done by the gas inside the piston is
$$W_{\text{piston}}=\int_{V_1}^{V_2} P_{\text{gas}}\cdot \mathrm{d}V$$
and the work done by the surrounding is,
$$W_{\text{ext}}=\int_{V_1}^{V_2}P_{\text{ext}}\cdot \mathrm{d}V = P_{\text{ext}} \left(V_2 - V_1 \right)
\,.$$
We can pull out the external pressure from the integral because it is constant as an atmospheric pressure.
My question is: the work done by the gas is not the same as the work done on the surroundings, because $\mathrm{d}V$ is the same but ${P}_{\text{gas}}$ is greater than ${P}_{\text{ext}}$ during the process, so the work done by the gas is larger than the work done on the surroundings. Shouldn't they be the same?
| The work done differs because the piston is accelerated. In a constant-pressure expansion (internal and external pressures equal at every instant) the piston moves slowly, because the force is just enough to move it. But in your case the force from inside is sufficiently higher, so the piston moves faster (i.e. it is accelerated). I had the same doubt and am still somewhat doubtful about it.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/362568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 3
} |
White light Hawking radiation If Hawking radiation is similar in spectrum to a black body, then at what mass of the black hole would its radiation have the same peak energy as sunlight?
| The equivalent temperature of a black hole (as seen from infinity- since the blackbody radiation will be red-shifted as it moves away from the black hole) is given by $$\frac{1}{8\pi M}$$ in natural units. Or with all the constants in there: $$\frac{\hbar c^3}{8\pi k_BGM}$$
For the Sun's temperature of $5778\rm K$, this corresponds to $2\cdot 10^{19} \rm kg$. Or about 3 millionths the mass of the earth. Impossibly small for a black hole, of course.
Edit: requested in comment
The Schwarzschild radius for the black hole would be $31\rm nm$. Since it's blackbody radiation, the power should be determined by the Stefan-Boltzmann law, so the luminosity is given by: $$ L = 4\pi r^2 \sigma T^4$$
Plugging this in gives a total luminosity of around 800 nanowatts. So very tiny. To find out the lifetime, we can set up a differential equation: $$\frac{dM}{dt}=-\frac{L}{c^2}=-\frac{4\pi r^2\sigma T^4}{c^2}$$
Changing to natural units so this doesn't get super long (note that $\sigma=\frac{\pi^2}{60}$ in natural units) and substituting in $r$ and $T$: $$\frac{dM}{dt}=-\frac{\pi^3(2M)^2\left(\frac{1}{8\pi M}\right)^4}{15}=-\frac{1}{15360\pi M^2}$$
Rearranging this and integrating: $$\int_{M_0}^0M^2dM=-\int_0^t\frac{dt'}{15360\pi}$$
$$\frac{M^3}{3}=\frac{t}{15360\pi}$$ $$t=5120\pi M^3$$
Putting the constants back in: $$t=\frac{5120\pi G^2M^3}{\hbar c^4}$$
For our initial numbers, this gives roughly $2\times 10^{34} \rm\ yr$. So it comes out to be an extremely dim light bulb that will be around for an unimaginably long time.
For fun: a black hole the mass of the Empire State Building would last roughly a century and would start out with a luminosity of 3.2 petawatts, about 500 times the world's power usage, or 100 small nuclear bombs per second. And it would get brighter over that period.
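For readers who want to play with the numbers, here is a small Python sketch (my own addition) of the formulas used in this answer: the Hawking temperature, the Stefan-Boltzmann luminosity and the evaporation time for a given black-hole mass.

```python
import numpy as np

hbar, c, G, kB, sigma = 1.055e-34, 2.998e8, 6.674e-11, 1.381e-23, 5.670e-8

def hawking(M):
    T = hbar * c**3 / (8 * np.pi * kB * G * M)      # temperature seen from infinity
    r = 2 * G * M / c**2                            # Schwarzschild radius
    L = 4 * np.pi * r**2 * sigma * T**4             # blackbody luminosity
    t = 5120 * np.pi * G**2 * M**3 / (hbar * c**4)  # evaporation time
    return T, r, L, t

M = 2e19  # kg: the mass found above for a solar-temperature black hole
T, r, L, t = hawking(M)
print(f"T = {T:.0f} K, r = {r:.2e} m, L = {L:.2e} W, lifetime = {t / 3.15e7:.1e} yr")
```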
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/362721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
In Young's double slit experiment, why are the two theta values equivalent?
I've read several answers on here to similar questions, and I've also looked at several different picture interpretations to no avail. I can't wrap my head around it. I understand that under the assumption of L >> d, both rays from the slits are approximately parallel, but how are the two angles of $\theta$
equivalent? I'm assuming by alternate interior angles somehow, but I can't figure out how. Can someone draw it out for me or something? I'm at a loss.
| If you refer to the yellow-shaded triangle, yes, you're on the right path: it is purely geometric. Excuse me, I'm not going to put in much more work than a cheap Paint picture ^^
Steps:
* (Green) If the angle $\theta$ is that one, you obviously have 90 degrees minus $\theta$ left up to the slit wall.
* (Blue) We are supposing that all rays are parallel.
* (Pink) We draw a perpendicular to the rays.
* (Red) Now focus on the upper triangle. If one angle is 90º and another is 90º-$\theta$, the remaining one is necessarily $\theta$ as well.
I hope you can see it now.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/362983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Why am I getting two different results in emu and SI units? I am computing the force between two magnetic poles, each of unit pole strength (in emu), situated one centimetre apart.
In electromagnetic units:
$$F_{dyne}=\dfrac{p^2}{r_{cm}^2}=\dfrac{1^2}{1^2}=1 dyne$$
where $p$ is pole strength in emu
In SI units:
$$F_{N}=k_A \dfrac{P^2}{r_m^2}=10^{-7} \dfrac{({1.25\times 10^{-7}})^2}{10^{-4}}=1.5625 \times 10^{-17} \neq 10^{-5}N=1dyne$$
where $P$ is that same pole strength in SI units
with $P=1.25\times10^{-7}p$ see here
Now why am I getting two different results in emu and SI for the same configuration?
| I find this topic to be a quagmire.
The SI unit of magnetic pole strength is the ampere-metre with $1\,\rm Am$ equal to $10$ electromagnetic units (emu) of magnetic pole strength.
The relationship is derived here.
The conversion that you were using was for magnetic flux, whose SI unit is the weber and whose emu unit is the maxwell, with $1$ maxwell being equal to one line of force; a unit magnetic pole produces $4\pi$ lines of force.
You will note that the site that you quoted for your conversion gives this relationship between magnetic pole strength and the flux produced in maxwells.
In terms of magnetic flux $1\,\rm weber = 10^8\, \rm maxwell$
Now a unit pole produces $4\pi \,\rm maxwell$ which is $\dfrac {4\pi}{10^8} = 1.2 \times 10^{-7} \,\rm Wb$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/363110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is $\mathrm{d} \Omega = \mathrm{d} \cos \theta \mathrm{d} \phi$ and not $\mathrm{d} \Omega = -\mathrm{d} \cos \theta \mathrm{d} \phi$? The textbook I am following (Peskin&Schroeder) on QFT takes $\mathrm{d} \Omega = \mathrm{d} \cos \theta \mathrm{d} \phi$. I cannot for the life of me see that its correct. We know that $\mathrm{d}V = r^2 \sin \theta \mathrm{d}r \mathrm{d} \theta \mathrm{d} \phi$. But $\mathrm{d} \cos \theta = - \sin \theta \mathrm{d} \theta$, so $\mathrm{d}V = -r^2 \mathrm{d}r \mathrm{d} \phi \mathrm{d}\cos \theta $ only if $\mathrm{d} \Omega = -\mathrm{d}\cos \theta \mathrm{d} \phi$.
Can you see what's going wrong?
| Usually, you integrate $\theta$ from $0$ to $\pi$, but $\cos \theta$ from $-1$ to $1$ (P&S don't write the integration boundaries explicitly).
$$ \int_0^\pi \sin\theta\,\mathrm{d}\theta = \int_1^{-1} \mathrm{d}(-\cos\theta) = \int_{-1}^1 \mathrm{d}(\cos\theta) .$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/363232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Do any particles exist which are NOT entangled with another? Please accept my apologies for a neophyte question
Do we have evidence that suggests whether a subatomic particle can exist which is NOT entangled (correlated) with at least one other particle in the universe?
Are there reasons this cannot be possible?
Are there reasons this would be possible?
| A system of two spins: the state $\newcommand{\ket}[1]{\mid#1\rangle}\ket{\uparrow}\!\ket{\uparrow}$ is not entangled. In case you wonder what that notation means: the first spin up and the second spin up.
As opposed to the state $\ket{\uparrow}\ket{\downarrow}+\ket{\downarrow}\ket{\uparrow}$ which is a superposition of two states: (1) the first spin up and the second spin down, and (2) the first spin down and the second spin up. That is entangled.
Either of those two states is easily produced by modern light sources.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/363467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Validation of Work-Energy Theorem Is the work-energy theorem valid when there's an impulsive force during motion of considered body?
For eg: Consider a man jumping from some height into a swimming pool of certain depth, if we apply work energy theorem from his initial position to final position (Change in Kinetic Energy would be 0), Should we consider the impulsive force that water would provide when man hits the surface of water?
Even if this impulsive force acts only for an instant in the direction opposite to the man's motion, there is some displacement, and once he enters the water the buoyancy force takes over. Am I getting it correct?
| * You are totally correct in your reasoning. Remember, conservation of energy is always valid.
* Thus, whenever you see a change in the total energy of a system, it means energy goes into or out of that system. But you can always include more "objects" in your system and observe that the conservation of energy still holds.
* Back to your question: the moment the man hits the water, his kinetic energy decreases because his velocity decreases. The lost kinetic energy transfers into internal and kinetic energy of the water in the swimming pool.
* When the buoyancy force takes over, the potential energy of the man-swimming-pool system increases because the man is being pushed upward. But some water molecules are pushed downward at the same time, and the pressure the body of water exerts on the pool walls also increases. Of course, there are many other processes happening at that moment, but remember that conservation of energy is always true.
* When the man's velocity decreases to zero, note that the height of the water in the swimming pool has increased a bit. Also, the friction between the man and the water increases the internal energy of the system, making the water "hotter". Again, many other processes have happened during that period of time, but conservation of energy must always hold true.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/363802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Spaghettification inside a black hole? Taylor and Wheeler in "Exploring Black Holes" calculate that the spaghettification time, measured from feeling a 1g tidal difference head-to-toe to disintegration at the singularity, is a constant, a little less than one second. For small black holes (3 solar mass) this happens well outside the event horizon. But for large black holes it is inside. But how is one "spaghettified" inside the black hole. The singularity is time-like relative to both your head and feet - so it is not a different distance away and so how does the tidal force arise?
| Taylor & Wheeler's spaghettification time is valid for the case of "raindrops", a particular motion where the astronaut fell from rest far away from the black hole (as Brent Meeker commented).
As for inside the horizon, maybe it is unhelpful for you to focus on the description of Schwarzschild $t$ and $r$-coordinates swapping roles (as John Rennie commented). Understand that any astronaut anywhere measures 3 dimensions of space and 1 dimension of time, in their local vicinity (the technical term is orthonormal frame or tetrad). The spaghettification is calculated relative to the astronaut's own space and time.
Update: the details involve writing down some vectors. I'll work in Schwarzschild[-Droste] coordinates. The $r$-coordinate vector in this case is $(0,1,0,0)$, which indeed is timelike inside the horizon. Now the raindrop 4-velocity is
$$\Big(\frac{1}{1-2M/r},-\sqrt{2M/r},0,0\Big)$$
The radial direction for the raindrop is not $(0,1,0,0)$. Instead we seek a spatial vector orthogonal to the 4-velocity. This is:
$$\Big(-\frac{\sqrt{2M/r}}{1-2M/r},1,0,0\Big)$$
which is indeed spatial, so Taylor & Wheeler's calculation is well-grounded.
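As a sanity check (my own addition, not part of the original answer), one can verify symbolically that the two vectors above are orthonormal with respect to the Schwarzschild metric, using signature $(-,+,+,+)$ and $G=c=1$:

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)
f = 1 - 2*M/r
# Schwarzschild metric components (t, r, theta, phi) on the equatorial slice theta = pi/2
g = sp.diag(-f, 1/f, r**2, r**2)

u = sp.Matrix([1/f, -sp.sqrt(2*M/r), 0, 0])        # raindrop 4-velocity
e = sp.Matrix([-sp.sqrt(2*M/r)/f, 1, 0, 0])        # radial vector orthogonal to u

dot = lambda a, b: sp.simplify((a.T * g * b)[0])
print(dot(u, u), dot(e, e), dot(u, e))             # -1, 1, 0
```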
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/363934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
With a very intense light on a black object, will it reflect? I was wondering about the nature of an object's colour. I know that an object gets its colour from the absorption of visible electromagnetic radiation, reflecting all the other wavelengths. But take the case of a black object that absorbs all visible light: I know that photons will be absorbed by some molecules and then re-emitted with less energy, because some of the energy has been "passed on" to the molecule, which then moves faster, giving us heat. So, if we shine a very intense light on it, does the amplitude simply change and the object stay black, or does the wavelength shift and give a different result?
My guess would be that no matter the amplitude, the wavelengths are the same and thus the black object will still appear black, but I want to be sure with, maybe, a more scientific explanation. If you also have links of some sort, I would gladly appreciate it!
| Yes, if it's a perfect black body then it stays black no matter how intense the light, since it reflects nothing (it only emits its own thermal radiation). You should read about black bodies: https://en.m.wikipedia.org/wiki/Black_body
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Are the field renormalization factors infinite or finite?
* We know that in quantum field theory we include infinities at each order of the perturbative expansion of the renormalization $Z$ factors about the coupling constant $\lambda$ to absorb the divergences of the loop diagrams, so it seems $Z$ must be infinite.
*On the other hand, if we turn the coupling constant $\lambda$ to zero, the interacting theory then becomes a free theory, so the $Z$ must be $1$ in this case. This means that the $Z$ should be a small variation of $1$ when $\lambda$ is small.
*Moreover, according to the Kallen-Lehmann spectral form we must have $Z \in [0, 1]$.
Combining the above arguments, does it mean that although there are infinities in each $\lambda^n$ order term in the expansion of $Z$, their total sum turns out to be a finite number which is a small variation around $1$? That is, when people say the renormalization $Z$ factors are infinite, do they actually mean that the $Z$'s are infinite at each order?
| As you are familiar, the idea is to introduce renormalised parameters and fields in terms of bare quantities, related by various renormalisation factors, $Z_i$.
We then expand $Z_i$ around some classical tree level values; this corresponds to $Z_i = 1$ followed by an infinite series of corrections $\delta_i$, so $Z_i = 1 + \delta_i$.
In applying perturbation theory to calculate amplitudes we find that the $\delta_i$ counterterms contain terms that depend on whichever regulator was chosen, which means they are infinite in the limit where the regulator is removed. For example, the counterterm from computing the photon self-energy gives,
$$\delta_3 = -\frac{e^2_R}{6\pi^2}\frac{1}{\varepsilon} - \frac{e^2_R}{12\pi^2}\ln \frac{\tilde \mu^2}{m^2_R}$$
which is clearly infinite as $\varepsilon \to 0$. So in general, we have,
$$Z_i = \mathrm{finite} + \sum_{n=1}^\infty \frac{c_n}{\varepsilon^n}.$$
The S-matrix itself is generally an asymptotic series, which means the scattering amplitudes themselves may not converge to anything finite. As for whether the sum can converge to a $f(\varepsilon)$ that is finite as $\varepsilon \to 0$ is as far as I know not addressed in most QFT books.
However, if $f(\varepsilon) \to \mathrm{finite}$ as $\varepsilon \to 0$, that makes $Z_i$ finite, implying there were no divergences that needed to be absorbed if we were able to sum the entire perturbation series.
But we know that the reason for renormalisation is not the fact that we cannot compute all the terms of the S-matrix, hence, in my mind at least, a contradiction.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
How is the torque about every point on an axis the same? I read somewhere that the torque about every point on an axis is the same. But I am really confused about how this can be. Please help me and give a satisfactory answer.
| This can be proven using vector algebra.
Let a straight line in space be your axis; a straight line in space can be defined by a point through which it passes and a vector you scale up and down in order to reach every point in a specific direction; let these two objects be $A_0$ and $\vec{v_0}$ respectively. Your line is the locus:
$$ \mathcal{L} = \{ P \in \mathbb{R}^3 | P = A_0 + t \cdot \vec{v_0}, t \in \mathbb{R} \}$$
Let $\vec{F}$ be the vector of a force acting on the point $P_0$. Now, let $\vec{r}$ be a vector from any point on the axis (say $A_0$) to the point $P_0$.
The torque about the axis is defined as:
$$ \tau = \frac{(\vec{r} \times \vec{F}) \cdot \vec{v_0}}{||\vec{v_0}||} $$
Note that this is not a vector quantity; it is the component of the torque vector along the axis (the corresponding vector points along the straight line). We can show that this quantity does not depend on which point of the line we choose (meaning $\vec{r}$ does not need to stem from $A_0$). First note that the torque about the axis of a force applied at a point on the axis is zero: imagine the point $P_0$, the point of application of the force, lies on the axis. Then for any point of the line we take as the "tail" of $\vec{r}$, this vector will be parallel to $\vec{v_0}$, so the vector product of $\vec{r}$ with the force $\vec{F}$ results in a third vector perpendicular to $\vec{r}$ and hence to $\vec{v_0}$. But then, from the definition of the scalar product, the final result is zero.
Note that we haven't taken any specific point on the axis, so the same result holds for all points. In the same way, moving the "tail" of $\vec{r}$ from one point of the axis to another changes $\vec{r}$ by a vector parallel to $\vec{v_0}$, whose contribution vanishes for exactly the reason above, so any point on the axis gives the same value for the torque.
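A quick numerical check of this statement (my own addition, with arbitrary example vectors): the component of $\vec{r} \times \vec{F}$ along the axis direction is the same no matter which point of the axis we use as the tail of $\vec{r}$.

```python
import numpy as np

A0 = np.array([1.0, 2.0, 0.0])      # a point on the axis
v0 = np.array([0.0, 0.0, 1.0])      # direction of the axis (unit vector)
P0 = np.array([3.0, -1.0, 4.0])     # point of application of the force
F  = np.array([0.5, 2.0, -1.0])     # the force

for t in (-2.0, 0.0, 5.0):          # several different points along the axis
    point = A0 + t * v0
    r = P0 - point
    tau = np.dot(np.cross(r, F), v0) / np.linalg.norm(v0)
    print(tau)                      # the same value each time
```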
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
} |
Why are position and momentum space examples of Pontryagin duality? https://en.wikipedia.org/wiki/Position_and_momentum_space
https://en.wikipedia.org/wiki/Pontryagin_duality
I am trying to understand the logic behind the uncertainty principle. As far as I understand, it follows mathematically if we assume that the wave function in momentum space is the Fourier transform of the wave function in position space. I tried to dig in and find out why they should be related in this way, and the only explanation I could find was Pontryagin duality.
| Practically speaking, the full machinery of Pontryagin duality is way more advanced than physicists need to understand the uncertainty principle. There are several ways to "derive" that the momentum-space wavefunction is the Fourier transform of the position-space wavefunction, which depend somewhat on your choice of starting postulates. Here's one common path:
One common starting fundamental postulate is the commutation relation $[\hat{x}, \hat{p}] = i \hbar.$ The most common position-space representation of this commutation relation is $\hat{x} \to x,\ \hat{p} \to -i \hbar \frac{\partial}{\partial x}$. In this representation, taking the inner product of $\langle x |$ and the eigenvalue equation $\hat{p} |p\rangle = p | p \rangle$ gives the differential equation
$$-i \hbar \frac{d\, \psi_p(x)}{dx} = p\, \psi_p(x),$$
which has solution $\psi_p(x) = \langle x | p \rangle \propto e^{(i p x)/\hbar}$. Then to express an arbitrary state $| \psi \rangle$ in the momentum basis, we can use the resolution of the identity
$$ \psi(p) = \langle p | \psi \rangle = \int dx\ \langle p | x \rangle \langle x | \psi \rangle \propto \int dx\ e^{-ipx/\hbar} \psi(x),$$
which is just the Fourier transform. This generalizes straightforwardly into higher dimensions.
BTW, the fact that position-space and momentum-space wavefunctions are Fourier transforms of each other (or more precisely, can be chosen to be Fourier transforms of each other) gives some nice intuition for the uncertainty relation but isn't actually necessary to derive it. All you need is the commutation relation, as I explain here.
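For readers who like to see this numerically, here is a short sketch (my own addition, with $\hbar=1$ and arbitrary grid parameters): Fourier-transforming a Gaussian position-space wavefunction with the FFT and computing the spreads gives $\sigma_x\sigma_p\approx\hbar/2$, saturating the uncertainty bound.

```python
import numpy as np

hbar = 1.0
x = np.linspace(-50.0, 50.0, 4096)
dx = x[1] - x[0]
sigma = 2.0
psi_x = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# Momentum grid and momentum-space wavefunction via the FFT
p = 2 * np.pi * hbar * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
psi_p = np.fft.fftshift(np.fft.fft(psi_x)) * dx   # overall phase/normalisation irrelevant below

def spread(q, amp, dq):
    prob = np.abs(amp) ** 2
    prob /= prob.sum() * dq                        # normalise the probability density
    mean = np.sum(q * prob) * dq
    return np.sqrt(np.sum((q - mean) ** 2 * prob) * dq)

sx = spread(x, psi_x, dx)
sp = spread(p, psi_p, p[1] - p[0])
print(sx * sp, hbar / 2)                           # both ~0.5
```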
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
} |
What sound frequency can be heard the greatest distance by humans? What sound frequency can be heard the greatest distance by humans? Assuming a pure tone, single frequency, same source SPL (dB) for each frequency, outdoors with no obstacles between source and listener. I believe the answer would be the result of the combining the effects of atmospheric attenuation as a function of frequency, humidity, and temperature and the perceived loudness by humans as a function of frequency. I think the answer would be in the range of 2kHz-3kHz.
| Your ear has a frequency response peak at about 3 kHz, meaning that this frequency will be easiest to hear over long distances. In comparison, your ear's response to bass frequencies is much weaker, and to make them as audible as a distant sound source at 3 kHz requires much more acoustic power. This, among other things, is why a bass guitar amplifier needs about 5 times the power rating of a guitar amplifier to seem approximately as loud to your ear.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Why is it "bad taste" to have a dimensional quantity in the argument of a logarithm or exponential function? I've been told it is never seen in physics, and "bad taste" to have it in cases of being the argument of a logarithmic function or the function raised to $e$. I can't seem to understand why, although I suppose it would be weird to raise a dimensionless number to the power of something with a dimension.
| There is no reason to give metaphysical arguments like "It's not possible to add a meter with a meter squared". This is a purely mathematical issue.
Physics is, first and foremost, mathematical models. Mathematics doesn't care about units. Mathematics only involves pure numbers.
If, in some universe, we need the law $x=e^t$ to model the distance traveled by a particle, then it's a perfectly valid mathematical model of the universe. You can even have laws like $\frac{dx}{dt}=x+t$, which involves the addition of a distance and a time.
Now, what one has to realise is that, even in these universes, there is the freedom to choose units of space and time. When we say $x=e^t$ or $\frac{dx}{dt}=x+t$, these laws are relationships among numbers that are measured after making a choice of units.
The real problem is that the form of laws like these depends on the choice of units. $x=e^t$ can only be true for some specific choice of the time-unit. If we use a time-unit of half the size, we have to make the substitution $t'=2t$. The new laws will be $x=e^{\frac{t'}{2}}$, and $2\frac{dx}{dt'}=x+\frac{t'}{2}$. In a general choice of units, constants, like $2$ here, will show up.
So, if we want to write a general unit-invariant version of these laws, we'll have to introduce these "experimentally-determined constants" into our expression of the law. After introducing the constants, we have:
$$c_1x=e^{c_2t}$$
$$\frac{dx}{dt}= c_1x+c_2t$$
For a fixed choice of units, these constants have to be experimentally determined. These constants make the units of the terms match. When the units match, all the terms in the expression get scaled by the same amount under a change of units, and thus the form of the law is unit-invariant.
This is what rules like "You can't exponentiate dimensional quantities" and "You can't add different units" are designed to achieve. Also, laws like this are not completely artificial: special relativity does have an invariant combination $x^2-c^2t^2$, in which the constant $c$ plays exactly this unit-matching role.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "86",
"answer_count": 7,
"answer_id": 4
} |
Periodic multi-layer scattering of neutrons I am trying to understand the reflectivity plot on slide 26 of the Neutron Optics lecture by Soldner.
1. Is the peak from $\theta$=0.0 to 0.4 due to total external reflection from the first upper surface?
2. There is another peak at $\theta$=1.0. Is it because of Bragg interference (as given in slide 23 of Stewart's lectures)?
3. Why are there additional smaller peaks between $\theta$=0.4 and 1.0?
| Here are the answers:
* Yes.
* Yes, but it is better called the Darwin plateau.
* Those are fine structure arising from multiple-wave interference (and are not seen experimentally); see page 16 of "Analysis and design of multilayer structures for neutron monochromators and supermirrors", S. Masalovich.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Integration measure in terms of the Levi Civita tensor For a course in General Relativity I had to calculate the volume of the unit 2-sphere $S^2$, but I have some trouble with understanding the concept.
At first I calculated the volume of a sphere with radius $R=1$, but somehow this should only make sense when I'm in a three dimensional space. So my question is, is my calculation correct or do I indeed have to end up with the surface of the sphere.
The integral for the volume is given by:
$$ V[M] \equiv \int_M \epsilon, \\
\epsilon \equiv \sqrt{ |g|} d^n x,$$
where $g$ is the determinant of the metric $g_{\mu \nu}$ as defined in Carrol's "Spacetime and Geometry An Introduction to General Relativity".
Now I can calculate $|g| = r^4 \sin^2 (\theta)$ and plug this into the integral. Thus this gives me:
$$ V[S^2] = \int_{S^2} \epsilon = \int_{S^2} r^2 \sin(\theta)\ dr \ d\theta \ d\phi = \int_0^1 r^2 dr \int_0^{\pi} \sin (\theta) d\theta \int_0^{2 \pi} d\phi=\frac{4 \pi}{3}$$
But I don't understand what the volume of the 2-sphere should mean, and whether this is the correct calculation. What I find especially confusing is the notation $d^n x$.
| The "volume" of a 2-dimensional space is what we would commonly refer to as the area. Note that for a 2-sphere we have $\epsilon = r^2\sin\theta d\theta d\phi$, where $r$ is just a constant, usually taken to be unity. The integration thus yields
$$
\int_M\epsilon = r^2\int_0^\pi\sin\theta d\theta\int_0^{2\pi}d\phi = 4\pi r^2,
$$
or the surface area of the sphere.
To clarify: the differential geometric concept of volume is to be taken as a measure of the "size" of the space, regardless of dimensionality. Only in a 3-dimensional manifold does it correspond to what we in everyday life think of as a volume.
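A quick symbolic check of both integrals (my own addition): the 2-sphere "volume" (its area) versus the 3-ball volume the questioner actually computed.

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)

area_S2 = sp.integrate(r**2 * sp.sin(theta), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))
vol_B3  = sp.integrate(r**2 * sp.sin(theta), (r, 0, 1), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))
print(area_S2)   # 4*pi*r**2 : area of the sphere of radius r
print(vol_B3)    # 4*pi/3    : volume of the unit 3-ball
```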
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/365356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does a permanent magnet attract a positively charged rod? I thought that because the charge on the rod is static there wouldn't be an interaction with the magnetic field; however, the answer to the question states that both poles of the magnet would attract the rod. Thanks!
Edit: The question does not state the material the rod is made of. Here's the exact question.
A positively, electrically charged rod is brought near a permanent magnet. What will be observed?
| Note that a charged object can attract a neutral conductor by inducing dipole polarization; the fact that your neutral conductor is magnetic is a red herring.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/365808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why don't electric charges get attracted to current-carrying wires when they're at rest, when they do feel a force if they're moving? Why does a charge placed at rest near a current-carrying wire experience no force, while a charge that starts moving gets attracted toward the wire?
Why isn't the charge attracted to the current-carrying wire despite its being at rest?
| It is a fact of nature encoded within the system of Maxwell's equations which describe electromagnetic interactions classically.
The definition of charge comes from observations, as well as the definition of the magnetic field.
That a charge is attracted to another charge is a law of nature. That magnetic dipoles attract and repel according to their poles is also a law of nature. It can be shown experimentally that a current-carrying wire generates a magnetic field, but a charge at rest relative to the wire does not feel any force. Once the charge is moving it also generates a magnetic field, and thus an interaction between the moving charge and the wire's magnetic field occurs.
Going to the underlying particle level, a current is moving charges in a conductor.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/365963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Gas Laws: Why is PV directly proportional to mT? My book mentions that the three informal gas laws (Boyle's, Charles', and Gay-Lussac's) can be combined into a more general relation PV ∝ mT (the precursor to the Ideal Gas Law).
Where:
P is pressure, V is volume, m is mass (taken as a measure related to quantity of gas molecules), and T is temperature on the absolute scale.
* From Boyle's Law: P ∝ 1/V
* From Charles' Law: V ∝ T
* From Gay-Lussac's Law: P ∝ T
* From simple observation of a balloon being inflated: m ∝ V
All these can be surmised from PV ∝ mT by letting some of these state variables be constant. But when we let V and T be constant we get P ∝ m and this is a relationship I don't understand. It's not any law that I was able to find in my textbook or online and I don't get it conceptually.
But for PV ∝ mT to be true, P ∝ m must also be true. So my question is what is the relationship between pressure and mass?
| P ∝ m conceptually: for a given volume and temperature, if you increase the mass (get more particles in), more molecules strike the container walls each second, so the pressure is higher. Simple, isn't it?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/366272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What does the cut-off $\Lambda$ stand for in the theory of QED? The bare electron mass $m_0$, in QED, changes as $$m_0\to m=m_0+\delta m\Big(\frac{\Lambda}{E}\Big)$$ where high-momentum modes from $E$ to $\Lambda$ have been integrated out.
What scale does the cut-off $\Lambda$ stand for in the theory of QED and why? Is it the top quark mass $\Lambda=m_{top}$, the GUT scale $\Lambda=M_{GUT}$ or the Planck scale $\Lambda=M_{Pl}$? I never understood which scale corresponds to the correct cut-off of a theory.
| There is usually no unique cutoff scale $\Lambda$ in renormalization. The reason is that generally we don't know what the ultimate microscopic physics is.
So the rationale is to pick any scale $\Lambda$ to be much, much larger than any physical scale of interest (particle mass or energy $E$ of an experiment you're doing) and then adjust couplings as you lower $\Lambda$ - the usual RG flow story.
In some cases you do get information about the range of validity of a theory. Say you have a theory with some fields $\Phi_i$ which couple to a heavy particle $X$ of mass $M$, and you integrate out the heavy particle. Then you obtain an effective action for the $\Phi_i$ fields which is valid up to energies $E < M$. This is reflected by the fact that you generate dimensionful couplings of size $M$ to the appropriate power. A physically interesting example is the chiral Lagrangian for pion physics:
$$L = \frac{f_\pi^2}{2} \text{Tr}(\partial_\mu \Sigma \partial_\mu \Sigma^\dagger) = \frac{1}{2} | \partial \vec{\pi}|^2 + \frac{1}{f_\pi^2} \left[\vec{\pi}^2(\partial \vec{\pi})^2 - (\vec{\pi} \partial \vec{\pi})^2 \right] + \ldots
$$
All couplings scale like the pion decay constant $f_\pi$, and this action is useful to compute $\pi \pi$ scattering at low energies but breaks down at energies approaching $\Lambda_\text{QCD}$. QED is however not of this form: it has a single, dimensionless coupling $\alpha$, which clearly doesn't carry any information about scales. Moreover, QED isn't a theory of quarks, gravity or weak interactions, so there's no way to tie $\Lambda$ to $m_\text{top}$, $M_\text{GUT}$ or $1/\ell_\text{pl}$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/366404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why do quantum effects of gravity become important at the Planck scale? The standard heuristic argument for why quantum effects of gravity become important at the Planck scale is to consider the length scales at which both quantum field theory (QFT) and general relativity (GR) become crucial in order to explain physical phenomena.
For QFT this is when the length scale is of order the Compton wavelength of a particle, $$l_{c}=\frac{h}{mc},$$ since if one attempts to confine a particle within this length then it is possible for pair creation to occur and so the concept of a particle breaks down and QFT is required.
For GR, this is when the length scale is of order the Schwarzschild radius of a particle, $$l_{s}=\frac{2Gm}{c^{2}},$$ since compressing the mass of a particle to within this radius results in the formation of a black hole which requires GR to understand its behaviour.
As such, one expects that when these two lengths are of the same order, i.e. $l_{c}\sim l_{s}$, the quantum effects of gravity become important. This occurs when $$\frac{h}{mc}\sim\frac{2Gm}{c^{2}}\Rightarrow m^{2}\sim\frac{hc}{2G}\sim m_{P}^{2}$$ That is, when the mass of the particle is of the same order as the Planck mass.
What I'm unsure about is: why the Compton wavelength? Why not the de Broglie wavelength, as this is the length scale at which the quantum nature of an object becomes evident?
Is it simply because QFT is consistent with special relativity and standard quantum mechanics is not and so it is the scale at which QFT becomes crucial that sets the scale for quantum gravity?
| User anna v has already given a correct answer. In this answer we try to summarize.
In a nutshell, the Planck scale of quantum gravity is determined by the 3 physical constants $G$, $c$ and $h$.
* When the wavelength $\lambda$ becomes of the order of the Schwarzschild radius, the rest energy $mc^2$ becomes of the order of the gravitational energy $Gm^2/\lambda$.
* When the wavelength $\lambda$ becomes of the order of the Compton wavelength, the rest energy $mc^2$ becomes comparable with the energy $hc/\lambda$ of a quantum.
In contrast, the de Broglie wavelength $h/|{\bf p}|$ lacks information about relativity theory, and fails to identify the pertinent characteristic scale.
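A quick numerical check of this estimate (my own addition; the $O(1)$ factors such as the $2$ in the Schwarzschild radius are dropped):

```python
import numpy as np

hbar, c, G = 1.055e-34, 2.998e8, 6.674e-11

m_planck = np.sqrt(hbar * c / G)      # mass where Compton wavelength ~ Schwarzschild radius
l_planck = np.sqrt(hbar * G / c**3)   # the corresponding length scale
print(f"Planck mass   ~ {m_planck:.2e} kg")   # ~2.2e-8 kg
print(f"Planck length ~ {l_planck:.2e} m")    # ~1.6e-35 m
```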
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/366598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
How to find the direction of velocity of a reference frame where two events are simultaneous in case of a space-like interval Suppose in a inertial reference frame $S$, an event $A$ occurs at $(ct_A, x_A, y_A, z_A)$ and event $B$ occurs at $(ct_B, x_B, y_B, z_B)$.
Now the invariant interval of these two events is,
$$I = -c^2 (t_A - t_B)^2 + (x_A - x_B)^2 + (y_A - y_B)^2 + (z_A - z_B)^2 = -c^2 \Delta t^2 + \Delta \bar x^2,$$
where I'm using the $(-, +, +, +)$ metric.
Now there can be $3$ particular cases of interest corresponding to time-like, space-like and light-like events.
For $I = 0 \implies c^2 \Delta t^2 = \Delta \bar x^2$, events are light-like.
For $I < 0 \implies c^2 \Delta t^2 > \Delta \bar x ^2$, events are time-like and a reference frame $\bar S$ exists (accessible by an appropriate Lorentz transformation) for which these two events occur at the same location. The velocity (magnitude and direction) can be computed.
For $I > 0 \implies c^2 \Delta t^2 < \Delta \bar x^2$, events are space-like and a reference frame $\bar S$ exists (again accessible by an appropriate Lorentz transformation) for which these two events are simultaneous.
I know how to calculate the velocity(direction and magnitude) of the $\bar S$ frame relative to the $S$ frame in case of a time-like event. I also know how to calculate the magnitude of velocity of the $\bar S$ frame relative to the $S$ frame for a space-like event.
How to find the direction of the $\bar S$ frame relative to $S$ for a space-like event?
|
The answer concerns the case $\:I > 0$, so let two events in a frame $\:\mathrm{S'}\:$ take place a space interval $\:\Delta\mathbf{x}^{\boldsymbol{\prime}}\:$ and a time interval $\:\Delta t^{\boldsymbol{\prime}}\:$ apart, with
\begin{equation}
\left\Vert\dfrac{\Delta\mathbf{x}^{\boldsymbol{\prime}}}{\Delta t^{\boldsymbol{\prime}}} \right\Vert^{2} >c^{2}
\nonumber
\end{equation}
These two events are causally independent.
Now, we seek frames $\:\mathrm{S}\:$ moving with respect to $\:\mathrm{S'}\:$ wherein these two events happen simultaneously. Without loss of generality let such a system $\:\mathrm{S}\:$
be in Standard Configuration to $\:\mathrm{S'}\:$ and moving with velocity $\:\mathbf{v}\:$ with respect to it. Then for the Lorentz Transformation between them we have (see Figure)
\begin{align}
\Delta \mathbf{x} & = \Delta\mathbf{x}^{\boldsymbol{\prime}}+(\gamma-1)(\mathbf{n}\boldsymbol{\cdot} \Delta\mathbf{x}^{\boldsymbol{\prime}})\mathbf{n}-\gamma \mathbf{v} \Delta t^{\boldsymbol{\prime}}
\tag{01a}\\
\Delta t & = \gamma\left( \Delta t^{\boldsymbol{\prime}}-\dfrac{\mathbf{v}\boldsymbol{\cdot} \Delta \mathbf{x}^{\boldsymbol{\prime}}}{c^{2}}\right)
\tag{01b}\\
\mathbf{n} &=\dfrac{\mathbf{v}}{\Vert\mathbf{v}\Vert}
\tag{01c}
\end{align}
For these two events to be simultaneous in the frame $\:\mathrm{S}\:$, equation (01b) yields
\begin{equation}
\Delta t =0 \quad \Longleftrightarrow \quad \Delta t^{\boldsymbol{\prime}}-\dfrac{\mathbf{v}\boldsymbol{\cdot} \Delta \mathbf{x}^{\boldsymbol{\prime}}}{c^{2}}=0
\tag{02}
\end{equation}
This means that the frame $\mathrm{S}$ must move with velocity $\mathbf{v}$ such that its projection $\mathbf{v}_{\Vert}$ on the space interval $\Delta\mathbf{x}^{\boldsymbol{\prime}}$ satisfies
\begin{equation}
\Vert \mathbf{v}_{\Vert}\Vert =\dfrac{c^{2}}{\left\Vert\dfrac{\Delta\mathbf{x}^{\boldsymbol{\prime}}}{\Delta t^{\boldsymbol{\prime}}} \right\Vert} (<c)
\tag{03}
\end{equation}
So there exists an infinite number of velocities.
A choice parallel to $\Delta\mathbf{x}^{\boldsymbol{\prime}}$ is
\begin{equation} I > 0 :\quad \mathbf{v}=\dfrac{c^{2}}{\left\Vert\dfrac{\Delta\mathbf{x}^{\boldsymbol{\prime}}}{\Delta t^{\boldsymbol{\prime}}} \right\Vert^{2}}\dfrac{\Delta\mathbf{x}^{\boldsymbol{\prime}}}{\Delta t^{\boldsymbol{\prime}}}\,, \quad \Vert \mathbf{v}\Vert =\dfrac{c^{2}}{\left\Vert\dfrac{\Delta\mathbf{x}^{\boldsymbol{\prime}}}{\Delta t^{\boldsymbol{\prime}}} \right\Vert} < c \tag{04} \end{equation}
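A small numerical sanity check of this result (my own addition, with $c=1$ and arbitrary example intervals): choosing $\mathbf{v}$ as in (04) makes $\Delta t = 0$ in the boosted frame.

```python
import numpy as np

c = 1.0
dt_p = 1.0
dx_p = np.array([2.0, 1.0, 0.5])   # |dx'/dt'| > c, so the interval is spacelike

v = c**2 * dt_p * dx_p / np.dot(dx_p, dx_p)     # the choice (04), parallel to dx'
gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / c**2)

dt = gamma * (dt_p - np.dot(v, dx_p) / c**2)    # equation (01b)
print(np.isclose(dt, 0.0))                      # True: simultaneous in frame S
```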
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/366727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 2
} |
Early atmospheric Watt steam engine - how does the steam move from the cylinder to the condenser? Below is a picture of one of the earliest designs of Watt's steam engine.
The basic principle of operation is this:
*
*The weight which is attached to the beam $E$ pulls the piston $P$ up, sucking steam from the boiler into the cylinder $B$.
*In the condensor $C$, the steam is cooled and condenses to water. This greatly lowers the pressure creating a pressure difference between the atmosphere and inside of the condenser and cylinder. So the piston is pushed down, doing work.
My question is, what makes the steam move from the cylinder to the condenser?
| tfb's answer correctly describes the working cycle - just wanted to capture some of the extra questions in the comments.
Ideally you want to fill the cylinder with steam to let the piston rise, and then have the steam disappear, creating a vacuum that pulls the piston down. In Newcomen's original engine this was done by spraying cold water into the cylinder, cooling the steam and creating a partial vacuum; air pressure on top of the piston then pushes it down. (Incidentally this was the power stroke, because they wanted the other end of the beam to go up to lift water out of the mine.)
The problem with this is that you cool the cylinder walls on each cycle, so the fresh hot steam for the next stroke immediately starts to condense on the cold walls, and it isn't until the walls have heated above the boiling point that any steam can start to lift the piston.
Watt's breakthrough was the idea that since steam is a fluid you can remove it from the cylinder by connecting it to a separate vacuum. After the steam has filled the second tank (the condensor) you close the valve and spray cold water into the condensor, condensing the steam and creating the vacuum ready for the next stroke. (You also have to occasionally pump out the water from the condensed steam.)
By doing this you keep the cylinder hot and the condensor cold so you don't waste energy and ultimately fuel.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/367043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Difference between and diffusion and heat equations? I read everywhere that diffusion and heat equations are similar. The same differential equations can be solved for both.
Consider a finite one-dimensional diffusion or heat transfer where one end is insulated and the other end is kept with a constant flux.
The boundary conditions are the same for diffusion or heat transfer: flux is zero at one end and constant at the other.
$$\frac{\partial u}{\partial x} = C; x = 0$$
$$\frac{\partial u}{\partial x} = 0; x = l$$
and the initial state of
$$u(x,0) = 0 \quad\text{or}\quad u(x,0) = K $$
It can be any constant value (initial concentration or temperature).
In the case of heat transfer, the temperature can rise indefinitely. But in the case of diffusion, there is a capacity limit.
How should the diffusion and heat equations be solved for these boundary conditions? Is the solution the same in both cases?
| Your observation is good and correct: they are the same. Both are diffusion processes; one diffuses material and the other diffuses heat.
The limit you mentioned exists for heat transfer as well when you apply a fixed temperature as the boundary condition: the temperature cannot rise above the boundary temperature.
For diffusion, you cannot apply an infinitely high concentration of a species, so the limit comes not from the equation but from the boundary value.
The similarities between the two processes were recognized by our ancestors. That's why we sometimes see Schmidt numbers and Prandtl numbers, with which, once you know one process, you can get the other without solving the differential equations.
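To make the point that one solver handles both problems, here is a minimal explicit finite-difference sketch (my own addition; the parameter values are placeholders) of $u_t = D\,u_{xx}$ on $[0,l]$ with the stated boundary conditions, where $u$ can be read as either temperature or concentration:

```python
import numpy as np

D, l, C = 1.0, 1.0, 1.0          # diffusivity, domain length, imposed gradient at x = 0
nx = 101
dx = l / (nx - 1)
dt = 0.4 * dx**2 / D             # explicit scheme needs dt <= dx^2 / (2 D)
u = np.zeros(nx)                 # initial state u(x, 0) = 0

for step in range(5000):
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + D * dt / dx**2 * (u[2:] - 2*u[1:-1] + u[:-2])
    u_new[0]  = u_new[1] - C * dx    # du/dx = C at x = 0 (constant flux)
    u_new[-1] = u_new[-2]            # du/dx = 0 at x = l (insulated end)
    u = u_new

print(u[0], u[-1])               # same code, heat or mass diffusion
```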
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/367190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Where does the interference pattern or diffraction pattern due to a single or double slit placed in front of a light source form? I have seen that when we use a spectrometer for an optics experiment involving a single or double slit to study, say, the Fraunhofer diffraction pattern or some interference pattern, we use Schuster's method to focus the fringe pattern in the focal plane of the eyepiece of the telescope. But I can also see the fringe pattern with the naked eye through the narrow slit(s), just not with as high a resolution as through the telescope.
So my question is:
Where does the interference pattern or diffraction pattern due to a
single or double slit placed in front of a light source form, in this
case?
Does it form at infinity, as the theory says; or at some other place? Like the diffracting edges of the narrow slit? Or the focal plane of the eyepiece of telescope? Or does it vary depending on how I look at it?
I couldn't find a proper answer anywhere.
| The fringes which you have described are non-localised; they occur everywhere where there is light which has passed through the slit(s).
When you use your eye to observe the fringes, you are observing the light from the slit(s) being focussed on the retina of the eye, just like the telescope of the spectrometer focussing the light in the focal plane of the telescope objective lens.
The eye then acts as a magnifying glass to make the fringes appear larger.
Without the telescope your eye probably is focussed on the slit(s) as it is very difficult to focus on "thin air".
However, you could help the eye do this by placing a translucent sheet between the slit and the eye to observe the fringes formed in the vicinity of the paper.
If you used a laser as the source of light it is easy to see the fringes wherever you put a screen.
This photograph of the waves from two vibrating sources in a ripple tank shows the non-localised formation of interference fringes.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/367343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Interpreting Schrödinger Equation's solution for a free particle Let's say we have the time-independent Schrödinger Equation for a free particle (without potential):
$$-\frac{\hbar^2}{2m} \frac{d^2 \psi}{dx^2} = E\psi$$
Whose general solution is:
$$\psi(x) = Ae^{ikx} + Be^{-ikx}$$
Where $k = \frac{2\pi}{\lambda}$ (I am supposing just one dimension, hence the use of a scalar quantity). Let's say that I have the following boundary values: $\psi(0) = 0$ and $\psi(L) = 0$. (First question: what is the meaning of this? I know that this means that there is no probability of finding the particle at either $x=0$ or $x=L$. But how does this relate physically to the system, and why can I impose this? Shouldn't a particle with no force acting on it be a uniform distribution over $[0,L]$, with me not being allowed to set these boundary values?) To help solve the boundary-value problem, I will put the solution in real form:
$$\psi(x) = A\cos(kx) + B\sin(kx)$$
Using the first condition:
$$\psi(0) = 0 \rightarrow A\cos(0) + B\sin(0) = 0 \rightarrow \boxed{A = 0}$$
Now for the second condition:
$$\psi(L) = 0 \rightarrow B\sin(kL) = 0 \rightarrow kL = n\pi$$
$$ k = \frac{n\pi}{L}$$
Finally, the solution is:
$$\psi(x) = B \sin \Big(\frac{n\pi}{L}x\Big)$$
Now, the second question is: what is the meaning of $n$? If I choose, for example, $n=1$, what does that mean? Why can't the potential and the boundary values alone tell us how the system will evolve? I know that $n$ is analogous to the "harmonics" of a wave, but I can't understand how that relates to a particle in QM.
| As to your first question: the meaning is that the particle is not a free particle after all. This is the problem of a particle in a one-dimensional infinitely deep square potential well: $\Phi=0$ for $x\in[0,L]$ but $\Phi=\infty$ otherwise. Then (by Schrödinger's equation) $\psi=0$ at those points where $\Phi=\infty$. Thus the particle is only free to move inside the potential well, but not outside.
As to your second question: The meaning of $n$ is that there is only a discrete, countably infinite set of independent solutions to Schrödinger's equation, with $n$ the index into those solutions. $n$ is called their quantum number. If you work out the energy of each solution
$$
E_n = \frac{\hbar^2}{2m}\left(\frac{n\pi}{L}\right)^2
$$
you see that it increases with $n$ (here $n=1,2,3,\dots$), and also that the ground-state energy satisfies $E_1>\Phi=0$. A general wave function $\psi(x)$ can be decomposed into a superposition of these discrete wave solutions (aka a Fourier series).
Your question regarding evolution requires solving the time-dependent Schrödinger equation, which you haven't done.
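As a quick numerical illustration of how the allowed energies grow with $n$ (a sketch of my own, using the quantisation condition $kL = n\pi$ above; the electron mass and the 1 nm well width are arbitrary choices):

```python
import numpy as np

hbar = 1.054571817e-34        # J s
m = 9.1093837015e-31          # electron mass in kg, chosen just for illustration
L = 1e-9                      # a 1 nm wide well, an arbitrary choice
eV = 1.602176634e-19

n = np.arange(1, 6)
E = (hbar * n * np.pi / L) ** 2 / (2 * m)     # E_n = hbar^2 (n pi / L)^2 / (2m)
for ni, Ei in zip(n, E):
    print(f"n = {ni}:  E_n = {Ei / eV:.3f} eV")
```

The energies grow as $n^2$, and even the lowest one is strictly above the bottom of the well.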
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/367493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
A single-component simple system thermodynamics The following equation is taken from Callen's Thermodynamics, page 39:
$$(\frac{\partial u }{\partial s})_v=(\frac{\partial u }{\partial s})_{V,N}$$
where $u = U/N$, $s = S/N$ and $v = V/N$.
The notation is conventional.
My question is:
The author doesn't make any assumption of constant $N$ in this context.
Why would constant molar volume imply the equality?
I feel so confused.
| We know that
$$ dU = TdS - pdV + \mu dN$$
If we assume that the internal energy $U$ is an extensive, homogeneous function of degree 1, then it follows that
$$ U = TS - pV + \mu N$$
Now consider the quantity $u \equiv U/N$. We would have that
$$ du = \frac{1}{N}dU - \frac{U}{N^2}dN= \frac{TdS}{N} - \frac{pdV}{N} + \frac{\mu}{N}dN -\frac{u}{N}dN$$
If we further define $v\equiv V/N$ and $s\equiv S/N$, note that
$$ dS = d(sN)= Nds + s dN$$
so after a bit of algebra,
$$du= Tds- pdv+\frac{sT-pv+\mu-u}{N}dN$$
However, we already said that
$$ u =\frac{U}{N}=\frac{ST - pV + \mu N}{N}= sT - pv + \mu$$
so the last term vanishes, and we find that
$$ du = Tds - pdv$$
so it follows that
$$ T = \left(\frac{\partial u}{\partial s}\right)_v$$
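As a quick cross-check, here is a short symbolic sketch (using sympy; the snippet is my own illustration, not from Callen) that the coefficient of $dN$ found above indeed vanishes once the Euler relation $U = TS - pV + \mu N$ is used:

```python
import sympy as sp

T, p, mu, N = sp.symbols('T p mu N', positive=True)
S, V = sp.symbols('S V', positive=True)

U = T*S - p*V + mu*N          # Euler relation for a homogeneous system
s, v, u = S/N, V/N, U/N       # molar quantities

# coefficient of dN in du = T ds - p dv + (sT - pv + mu - u)/N dN
coefficient = (s*T - p*v + mu - u) / N
print(sp.simplify(coefficient))   # -> 0
```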
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/367678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Books on non-perturbative phenomena in quantum field theory I am looking for any good places (preferably textbooks) to study about introductory non-perturbative phenomena in Quantum field theory.
Any suggestion will be appreciated.
| The following books assume some acquaintance with perturbative quantum field theory. Together they cover a very wide spectrum of nonperturbative techniques for very different situations.
*
*E. Calzetta and B. Hu. Nonequilibrium Quantum Field Theory. Cambridge Univ. Press (2008). A book on nonperturbative quantum field theory at finite time and finite temperature.
*Y. Frishman and J. Sonnenschein. Non-Perturbative Field Theory. Cambridge Univ. Press (2010). A book on nonperturbative quantum field theory with emphasis on 2-dimensional exactly solvable models.
*M. Shifman. Advanced Topics in Quantum Field Theory. Cambridge Univ. Press (2012). A book on nonperturbative quantum field theory with emphasis on supersymmetry.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/367793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 4,
"answer_id": 2
} |
Is the gravitational field stronger in the transverse plane of a mass than along its axis of propagation? Is the gravitational field stronger in the transverse plane of a mass than along its axis of propagation? I read somewhere that it was but cannot find the reference again. That is, for a mass traveling at very high velocity, or any velocity I suppose, I read that the gravitational force was stronger in the transverse plane of the mass than it is along its axis of propagation, and wanted to know if this was correct.
| If "transverse plane of mass" means the gravitational attraction perpendicular to the mass's velocity, then you are correct.
There is a weak field approximation to GR call Gravitoelectromagnetism in which there is an $\vec{E_G}$ and $\vec{B_G}$ which obey equations similar to Maxwell's equations for $\vec{E}$ and $\vec{B}$ of electromagnetism. $\vec{E_G}$ is the acceleration caused by a mass (eg: for a mass M at rest $\vec{E_G}=\frac{-GM\hat{e_r}}{r^2}$), and $\vec{B_G}$ is the angular velocity $\vec{\omega}$ that a spinning mass with angular momentum causes to other objects.
$\vec{E_G}$ and $\vec{B_G}$ transform just like $\vec{E}$ and $\vec{B}$ when viewed from a velocity boosted frame. Therefore, if we boost an at rest mass that is not spinning (ie: $\vec{B_G}=0$), we get
$$
\begin{align}
E'_{G\,\parallel} &= E_{G\,\parallel} \\
E'_{G\,\perp} &= \gamma\, E_{G\,\perp}
\end{align}
$$
The field lines of $\vec{E'_G}$ (ie: acceleration) are bunched out perpendicular to the boost direction. I think this is the effect you read about.
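A small sketch of that component split (the function name and the test numbers are mine; it just applies the parallel/perpendicular rule quoted above to a gravitoelectric field that is purely gravitoelectric in the rest frame):

```python
import numpy as np

def boosted_gravitoelectric_field(E_G, v, c=299792458.0):
    """Boost the gravitoelectric field of a non-spinning mass at rest (B_G = 0):
    the component parallel to the boost is unchanged, the perpendicular one
    is multiplied by gamma."""
    v, E_G = np.asarray(v, float), np.asarray(E_G, float)
    gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / c**2)
    v_hat = v / np.linalg.norm(v)
    E_par = np.dot(E_G, v_hat) * v_hat
    return E_par + gamma * (E_G - E_par)

# a field perpendicular to a 0.9c boost gets enhanced by gamma ~ 2.29
print(boosted_gravitoelectric_field([0.0, 9.8, 0.0], [0.9 * 299792458.0, 0.0, 0.0]))
```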
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/367968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is classical spin the spin-$\infty$ representation of $SO(3)$, not the spin-$1$ representation of $SO(3)$? Given a classical spin model,
$$H=\mathbf{S}_1\cdot \mathbf{S}_2\tag{1}$$
where $\mathbf{S}_i=(\sin\theta_i \cos\phi_i,\sin\theta_i \sin\phi_i,\cos\theta_i), i=1,2$ is the classical spin.
Given a quantum spin-$s$ model,
$$\hat{H}=\hat{\mathbf{S}}_1\cdot \hat{\mathbf{S}}_2\tag{2}$$
There is a saying that classical spin is equivalent to the spin-$\infty$ representation of $SO(3)$, because in the spin-$s$ rep. there are only $2s+1$ discrete $z$-direction eigenstates, while classical spin has a continuous direction.
My questions are following:
*
*Although the argument seems reasonable, the classical spin is a $3$-component vector $ (\sin\theta_i \cos\phi_i,\sin\theta_i \sin\phi_i,\cos\theta_i) $ and to my knowledge it must be a spin-$1$ (defining) representation of $\rm SO(3)$. How can one rigorously explain that classical spin should be the spin-$\infty$ rep.?
| *
*The classical angular momentum square $S^2$ should be a continuous variable, and identified with the quadratic Casimir $$\rho(\hat{S})^2~=~\hbar^2 s(s+1){\bf 1}, $$ in the $s$-representation $$\rho: su(2)~\to~ gl(2s+1,\mathbb{C}), \qquad s~\in~\frac{1}{2}\mathbb{N}_0,$$ of the quantum $su(2)$ Lie algebra.
*Apparently, this is only possible in the double-scaling limit $s \to \infty$ and $\hbar\to 0^+$ such that the product $\hbar s$ is kept finite.
*For a more refined correspondence principle between classical and quantum mechanics, check out the Langer correction, cf. e.g. this Phys.SE post.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/368296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Does twisting a wire heat it? I was playing with a key chain loop in a (very boring) chemistry class and then I straightened the loop into a wire keeping two end of the loop (now wire) curved so as to easily twist it. It was more or less a S shaped structure of metal with a longer straight part in the middle of S.
On twisting a lot, it started getting hotter. Why did that happen?
It was a circular cross section wire and I do not exactly know which metal, if it helps.
| I believe the heat is due to plastic deformation, as you repeatedly twist the wire. So what is plastic deformation?
On a typical stress–strain curve for the metal: when the metal is strained no further than point $A$ (the elastic limit), its response is purely elastic and the wire will return fully to the origin $O$. But if the deformation exceeds $A$, the deformation will be plastic and the wire will only restore itself to a point $C$ of nonzero residual strain.
As Michael Seifert noted in his answer, the work done by these repeated plastic deformations is then converted to heat and the wire heats up slightly.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/368648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
How does the LHC separate the protium isotope to have only protons for the collision? I am preparing a presentation for my physics class about the LHC and the following question arose:
Every text about the LHC says that it collides protons from a gas of hydrogen whose electrons were previously taken away.
Can collisions be achieved with any hydrogen isotope or is it only protium that is being used?
If so, how is protium separated from the other isotopes?
| Given their charge and mass, as soon as you start accelerating particles around a loop with a given magnetic field to deflect them, only particles with the correct mass/charge ratio survive. In effect you have built a giant mass spectrometer - other isotopes of hydrogen are too heavy, and the Lorentz forces are insufficient to deflect them down the tunnel.
As was pointed out by @DMcKee, the process of extracting the protons includes bends in the injector - any particles with the wrong Q/m ratio will be eliminated there, before making it into the main accelerator. You can see that in this diagram (from https://lhc-machine-outreach.web.cern.ch/lhc-machine-outreach/images/complex/Cern-complex.gif)
And of course the first part of the acceleration happens in a LINAC. Although it is a straight line, it selects for the right particles as the RF frequency (and the spacing of the acceleration stages) is tuned for a specific Q/m ratio. So anything that is not a proton will almost certainly never make it to the first bend - anything that did would "skid out" at that point.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/368819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Variation of Gauss Bonnet Invariant I am trying to do the variation of Gauss Bonnet Invariant, and the Gauss Bonnet Invariant is:
$$G = R^2 + R_{abcd}R^{abcd} - 4R^{ab}R_{ab}$$
The variation of $G$ is:
$$\delta G = 2R\,\delta R + \delta(R_{abcd}R^{abcd}) - \delta(4R^{ab}R_{ab})$$
I am having trouble computing the variation $\delta(R_{abcd}R^{abcd})$.
Can anyone please give me the solution in detail? I have the answer but I don't know how to solve it.
| You will need
\begin{equation}
\begin{split}
\delta \Gamma^c_{ab} &= \frac{1}{2} \left( \nabla_a h_b{}^c + \nabla_b h_a{}^c - \nabla^c h_{ab} \right) ~, \\
\delta R^a{}_{bcd} &= \frac{1}{2} \nabla_c \nabla_d h_b{}^a + \frac{1}{2} \nabla_c \nabla_b h_d{}^a - \frac{1}{2} \nabla_c \nabla^a h_{db} -\frac{1}{2} \nabla_d \nabla_c h_b{}^a -\frac{1}{2} \nabla_d \nabla_b h_c{}^a + \frac{1}{2} \nabla_d \nabla^a h_{cb} ~, \\
\delta R_{ab} &= \frac{1}{2} \left( \nabla_c \nabla_a h_b{}^c + \nabla_c \nabla_b h_a{}^c - \nabla^2 h_{ab} - \nabla_a \nabla_b h \right) ~, \\
\delta R &= - R_{ab} h^{ab} + \nabla_a \nabla_b h^{ab} - \nabla^2 h ~, \\
\delta \det g &= h \det g ~. \\
\end{split}
\end{equation}
where $h_{ab} = \delta g_{ab}$ and $h=g^{ab} h_{ab}$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/368965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the equation of relative motion for two objects moving in straight lines? If two objects, A and B, are moving in the same direction along straight lines in a plane, they might be diverging, converging or moving in parallel.
If we wish to describe B's motion with respect to A, what is the equation of motion?
For example, imagine that A is moving at 10 knots along the line described by the parametric equation:
x = 30t
y = 20t
and B is moving at 9 knots along the line described by the parametric equation:
x = 35t
y = 10 - 15t
what is the motion of B with respect to A? In other words, if we hold A to always be at the origin, what would be the parametric equation (and/or non-parametric equation) for B's motion?
I guess the shape will be a parabola or hyperbola, but am not sure how to compute it.
| We first rewrite the velocities of the boats so that they're in terms of knots.
Currently, we have written:
$$(x_A,y_A) = (30t, 20t)$$
$$(x_B,y_B) = (35t, 10-15t)$$
So that the velocities are
$$\vec{v_A} = (30, \;\;\;\;\;20)$$
$$\vec{v_B} = (35, \;\;-15)$$
We need to rescale them so that $\vert \vec{v_A} \vert= 10$ and $\vert \vec{v_B} \vert= 9$ as posed in the question.
$$\vec{v_A} \mapsto 10 \;\hat{\vec v}_A = \frac{10}{\sqrt{30^2 + 20^2}} (30, 20) \quad = \quad \frac{10}{\sqrt{13}}\left(3, 2 \right)$$
$$\vec{v_B} \mapsto 9 \;\hat{\vec v}_B = \frac{9}{\sqrt{35^2 + 15^2}} (35, -15) \quad = \quad \frac 9 {\sqrt{58}} (7, -3)$$
There. Now the velocities are rescaled so that their magnitudes are the speeds in knots, as desired.
The relative velocity is given simply by the velocity of $A$ subtracted from the velocity of $B$:
$$\vec{v_{B\mathbf r A}} = \left( \left( \frac{63}{\sqrt{58}}-\frac{30}{\sqrt{13}} \right), \quad \left( -\frac{27}{\sqrt{58}} - \frac{20}{\sqrt{13}} \right) \right) \; \mathrm{knots}$$
Note that the translation of coordinates doesn't affect relative velocity.
That the relative velocity is given by this subtraction of velocities follows from the linearity of the derivative operator, and the equation for relative velocity:
$$\vec v_{B \mathbf r A} = \frac{d}{dt} \big((x_B,y_B) - (x_A,y_A)\big)$$
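A short numerical sketch of the arithmetic above (the variable names and the choice of $t$ are mine; the initial relative offset $(0,10)$ comes from evaluating the two parametric equations in the question at $t=0$). Because the relative velocity is constant, the relative position simply advances along a straight line from that offset:

```python
import numpy as np

v_A = 10 * np.array([30.0, 20.0]) / np.hypot(30.0, 20.0)    # 10-knot velocity of A
v_B = 9  * np.array([35.0, -15.0]) / np.hypot(35.0, -15.0)  # 9-knot velocity of B
v_rel = v_B - v_A                                           # velocity of B as seen from A

r0_rel = np.array([0.0, 10.0])    # B's position relative to A at t = 0
t = 2.0                           # relative position at any time is r0_rel + v_rel * t
print("relative velocity:", v_rel)
print("relative position at t = 2:", r0_rel + v_rel * t)
```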
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/369116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
} |
How is Liouville's theorem important to statistical mechanics? I have come across Liouville's theorem in the first chapter of many statistical mechanics textbooks, still I don't quite get how it is important to statistical mechanics.
How is it related to statistical mechanics? How can it be applied to the study of ensemble theory or other area in statistical mechanics?
| It is extremely important if you want to understand the reason behind the irreversibility paradox. Basically, classical mechanics is microscopically reversible. Then why don't we ever see macroscopic reversibility for certain experiments? Why can we see a glass shatter, but the pieces never join back into the original glass? Liouville's theorem explains it.
It states that the (hyper)volume in the phase space is conserved.
Let $|\Gamma|$ be the size of the accessible region of phase space, namely $\int_V \prod_{i=1}^{N} dq_i\, dp_i$, where $q_i$ are the generalized coordinates and $p_i$ their momenta.
Think for example in the case of a vessel with a wall in the middle. One side is full of gas while the other half is empty (situation 1). Now let's take the wall away so that the gas can expand to the other side (situation 2).
Now there's twice the available space, thus the corresponding coordinate can take twice the range of values. The other coordinates remain the same and momenta are restrictionless again.
This is okay for one particle. Now, for the unthinkable number of particles in 1 mol of gas ($\sim 6\cdot 10^{23}$ particles), the accessible phase-space volume grows by a factor of $2^N$, which is just huge.
So the "size" of situation 1 is so small compared with situation 2 that it is totally negligible.
Conclusion: macroscopic reversibility in those cases IS possible, it is just not probable at all, since $|\Gamma_1|/|\Gamma_2|\ll 1$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/369198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Direction of dipole moment How do you find out the direction of the dipole moment of a charge distribution? For example, a sphere with charge density $\rho$ in the northern hemisphere and $-\rho$ in the southern hemisphere? How can you think about the direction in general?
| Generally speaking, the dipole moment of a neutral charge distribution points in a direction that goes from the places with negative charge to the places with positive charge. Moreover, if the distribution has any sort of rotational symmetry axis (either continuous or discrete) then the dipole moment needs to lie along that axis.
To go beyond that, then you need to step away from hand-waving statements and actually calculate the dipole moment through its definition,
$$
\mathbf p = \int \mathbf r \, \rho(\mathbf r)\mathrm d\mathbf r,
$$
normally through direct and explicit integration.
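For the specific example in the question, here is a quick Monte-Carlo sketch of the definition above (entirely my own illustration; the sample size and unit values are arbitrary). It confirms that the moment points along the $+z$ symmetry axis, from the negative to the positive hemisphere:

```python
import numpy as np

rng = np.random.default_rng(0)
R, rho, n = 1.0, 1.0, 200_000

# sample points uniformly inside the ball of radius R
pts = rng.uniform(-R, R, size=(4 * n, 3))
pts = pts[np.einsum('ij,ij->i', pts, pts) <= R**2][:n]

charge_density = np.where(pts[:, 2] > 0, rho, -rho)   # +rho north, -rho south
volume = 4 / 3 * np.pi * R**3
p = volume * np.mean(charge_density[:, None] * pts, axis=0)   # p = ∫ r ρ(r) d³r
print(p)   # ≈ (0, 0, π R^4 ρ / 2): points along +z, the symmetry axis
```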
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/369348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Planck length at relativistic speeds? I'm currently in high school so sorry if the answer to this question seems obvious but I’m only just learning about this stuff. I’ve been learning about special relativity, in particular length contraction and time dilation. I was wondering, if the Planck length is the smallest possible observable length, then what would an observer who is travelling at relativistic speeds measure the Planck length to be? Would it be the same or would he observe a smaller length?
| This is a very good question, and serious physicists such as Lee Smolin have been wondering about it. According to Special (and General) Relativity any inertial observer should get the same value for the Planck length in terms of its own units. So the Planck length calculated in a frame that I see moving should be the same number of that frame's units as the Planck length I calculate in mine. But since the moving frame has shorter units, its corresponding Planck length should seem shorter than a Planck length is to me.
One thing to remember, though, is that the Planck length is not a property of any actual object but rather just the scale at which the effects of quantum gravity should become apparent. So one interpretation of the contraction is that for a traveler moving relative to me, the effects of quantization of gravity on objects in the traveler's frame become apparent to me at longer scales than they do to the traveler. (And conversely, the traveler sees quantum effects on me on length scales at which I still don’t notice them.)
This is one of the many things that will have to be resolved by a proper quantum theory of gravity.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/369532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 7,
"answer_id": 1
} |
Heat to work or thermal energy to work? A system consists of different forms of energy like thermal energy, mechanical energy, chemical energy, nuclear energy etc. If these energies are to be transferred to another system (call it system 2), it can either be done as heat or work (or mass but here I take system approach) and again at the other system (system 2) it will be held as (change the) one of the forms of energy (thermal, mechanical chemical etc).
So when the second law implies that heat cannot be completely converted to work, is it actually implying that the thermal energy of a system cannot be completely transferred as work to another system? Or does it mean that energy that is transferring as heat cannot be changed to transfer-of-energy-as-work mid-transfer?
It cannot be about the quality of energy, because heat and work are not energy; they just imply transfer of energy.
| @Chester Miller So if energy is transferred to a system (as heat or work) and there is no transfer of energy from this system, then the energy transferred to the system will be saved as one of the forms of energy of the system, i.e. it increases the energy of the system (which may be thermal energy, mechanical energy, chemical energy etc.). If the energy transfer was as work, then this can be completely saved as either mechanical energy of the system or thermal energy of the system (we won't consider other energies of the system). But if the energy transfer was as heat, then this can be saved completely as the thermal energy of the system, but it cannot be completely saved as mechanical energy. Is this the case, or can the complete energy transferred to the system as heat also be saved as mechanical energy of the system? (as mechanical energy can easily be changed completely to work)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/369645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Convention mostly used for Fourier transform I know that mathematically it doesn't matter what sign of $i$ we use to Fourier-transform a wavefunction from real- to momentum-space and vice versa, as long as we consistently change the sign when transforming it back to its original space. But I've seen many text-books (lecture notes) use $-i$, i.e.
$$ \phi(\vec{k}) = N^{-1} \int_{V_r} \psi(\vec{r})\ \ \exp(-i\vec{k}\cdot\vec{r})\ \ d^3r\tag{1}$$
to FT from the real- to momentum-space, and $+i$, i.e.
$$ \psi(\vec{r}) = N^{-1} \int_{V_k} \phi(\vec{k})\ \ \exp(+i\vec{k}\cdot\vec{r})\ \ d^3k\tag{2}$$
to FT from the momentum-space back to real-space. I might have missed something from my quantum mechanics course, but is there some physical reason(s) behind this convention?
| A wave that propagates in the $\vec{k}$ direction is given by
$$\left<\vec{r}\Big|\vec{k}\right>=\frac{1}{\left(2\pi\right)^{\frac{3}{2}}}e^{+i\vec{k}\cdot\vec{r}}$$
and thus it is common to decompose a function as
$$\psi\left(\vec{r}\right)=\frac{1}{\left(2\pi\right)^{\frac{3}{2}}}\int\tilde{\psi}\left(\vec{k}\right)e^{+i\vec{k}\cdot\vec{r}}{\rm d}^{3}k$$
If you add a minus sign to the exponent, then $\tilde{\psi}\left(\vec{k}\right)$ is the amplitude of a wave propagating in the $-\vec{k}$ direction.
EDIT 1: As @ZeroTheHero has pointed out, I've implicitly assumed that the time dependent component is
$$T\left(t\right)=e^{-i\omega t}$$
such that a wave traveling in the positive direction is given by
$$\psi\left(\vec{r},t\right)=\frac{1}{\left(2\pi\right)^{\frac{3}{2}}}e^{i\left(\vec{k}\cdot\vec{r}-\omega t\right)}$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/369835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to represent a axisymmetric, stationary metric in a coordinate independent way? A classic example of a stationary, axisymmetric metric in GR is the Kerr metric. In Boyer-Lindquist coordinates $(t,r,\theta,\phi)$ it is obvious that the metric is independent of $t,\phi$ and so is stationary and axisymmetric.
Now, often in GR we want to work in a covariant, coordinate independent way and just deal with 4-vectors, tensors etc. In this case the metric is just represented by $g^{\mu \nu}$.
My question is, is there a way to enforce stationarity and axisymmetry onto this metric tensor $g^{\mu \nu}$, without reference to a coordinate system? For instance, can this be done with Killing vectors?
| The definition of stationary and axialsymmetric normally comes after one has specified a local coordinate expression, (writing $g_{\mu\nu}$ is still a local coordinates expression, since it has indices).
However as you might know, an abstract symmetry of a metric is associated to a Killing vector as has been pointed out. The fact that you can identify such Killing vector with stationarity or rotations, has to do with extra requirements on these Killing vector fields such as forcing it to be "timelike" or satisfying some algebra of rotations ($U(1)$ plus spacelike for the axial example or $SO(3)$ plus spacelike for the spherical case). Then you could work with vector fields that are Killing vectors and that satisfy some additional property you are interested in, perhaps this extra properties are what you should look into, if you don't want to specify a particular frame.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/370071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Wrong sign in Conformal Casimir The quadratic conformal Casimir in $d$-dimensional Euclidean space is given by
\begin{equation}
C = \frac{1}{2}L_{\mu \nu}L^{\mu \nu} - D^2 -\frac{1}{2}\left(P^\mu K_\mu + K^\mu P_\mu \right)
\end{equation}
as given for example in the beginning of lecture 6 here http://pirsa.org/C14038.
Since there is an isomorphism between the conformal group and $SO(d+1,1)$ it should be possible to get this result by simply expanding $\frac{1}{2} M^{ab}M_{ab}$ with the identifications (DiFrancesco Eq. (4.20))
\begin{equation}
\begin{split}
M_{-1,0} &= D \\
M_{-1,\mu} &= \frac{1}{2} \left( P_\mu -K_\mu \right) \\
M_{0,\mu}\ &= \frac{1}{2} \left( P_\mu +K_\mu \right) \\
M_{\mu \nu}\ &= L_{\mu \nu}
\end{split}
\end{equation}
and $\eta_{ab}= \mathrm{diag}(-1,1,...1)$. However absolutely every time I attempt to do this calculation I get
\begin{equation}
C = \frac{1}{2}L_{\mu \nu}L^{\mu \nu} - D^2 +\frac{1}{2}\left(P^\mu K_\mu + K^\mu P_\mu \right).
\end{equation}
There are many different sign conventions out there but I don't think that's the problem because my wrong Casimir really does not commute with the elements of the algebra.
I know it's not the most exciting calculation to do but I would eternally grateful to whoever can point out where the flaw lies.
| In Euclidean signature I get
$$
C_2=\frac{1}{2}L_{\mu \nu}L^{\mu \nu} + D^2 +\frac{1}{2}\left(P^\mu K_\mu + K^\mu P_\mu \right)
$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/370318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Comparison between two flavor neutrino oscillation and a system of up-spin and down-spin states of an electron? In the system of up-spin and down-spin states of an electron, we can write a general state of electron at time $t$ as
$$\left|\psi(t)\right>=a\left|\uparrow\right>+b\left|\downarrow\right>,$$
where $\left|\uparrow\right>$ is up-spin state and $\left|\downarrow\right>$ is down-spin state. We also say that $e^{-}$ can oscillate between these two spin states (correct me if I am wrong). Now coming to neutrino oscillation phenomenon, we can write a general state of the neutrino as
$$\left|\Psi(t)\right>=a\left|\nu_e(0)\right>+b\left|\nu_{\mu}(0)\right>.$$
Now, my question is, why do we need mass eigenstates ($\nu_1$ and $\nu_2$) to describe neutrino oscillations, while we don't need such type of thing to explain the oscillation (again I would say, correct me if I am wrong) between $e^{-}$'s up-spin state and down-spin state?
| 1) The spin states for the electron correspond to two possible eigenvalues, +1/2 or -1/2.
2) The neutrino states involve two masses for the neutrino.
Spin is a conserved angular momentum variable.
Mass is not a conserved quantity.
A free particle carries a number of conserved quantum numbers, and its momentum, energy and angular momentum are also conserved. When it leaves an interaction vertex, angular momentum conservation means it will be in one of the two spin states, and it cannot change that state until it interacts again. The state you have written describes an electron only if angular momentum is not an eigenstate of the interacting system under consideration.
Because mass is not a conserved quantity, but only the total energy-momentum vector leaving the interaction vertex, and because the masses of neutrinos are not associated with charges (charge is a conserved quantum number), oscillations can happen. (It is related mathematically to how virtual particles have to conserve quantum numbers while the exchanged particle is off mass shell.)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/370464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Lorentz transform of force
If a particle of mass $m$ and velocity $v$ is moving due to a constant electric force, what would the force be in the frame where the particle's velocity is 0?
To try and solve this I used the four-force and did a Lorentz transform of the four-momentum. However, I got different answers in each component of the force, and if this scenario was taken as one-dimensional I got no change in the force. So I was wondering how to find an equation relating the new force to the old force.
| The Lorentz force must be transformed in the same way as other forces in special relativity.
Avoiding a tensor treatment, you can say that
$${\bf F'} = {\bf F_{\parallel}} + \frac{1}{\gamma}{\bf F_{\perp}}, $$where $\gamma$ is the usual Lorentz factor and the subscripts refer to the components of the Lorentz force in the rest frame that are parallel and perpendicular to the relative velocity between the rest frame and moving frame and the "unprimed" frame is the rest-frame of the particle.
However, I don't understand your question. A particle which is subject to a constant force will not be moving with a constant velocity except at some instantaneous time. Are we meant to assume that the velocity arises only from the acceleration due to the electric field so that we can assume that the electric field and velocity are parallel? If so, then you can see from my equation above that the Lorentz force on the particle is unchanged. The reasoning is that the magnetic field, that must be present in the rest frame of the particle, exerts no force since ${\bf v} \times {\bf B}=0$ and ${\bf E_{\parallel}'}={\bf E_{\parallel}}$. Any component of the electric Lorentz force that is in fact perpendicular to ${\bf v}$ in the primed frame will be increased (in the absence of a magnetic field in the primed frame) by a factor of $\gamma$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/370692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why do these patterns form in a captured image while zooming?
This is a GIF video that shows zooming of an image of a computer LCD screen which I captured using my mobile phone. You can see that some fringes are forming and disappearing, and hence some patterns form while I zoom in or out. How would you explain this phenomenon?
1. I could also see the same kind of pattern formation in a photograph of a mobile screen.
2. Try zooming this image, and check whether you can see it.
3. When I tried to reduce the image file size to upload it here, I could see that reducing the file size below a certain boundary takes away that effect. So I am not sure you will see the effect in the above image; that's why I used a GIF video of zooming.
4. I should also check whether I get the same effect when trying to zoom the original picture using a computer (I will update this when done).
If more details are needed please ask me.
Whatever the case, how is this happening?
| https://xkcd.com/1814/
Apparently this isn't enough text so here is a real explanation
"language": "en",
"url": "https://physics.stackexchange.com/questions/370817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Lattice and basis vectors for a NaCl structure I am supposed to obtain the selection rules of a NaCl lattice considering a rhombohedral set of lattice vectors but I am not getting any valid results. My guess is that I am not choosing the basis correctly.
I define my FCC lattice vectors as
$a_1=\frac{a}{2}(1,0,1)$, $a_2=\frac{a}{2}(-1,0,1)$ and $a_3=\frac{a}{2}(0,1,1)$
and my basis as
Na$(0,0,0)$ Cl$(\frac{1}{2},\frac{1}{2},\frac{1}{2})$
which results in no extinctions since
$f_{Na}+f_{Cl}e^{i\pi(h+k)}\neq0$
Is my choice of basis wrong and/or is there something else I am not taking into account?
| You should ask this on the chemistry forum.
If I remember it correctly, I think NaCl is a cubic lattice where each chlorine atom is surrounded by 6 sodium atoms at equal distance and vice versa. So with this information, we can start like:
The Na basis atom is at (0,0,0) and the Cl basis atom is at (0.5,0.5,0.5), or vice versa. While you can reproduce the lattice with Cl at (0.5,0,0), that would mess up the primitive cell. The second basis point has to be inside the FCC unit. Of the 4 octahedral sites that could serve as the second basis point (the 3 edge-centre half-sites and the body-centre site), only the body centre lies within the primitive FCC cell. Hope this helps.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/370937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Running vs. walking in slippery condition We are experiencing warmer weather than normal, which is causing the snow to melt and re-freeze daily. This has led to very slippery conditions.
A few years ago, I was running in similar conditions, and I got to an area where the ice started to feel more slippery. So my reaction was to stop running and start walking. To my surprise, it was harder to walk in the icy conditions than it was to run: it felt as if I was slipping with every single step.
Today, in light of the conditions, I tried the experiment again, and my sensations seemed to confirm what I felt the previous time.
Is there any physical basis to what I felt? Is it possible that running on ice produces a more stable footing than walking does?
If it makes any difference, the ice is of course not smooth, and one is usually slipping on the "slopes" of small creases.
| https://www.sciencedaily.com/releases/2011/03/110324103610.htm
"Biomechanics researchers Timothy Higham of Clemson University and Andrew Clark of the College of Charleston conclude that moving quickly in a forward, firm-footed stance across a slippery surface is less likely to lead to a fall than if you move slowly. Approaching a slippery surface slowly hinders the necessary task of shifting the center of mass forward once foot contact is made."
"The key to avoiding slips seems to be speed and keeping the body mass forward, slightly ahead of the ankles after the foot contacts the ground."
"Once the knee passes the ankle during contact with slippery ground, slipping stops."
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/371190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
What's the purpose of shorting the base and collector of a transistor in current mirrors? I often see this diagram of a current mirror (as shown below).
As far as I know, the purpose of a current mirror is to ensure that the collector currents of both transistors are equal.
This can simply be achieved by making sure that their base-emitter voltages are the same. This could be done without shorting the base and collector of the left-hand-side transistor... Is shorting it redundant in any way?
| Shorting collector to base is NOT redundant, it serves the useful purpose of making conduction in the two transistors similar. In particular,
the construction of a transistor includes a thin base region between emitter and collector, and that base region has an effective electrical
resistance. Forcing all current in one transistor through
the base creates a voltage drop in the base spreading resistance
(often symbolized Rbb), while the (second) transistor takes
only a small fraction of its emitter current through that resistance.
Ebers-Moll ideal transistor:
$$I_c = \alpha I_{sat} \exp\left(\frac{q_e V_{be}}{k T}\right)$$
and with base resistance
$$I_c = \alpha I_{sat} \exp\left(\frac{q_e (V_{be} - I_b R_{bb})}{k T}\right)$$
In the case of base-collector connection, Ib is a small fraction of the collector current. If you don't connect the collector of the leftmost transistor, though, Ib is identical to the emitter current (which means it is larger than the collector current). To match the two transistors' operation, you want the base-collector connected.
The open-collector use of B-E diode also changes the 'alpha' factor to exactly one, but that's less important.
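To put rough numbers on the $R_{bb}$ argument, here is a small sketch (all component values are assumptions of mine, and I use the ideal current as a first estimate of the base current rather than solving self-consistently). It compares the exponential suppression when only $I_b \approx I_c/\beta$ flows through $R_{bb}$ (base-collector tied) with the case where the whole emitter current must leave through the base (collector left open), which is the comparison made above:

```python
import numpy as np

kT_q  = 0.02585      # thermal voltage at ~300 K (V)
I_sat = 1e-15        # assumed saturation current (A)
alpha = 0.99
beta  = alpha / (1 - alpha)
R_bb  = 200.0        # assumed base spreading resistance (ohm)

def I_c(V_be, I_b):
    # Ebers-Moll with the internal drop across R_bb, as in the equations above
    return alpha * I_sat * np.exp((V_be - I_b * R_bb) / kT_q)

V_be  = 0.72
ideal = I_c(V_be, 0.0)                   # no base drop at all
tied  = I_c(V_be, ideal / beta)          # base-collector tied: I_b ~ I_c / beta
open_ = I_c(V_be, ideal / alpha)         # collector open: whole emitter current in the base
print(tied / ideal, open_ / ideal)       # ~0.9 versus roughly 1e-4 with these numbers
```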
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/371360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does salt affect the boiling time of water? If I have 1 cup of water on the stove and another cup of water with a teaspoon of salt,
would the salt change the boiling time of the water?
| Yes and no.
It will not change the boiling time of water. If you add salt, then it's not water anymore, it is now a new solution (salt + water). It will change the boiling point of the solution. Because that solution now has a different boiling point, if nothing else changes, it will take a different amount of time to boil.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/371577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
How does friction increase the energy of a system? I had this doubt while thinking through a question about centre of mass. Consider a system consisting of a man standing on one end of a plank which rests on a frictionless surface. Now the man starts running towards the other end of the plank (friction is present between the man and the plank). Once he reaches the end of the plank he jumps down, and both the man and the plank keep on sliding endlessly on the surface with equal and opposite momentum. Although the net momentum is still zero, both of them now have some velocities and thus the kinetic energy of the system has increased. Therefore work is done on the system by friction. This has prompted the following questions in my mind:
1) How is the kinetic energy of the system defined? In this case, if we add the individual kinetic energies of the 2 bodies, we get a net increase in the KE of the system. However, if we take it as $\frac{1}{2}m_{sys}v_{cm}^2$ the KE will still be zero, as the velocity of the centre of mass is zero.
2) If friction is doing work on the system, which energy is being converted into mechanical energy? As this is an isolated system (assuming no form of heat exchange is present between the bodies and the surroundings) the total energy should always remain conserved. I thought that it must be the tiny deformations caused in the bodies by friction, resulting in a change of potential energy which is converted into KE. Is this right? Or will the bodies get cooler to keep the energy conserved?
| There was no work done by friction in your example. Work is force times displacement. Friction acts between the man's feet and the board they contact. During this contact, each foot does not move relative to the board. Therefore the displacement is zero and so is the work done by friction. The actual work here is done by the man's muscles, by converting the chemical energy of food to mechanical energy.
The kinetic energy is defined the way you stated. The sum of kinetic energies of all bodies is non-zero, but the kinetic energy of the center of mass is zero. There is no contradiction here.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/371707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why are high electronegativity atoms found in the periodic table's upper right corner? Looking at a graphical representation of electronegativity in the periodic table reveals a pattern that, noble gases aside, electrognetivity increases as you move toward the upper right hand corner of the table. What causes this?
The electronegativity seems to anti-correlate with the empirical atomic radii, which don't correspond perfectly with the calculated radii, so I would imagine that the explanations of these phenomena are shared, for what it's worth.
| Mendeleev's periodic table of elements is based on the number of protons in an atom. Also, in an electrically neutral atom,
$$no. \ of\ electrons=no. \ of \ protons$$
So for stability, an octet configuration is sought by the atom. Electronegative elements, since they have a higher number of electrons in the outermost shell, tend to gain an electron to complete the octet, and according to Mendeleev's arrangement, atoms with more electrons in the outermost shell are placed on the right-hand side.
The most electronegative elements sit at the upper right because they tend to be smaller atoms: the nucleus exerts a greater pull on the outer electrons, so they can attract electrons to complete the octet more easily than larger-radius atoms, making them more electronegative.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/372329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why can the Klein-Gordon field be Fourier expanded in terms of ladder operators? Using the plane wave ansatz
$$\phi(x) = e^{ik_\mu x^\mu}$$
the solution to the Klein-Gordon equation $(\Box + m^2) \phi(x) =0$ can be written as a sum of solutions, since the equation is linear and the superposition principle holds, as
$$\phi(x) = \sum_{{k}} \left( Ae^{ik_\mu x^\mu} + Be^{-ik_\mu x^\mu} \right).$$
How does one find the coefficients? More exactly, why does it turn out they are the annihilation and creation operators with the factor $1/\sqrt{2E}$?
The various books and sources I've checked just confused me even more. Peskin and Schroeder just plug in the integral equation (Fourier modes) by analogy with the harmonic oscillator solution. Schwartz gives a very strange reason that the energy factor is just for convenience. In Srednicki the author writes it as $f(k)$ without an explicit form. In Mandl and Shaw, they just state the equation without any justification.
My best guess is that those come from the quantization process, but how does one do it in this case explicitly?
| The action functional of the real scalar field is:
$$ \mathcal{A} = \frac{1}{2} \int\mathrm{d}^4 x \left(\partial_a \phi \partial^a \phi - m^2 \phi^2 \right) = \int \mathrm{d}t \int \frac{d^3 \mathbf{k}}{(2\pi)^3} \frac{1}{2}\left[ |\dot{q}_{\mathbf{k}}|^2 - \omega^2_{\mathbf k}|q_{\mathbf k}|^2 \right]$$
where the second equality refers to an infinite number of harmonic oscillators, related to the scalar field by a spatial Fourier transform:
$$ \phi(t,\mathbf x) = \int \frac{\mathrm{d}^{3}\mathbf{p}}{(2\pi)^3}\;q_{\mathbf{p}}(t)e^{i\mathbf{p}\cdot\mathbf{x}}$$
Note that we're operating in the Heisenberg picture, where only operators evolve in time and not states. The quantization problem of the field has now been reduced to quantization of harmonic oscillators.
Expressing the harmonic oscillator in terms of creation and annihilation operators, which is a different basis:
$$ q(t) = \frac{1}{\sqrt{2\omega}}\left[a(t) + a^{\dagger}(t) \right] $$
The time evolution of the creation and annihilation operators can be found by solving Heisenberg's equation of motion, yielding the result for the harmonic oscillator:
$$ q_{\mathbf p}(t) = \frac{1}{\sqrt{2\omega_{\mathbf p}}}\left[a_{\mathbf p}e^{-i\omega_{\mathbf p}t} + a^{\dagger}_{-\mathbf p}e^{i\omega_{\mathbf p}t} \right] $$
where the $-\mathbf p$ subscript in the second operator is introduced for later convenience. Substituting this into the field expression as a Fourier transform:
$$ \phi(t,\mathbf x) = \int \frac{\mathrm{d}^{3}\mathbf{p}}{(2\pi)^3}\frac{e^{i\mathbf{p}\cdot\mathbf{x}}}{\sqrt{2\omega_{\mathbf p}}}\left[a_{\mathbf p}e^{-i\omega_{\mathbf p}t} + a^{\dagger}_{-\mathbf p}e^{i\omega_{\mathbf p}t} \right] = \int \frac{\mathrm d^3{\mathbf p}}{(2\pi)^3}\frac{1}{\sqrt{2\omega_{\mathbf p}}}\left[a_{\mathbf p} e^{-ip_{\mu}x^{\mu}} + a_{\mathbf p}^{\dagger}e^{ip_{\mu}x^{\mu}}\right]$$
by flipping the sign of $\mathbf p$ in the second term.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/372425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 1
} |
Is Heisenberg's Uncertainty Principle applicable to light? Can we apply the Uncertainty Principle to light? If so wouldn't it violate it because we just need to find the position of light, since we would already determine it's momentum(from the wavelength of light used)?
Using the wavelength you can find only the magnitude of the momentum, $\sqrt{P_x^2 + P_y^2 + P_z^2}$. For finding its direction you will need its wavevector $\vec{k}$. Heisenberg's Uncertainty Principle applies to each component of momentum and the corresponding component of position; for example, you can't find $P_x$ and $x$ simultaneously with arbitrary precision. Heisenberg's Uncertainty Principle in vector form: $\Delta\vec{P}\cdot\Delta\vec{r} \geq 3\frac{h}{4\pi}$
There is a nice demonstration of this in an experiment of diffraction of light through single slit in this video by Veritasium: https://www.youtube.com/watch?v=a8FTr2qMutA&t=4s
In the experiment, when the light is allowed to pass through a thin slit, it's position is being confined to the slit. So as the slit becomes smaller and smaller it's position gets confined to a smaller region of space. Now, Heisenberg's principle has to work, so its momentum spreads out in different directions (which is uncertainty), giving us the pattern of diffraction.
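A rough numerical sketch of that slit experiment (my own illustration; the laser wavelength and slit widths are arbitrary choices). Treating the slit width as $\Delta x$ and the momentum spread out to the first diffraction minimum as $\Delta p_x$ gives a product of order $h$ no matter how narrow the slit is, so confining the position more tightly always widens the momentum spread:

```python
import numpy as np

h = 6.62607015e-34          # Planck constant (J s)
wavelength = 532e-9         # a green laser, an arbitrary choice (m)
p = h / wavelength          # magnitude of the photon momentum

for slit_width in [50e-6, 20e-6, 5e-6]:            # delta-x: the slit width
    theta = np.arcsin(wavelength / slit_width)     # angle of the first minimum
    delta_p_x = p * np.sin(theta)                  # transverse momentum spread
    print(slit_width, delta_p_x * slit_width / h)  # ~1 in this crude estimate
```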
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/372537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Momentum of a photon when considered as a particle According to dual nature of light, it is said to have both particle as well as wave nature. When we think of it as a wave, its momentum can be found out from
De Broglie's equation i.e λ = h/mv, provided we know its wavelength.
But how do we calculate the momentum of a photon when we think of it as a particle?
| From relativity, we know that the energy-momentum relation is given by:
$$
E^2 = (pc)^2 + (mc^2)^2
$$
We can see that when $m\rightarrow0$, $E = pc$. The energy of the photon can be found using $E=hf$ and you can solve for $p$.
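A tiny numeric sketch of that (the wavelength is an arbitrary choice of mine), showing that $p = E/c$ and $p = h/\lambda$ give the same number:

```python
h, c = 6.62607015e-34, 299792458.0   # SI units
wavelength = 500e-9                  # an arbitrary visible-light wavelength (m)

E = h * c / wavelength               # photon energy, E = hf = hc/lambda
p = E / c                            # massless particle: p = E/c
print(p, h / wavelength)             # the two expressions agree
```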
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/372666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is the neutrino oscillation probability formula independent of the Majorana phases? Is there a simple way to understand why the neutrino oscillation probability formula is independent of the Majorana phases?
| I assume you mean the routine formula, for Greek flavor indices and Latin mass eigenstate indices,
$$
P_{\alpha\rightarrow\beta}=\left|\left\langle \nu_\beta(t)|\nu_\alpha\right\rangle \right|^2=\left|\sum_i U_{\alpha i}^{*}U_{\beta i}e^{ -i m_i^2 L/2E }\right|^2.
$$
Now the standard result of Bilenky et al 1980 extends the PMNS matrix to
$$
U\mapsto U P,
$$
where $P\equiv \operatorname {diag}(1,\exp (i\alpha_{21}/2), \exp(i\alpha_{31}/2))$. One calls these two αs Majorana phases, all the remaining phases of the diagonalized Majorana masses having been absorbed into the conventional-looking "CP-phase" in U. It's just a name.
Now, for this formula, visibly, the sum over $i$ involves, for each $i$, respective factors of $\exp(-i\alpha_i/2 - im_i^2 L/2E + i\alpha_i/2)$, involving
P*, P, and the propagator term. But the respective P* and P terms in the exponents of this diagonal cancel each other, and, presto!, the propagator is not modified, $=\exp(-im_i^2 L/2E)$, and all trace of the two Majorana phases is gone in this particular formula.
Note this is predicated on the highly restricted and peculiar nature of "conventional" neutrino oscillation experiments, that cannot measure real Majorana processes, violating lepton number, $\Delta L=2$, like neutrinoless double β decay --experiments which can do that, like CUORE, can also, in principle, access these extra phases, but are still languishing empty-handed. Bilenky et al, in 1980, also fantasize about possible right-handed extra interactions, etc... that might allow access to these phases in oscillation experiments.
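A quick numerical check of that cancellation (a sketch of my own; the mixing angle, the "masses" and $L/E$ are arbitrary test numbers in a two-flavour toy model). Multiplying $U$ by the diagonal Majorana-phase matrix $P$ leaves the oscillation probability untouched:

```python
import numpy as np

def osc_prob(U, m2, L_over_E, a=0, b=1):
    """|sum_i U*_{a i} U_{b i} exp(-i m_i^2 L / 2E)|^2 in arbitrary units."""
    phases = np.exp(-1j * m2 * L_over_E / 2.0)
    return abs(np.sum(np.conj(U[a, :]) * U[b, :] * phases)) ** 2

theta = 0.6
U = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
m2 = np.array([0.0, 2.5])                  # "m_i^2" in matching arbitrary units
L_over_E = 1.3

P = np.diag([1.0, np.exp(1j * 0.8 / 2)])   # one Majorana phase, alpha_21 = 0.8
print(osc_prob(U, m2, L_over_E))
print(osc_prob(U @ P, m2, L_over_E))       # identical: the phase drops out
```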
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/372935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Physical meaning of gauge choice in electromagnetism In electromagnetism, it is often referred to gauges of the electromagnetic field, such as the radiation or Coulomb gauge. As far as I know, the definition of a gauge helps us to redefine the problem in terms of a vector potential and a scalar potential that, since we have some freedom in choosing them, can be chosen in cleverest way it is possible for the given problem.
Here comes my question: is the choice of the gauge a mere mathematical simplification of the given problem? Does this choice have a physical meaning?
My troubles are actually in understanding the physical meaning of this choice of the gauge and what will change if I choose a different gauge.
| Choosing one gauge or another has the same physical importance as choosing one inertial reference frame or another... the possibility of doing that gets you a lot of truly profound physical implications (by Noether's theorem, for example), so both answers are yes, in some sense.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
For all practical purposes can light be bent (without the help of gravity) or just reflected? For example, if a single beam of light was directed directly at the tangent of a semi circular mirror, would it be considered bending or redirecting many times to form a near circular pattern? When I say bend I mean in a curved trajectory, not at an angle.
| a beam of light can be bent through an angle by sending it through a wedge-shaped piece of glass, requiring neither gravity nor reflection. this phenomenon is called refraction and can be studied in detail on wikipedia.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
What if cosmological constant was zero? Physicists always ask why the cosmological constant is not exactly zero!
I would ask here, what if cosmological constant was zero? The universe wouldn't expand and matter would exert gravitational force and shrink the universe into a big crunch!
So, why physicists want the constant to be zero then? I must have missed something here!
Can cosmological constant be zero since we see the universe already expanding? How would the universe support life further as some claim?
| To add to Anders Sandberg's answer, the Friedmann equations are really the crucial piece of the puzzle here. These equations assume General Relativity, as well as homogeneity and isotropy (i.e. the universe looks the same in every direction + looks the same at every point). Manipulating the Friedmann equations yields a critical density
$$\rho_c = \frac{3H^2}{8\pi G}$$
The big crunch only happens if the matter density of the universe is larger than this. We observe a matter density that's significantly less than this, which means that even if there were no cosmological constant, the universe will keep expanding. Gravity will slow the expansion down, but it'll never slow to the point where the universe reverses and starts contracting.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Reason for 6π factor in Stokes' law According to Stoke's law, the retarding force acting on a body falling in a viscous medium is given by $$F=kηrv$$ where $k=6π$.
As far as I know, the $6π$ factor is determined experimentally. In that case, how is writing exactly $6π$ correct since we obviously cannot experimentally determine the value of the constant with infinite precision?
| If you have read that the 6π coefficient is determined experimentally, then you would also have read that this applies to spherical objects with very small Reynolds numbers in a viscous fluid - Stokes' law is derived by solving the Stokes flow limit for small Reynolds numbers of the Navier–Stokes equations.
We cannot determine the value of any constant with infinite precision but we can often determine them to a level of precision where the effect of the uncertainty becomes negligible.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 2,
"answer_id": 1
} |
How do partons' spin/orbital angular momenta contribute fractionally to the nucleon spin structure? Experimentally it is found that the spin and orbital angular momenta of quarks and gluons contribute fractionally to the total nucleon spin $1/2$, as in:
$$\frac{1}{2} =\frac{1}{2} \Sigma_q + \Sigma_g + L_q + L_g$$
But how do these contributions break into fractional parts of $1/2$ if the individual partons themselves each have quantized angular momenta in units of $1/2$? Or, what is the technical meaning of the 'contributions' in this equation, if it is not so naive?
| The quark spin is (almost*) unambiguous, but the other three contributions to the total angular momentum turn out to be gauge dependent. Except for certain special projections in certain momentum limits, it is not possible to observe the gluon spin, gluon orbital angular momentum, and quark orbital angular momentum separately. Performing a gauge transformation mixes these terms together.
However, any sum of operators that obey the angular momentum commutation relations itself obeys the same commutation relations; hence, the sum represents a form of angular momentum. The total angular momentum is thus quantized in the way that all angular momenta are, in integer or half-integer multiples of $\hbar$. Since the overall strong interaction Hamiltonian is rotation invariant, its eigenstates may be made simultaneous eigenstates of the total angular momentum operator. A nucleon is a strong eigenstate with total angular momentum $\frac{1}{2}$. A $\Delta^{++}$ has total angular momentum $\frac{3}{2}$. A $\pi^{0}$, made up of only two quark field quanta (a quark and an antiquark) has total angular momentum $0$.
*It turns out that quantum corrections (the chiral anomaly) even make the total quark spin scheme dependent. The spin depends on the extent to which the gluon field is polarized.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Matter effects in neutrino oscillation The neutrino oscillation probability in matter is given as:
$$P_{\nu_e \rightarrow \nu_{\mu}} = \sin^2 2\theta_m \, \sin^2\left( 1.27\, \Delta m_m^2 \frac{L}{E} \right)$$
where $\theta_m$ and $\Delta m_m^2$ are the matter-modified (effective) mixing angle and mass-squared splitting.
Now what I do not understand is the statement "As the energy increases, the probability of oscillation within the sun through the matter effect increases, so the survival probability decreases". I have read this (page 28) in a couple of books, but I am unable to cross-check it against the formula, because as I try to increase the energy, both $\sin^2$ factors decrease.
So what's going wrong here?
| The transition probability $P_{\nu_e \rightarrow \nu_{\mu}}$ is indeed decreasing with neutrino production energy, i.e., the survival probability $P_{\nu_e \rightarrow \nu_{e}}=1-P_{\nu_e \rightarrow \nu_{\mu}}$ is increasing. Why? The first sine term ($\sin^2 2 \theta_m $) in the oscillation equation (your first equation) determines the amplitude of the oscillations. The limit of that term as $V \rightarrow \infty$, i.e., as energy or density increases towards infinity, is zero:
$$ \begin{align}
\sin 2 \theta_m = \frac{\sin 2 \theta}{ \sqrt{ \left( V / \Delta m^2 - \cos 2 \theta \right)^2 + \sin^2 2 \theta} } \\
\lim_{V\to \infty} \; \left[ \frac{\sin 2 \theta}{ \sqrt{ \left( V / \Delta m^2 - \cos 2 \theta \right)^2 + \sin^2 2 \theta} } \right] = 0
\end{align}
$$
As $\sin^2 2 \theta_m$ descends towards zero asymptotically with increasing neutrino production energy (and/or production electron number density) above MSW resonance any oscillations in the second sine term, $\sin^2 \left( 1.27 \Delta m_m^2 \frac{L}{E} \right)$, of the oscillation equation will accordingly be reduced in amplitude, going to zero in the limit as well.
The second sine term is not monotonically decreasing (it might appear to be so if only a few samples are calculated), but is rather oscillating rapidly, hence the smeared oscillations in the graph.
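As a quick numerical illustration of this suppression (the vacuum mixing angle below is an arbitrary illustrative value, and $V$ is expressed in the same units as $\Delta m^2$ so that only the ratio enters):

    import math

    def sin2_2theta_m(V_over_dm2, theta):
        """Matter-modified amplitude factor sin^2(2*theta_m) from the equation above."""
        s2, c2 = math.sin(2 * theta), math.cos(2 * theta)
        return s2**2 / ((V_over_dm2 - c2)**2 + s2**2)

    theta = 0.59   # illustrative vacuum mixing angle, radians
    for x in [0.0, 0.5, 1.0, 2.0, 5.0, 20.0]:   # x = V / dm^2 grows with energy and density
        print(f"V/dm^2 = {x:5.1f}  ->  sin^2(2 theta_m) = {sin2_2theta_m(x, theta):.3f}")

The amplitude rises to 1 at the MSW resonance ($V/\Delta m^2 = \cos 2\theta$) and then falls toward zero as $V$, i.e. the production energy or density, keeps increasing, which is the suppression described above.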
The equations you gave should not be used for solar neutrino analysis except at production energies much lower than the MSW resonance.
You can obtain proper equations in the PDG 2018 Review of Particle Physics, Section 14, a free download at lbl.gov.
If the above terse statement of the problem is insufficient you can view an article demonstrating how to graph using the posted equations (with Python code) and how they malfunction at higher energies (with a link to a proper equation and an article describing that) at this link: MSW flavor calculation
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/374171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why is the unit vector represented as a partial derivative in GR? Can someone give a good intuitive explanation of why we represent the unit vector as a partial derivative in GR, and of what it means?
| We'd like to say that a (unit) tangent vector is a direction on a manifold. But we can only define and distinguish directions because there is something that differs from point to point on the manifold, that is, we have a non-constant 'testing' function. So the vector is the direction in which we differentiate functions defined on the manifold. Hence the partial-derivative notation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/374271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Strange interference pattern of light on top of tower, pattern was seen on air. What was it? I was just looking out of window at night when I saw a tower with a light on top. It had a red light.
When I looked at it through my net curtains, I saw an interference pattern: the main light itself with bands of light on either side of it (like the interference of waves).
Although there was no screen, why did I see it? Did the air act like a screen? Was it because the net of the curtain acted like slits, which produced that pattern? Or is it some other simple diffraction effect of light? I don't think it is due to some other trivial cause, because the interference pattern was clear.
Any process that can explain this phenomenon?
| What you are seeing is an interference pattern, similar to double slits or diffraction gratings. You can confirm this comparing the light pattern when you are looking straight through the curtain (when the curtain is perpendicular to the line from you to the light) and when the curtain is at an angle. Angling the curtain makes the threads appear closer together, so the interference fringes will spread out more.
For the curious, here's what a far-off stop light looks like through motel curtains.
The pattern is easier to see when the source is monochromatic (LEDs and the like).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/374639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Photon density in radiation: number of particles in an EM wave Since light spreads out in all directions from its source, how far must it travel to become individual photons? Can we consider, that is, that as the intensity of light becomes weaker as it spreads, we in fact measure fewer photons, until only a few remain?
Or another way to look at it:
What is the density of photons in a given beam of light or for that matter any electromagnetic wave?
| This is the double slit experiment one photon at a time.
Single-photon camera recording of photons from a double slit illuminated by very weak laser light. Left to right: single frame, superposition of 200, 1’000, and 500’000 frames.
In 2003, A. Weis and R. Wynands at the University of Bonn (Germany) designed a lecture demonstration experiment of single-photon interference from a double slit. Light from a laser pointer was so strongly attenuated that at each instant there was only a single photon between the double slit and the detector. The diffracted light was recorded by a single-photon imaging camera consisting of an image intensifier (multichannel plate, MCP) followed by a phosphor screen and a CCD camera.
Looking at the frame on the outer right, where the continuum of the laser pointer classical interference is seen, and dividing by the number in the first frame, you can get an idea of how many photons are involved.
Alternatively, using the Poynting vector describing a monochromatic light beam, i.e. the energy per second per cm^2, and dividing by $h\nu$, where $h$ is the Planck constant and $\nu$ the frequency of the classical beam, i.e. by the energy of one photon, you will get the number of photons passing through that area per second. If you consider it a point source, you could use geometry to tell you how far away the number of photons will fall to a number comparable to the first frame on the left.
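As a rough illustration of this second method (the 1 mW power and 650 nm wavelength are just typical laser-pointer values, not numbers taken from the experiment above):

    h = 6.626e-34      # Planck constant, J s
    c = 3.0e8          # speed of light, m/s

    P = 1e-3           # assumed beam power: a 1 mW laser pointer
    wavelength = 650e-9
    E_photon = h * c / wavelength      # energy of one photon, J
    rate = P / E_photon                # photons per second carried by the beam

    print(f"energy per photon ~ {E_photon:.2e} J")
    print(f"photon rate       ~ {rate:.2e} photons per second")   # of order 3e15 per second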
Attenuation of a beam can easily be achieved by controlling the energy supplied to the source, after all, as in the double slit experiment.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/374768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Is a canonical transformation equivalent to a transformation that preserves volume and orientation? We have seen the reverse statement: Liouville's theorem states that canonical transformations preserve volume (and orientation as well). Is the reverse true?
If I demand that a map from phase space to phase space preserve volume, is it necessarily a canonical transformation?
I couldn't come up with a counter example, that's why I ask.
| In dimension $2n>2$ they are not equivalent since (for time-independent transformations) canonical is equivalent to $$\sum_{k=1}^n dq^k\wedge dp_k = \sum_{k=1}^n dQ^k\wedge dP_k\tag{1}$$
whereas conservation of oriented volume means
$$dq^1\wedge \cdots \wedge dq^n \wedge dp_1\cdots \wedge dp_n = dQ^1\wedge \cdots \wedge dQ^n \wedge dP_1\cdots \wedge dP_n\:.\tag{2}$$
The former is much more restrictive. The latter only requires that the Jacobian matrix has determinant $1$. Already with $4\times 4$ matrices there are easy counterexamples.
$Q^1 = aq^1$,
$Q^2= b q^2$,
$P_1 = b^{-1} p_1$,
$P_2 = a^{-1}p_2$
where coordinates are over $\mathbb R^4$ and with constants $a,b>0$ satisfying $a\neq b$.
This transformation satisfies (2) but not (1).
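A quick numerical check of this counterexample (a sketch, with arbitrary constants $a\neq b$):

    import numpy as np

    a, b = 2.0, 3.0
    # Jacobian of (Q1, Q2, P1, P2) with respect to (q1, q2, p1, p2)
    J = np.diag([a, b, 1 / b, 1 / a])

    # Standard symplectic form on coordinates ordered as (q1, q2, p1, p2)
    Omega = np.block([[np.zeros((2, 2)), np.eye(2)],
                      [-np.eye(2), np.zeros((2, 2))]])

    print("det J =", np.linalg.det(J))                         # 1.0 -> volume preserved, eq. (2)
    print("canonical?", np.allclose(J.T @ Omega @ J, Omega))   # False -> eq. (1) fails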
Instead, for $2n=2$, (1) and (2) are evidently equivalent.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/375072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 1
} |
Why is the normal force $(M+m)g$? I am trying to understand the solution to this problem. The problem asks to find F such that m stays fixed relative to M. In the solution, it is mentioned that the normal force for block M is (M+m)g, which I don't understand. I thought it was supposed to be only Mg.
The solution states - The normal force on the first block, M is Mg + u_2*F_bb = (M+m)g.
Definition of the normal force from Wikipedia - it is the component of the contact force that is perpendicular to the surface that an object contacts.
Since block M is in contact with 2 surfaces, is that why they are adding the Mg + u_2*F_bb?
I think I am just confused about the definition of the normal force and its application in this problem.
| I wanted to write this as a comment but it seems I don't have the reputation to do so, so here goes. Since it is given that the masses should be stationary relative to each other, to understand why the vertical normal force exerted on $M$ by the surface is $(M+m)g$, it is probably easier to treat the combination of the two masses as a single object of mass $M+m$ and draw a free body diagram for this object. The gravitational force acting downward is clearly $(M+m)g$, and the only other force acting in the vertical direction is the upward normal force exerted by the surface. Since there is no motion along the vertical, the normal force must equal $(M+m)g$.
As for why $m$ exerts a downward force on $M$, note that $M$ must exert an upward force of $mg$ on $m$ (due to friction) to keep it from sliding downward. This means that, by Newton's third law, $m$ exerts a force on $M$ of the same magnitude $mg$, but downward. You can also arrive at the conclusion that the normal force on $M$ must be $(M+m)g$ by using this fact and drawing a free body diagram for $M$, using the same reasoning as enumaris has described.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/375421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
An inverted bottle stops water flow, but does not when connected by a tube? I'm wondering why an inverted bottle doesn't overflow the container it is filling once the water reaches its opening, yet when a tube is used the water drains out completely, causing the container to overflow and the bottle to become crushed/implode?
I think it has something to do with raising the height of the bottle, which increases the amount of pressure of the water going into the container.
Given this, is there a way to maintain the water level when the water bottle is located much higher than the container?
| The tube system implodes the bottle because the height of the bottle above the lower reservoir determines the strength of the suction that develops inside the bottle. This follows from the laws of hydrostatics about which you can learn more on wikipedia. To prevent the bottle from imploding (and therefore draining out and overflowing the lower reservoir) at greater heights above the lower reservoir you need a more rigid bottle which can sustain more suction without collapse.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/375614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What is a minimal set of quantities fully describing the source of a magnetic field? Assume I would like to compare different magnetic fields without knowing what generates them. What is the minimal set of physical properties describing a field that would let me calculate all the other properties of this field?
I would definitely need to be able to find vector potential at every point ($\mathbf A(x,y,z) $).
Currently I am thinking that magnetic moment alone is enough, is this the case?
However, for example, this Wikipedia article comparing Earth's magnetic field to a dipole talks about three values of the vector $\mathbf B$ (radial, azimuthal, and the magnitude).
So what is the minimal set of quantities that is enough to derive the rest?
| There is no such set of minimal quantities, unless you know pretty much everything about the fields to begin with.
As a simple example, consider the magnetic field produced by a dipolar surface current distribution confined to the surface of a sphere of radius $a$, given by $\mathbf K(\theta,\phi) = K_0 \sin(\theta) \hat{\boldsymbol \phi}$: this will produce a purely dipolar magnetic field,
$$
\mathbf B(\mathbf r) = \frac{\mu_0}{4\pi}\frac{m}{r^3}\left[2\cos(\theta)\hat{\boldsymbol r}+\sin(\theta)\hat{\boldsymbol \theta}\right],
$$
whose amplitude only depends on the magnetic dipole moment $m\propto K_0 a^3$. That looks innocent, but it means that if you shrink the sphere to some smaller radius $b<a$ and increase the current density so that the dipole moment stays the same, then there will be absolutely no trace of the change in the magnetic field at positions outside the original sphere.
This is a generic feature, in that you cannot tell what the sources of a magnetic field are if you only know its behaviour in a limited region of space, and you therefore can't infer what's happening with the field in the regions of space you don't have explicit information about. (In the example above, the magnetic field is uniform inside the sphere - but a sphere of what size? You can't tell with data from $r>a$.)
Thus, to get that kind of information, you need $\mathbf B(\mathbf r)$ at all positions $\mathbf r$, and that information then encodes the current sources themselves through Ampère's law,
$$
\mathbf{J} = \frac{1}{\mu_0}\mathbf{\nabla} \times \mathbf{B}(\mathbf r).
$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/375789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What does electric potential mean in a circuit? As we know, the electric potential at a point is defined as the work done by me to carry a unit charge from infinity to that point. How can I use this definition in an electric circuit that contains a battery? Suppose the electric potential at a point in a circuit is 4 volts. What does this mean? Does it mean that the work done by me to carry a unit charge from infinity to that point in the circuit is 4 joules?
| When we talk about a potential we actually always mean a potential difference i.e. the difference from the potential at some convenient reference point. That's because we can only ever measure potential differences and not absolute values. So when you say:
electric potential at a point is defined as a work done by me to carry unit charge from infinity to that point
What this actually means is that the potential difference between infinity and some point $\mathbf r$ is the work done per unit charge to move a charge from infinity to that point.
Now suppose we have a battery with a potential $V$. What we mean by this is that to transport a unit charge from the cathode to the anode inside the battery takes an amount of work equal to $V$ i.e. $V$ is the potential difference between the terminals of the battery.
When we put the battery in a circuit we normally take the anode to be our zero point for the potential. So when we say the voltage at some point in the circuit is $V$ we mean that the potential difference between the anode and the point is $V$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/375909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Venus, Earth and Mars Magnetic fields Why does Earth have a magnetic field, while it appears that Venus and Mars have none or very little?
| A geodynamo requires a fluid that can carry a current. A widely held but incorrect explanation for Mars' lack of a magnetic field is that Mars' core is frozen solid. Gravitational observations of Mars show that its core is at least partially molten, just as is ours. While a frozen core would explain Mars' lack of a magnetic field, this explanation does not pan out.
A geodynamo also requires rotation. A widely held but most likely incorrect explanation for Venus' lack of a magnetic field is that Venus' rotation rate is too small. High fidelity geodynamo models show that Venus' rotation rate, although small, is large enough to have the potential to sustain a geodynamo.
A geodynamo also requires a sufficiently high heat flux from the liquid core to the mantle to get the fluid moving. This offers a more modern explanation for why Venus and Mars currently do not have an intrinsic magnetic field while the Earth does. Venus and Mars have stagnant lids: No active vulcanism, no active plate tectonics. Hypervulcanism is an extremely efficient mechanism for a planetary object to transfer heat from the core to the surface (and then to outer space). Plate tectonics is a good second option.
A lack of hypervulcanism and a lack of plate tectonics on the other hand results in very little heat escaping from the core. While the cores of Venus and Mars are hot, molten, and spinning, with minimal heat transfer to the mantle, conduction wins over convection. With little or no convection, there's not enough convective currents to sustain a geodynamo.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/376032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 1
} |
What are the initial conditions associated with solving the geodesic equation in General Relativity? Can we say that the initial conditions for solving the geodesic equation in general relativity are the initial velocity of a particle?
| The geodesic equation
$$\frac{d^2 x^{\mu}}{ds^2} + \Gamma^{\mu}_{\rho\sigma} \frac{dx^{\rho}}{ds}\frac{dx^{\sigma}}{ds}=0$$
is nothing more than a set of (coupled) second-order differential equations for the particle's coordinates as a function of some parameter $s$. The explicit solution
$$x^{\mu}(s)$$
requires an initial coordinate position $x^{\mu}(s_0)$ and an initial coordinate velocity $\frac{dx^{\mu}(s_0)}{ds}$. Note it is often not practical to find an explicit solution, and it is more useful to study various aspects of the equations.
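To illustrate what initial data the equation needs, here is a toy numerical sketch on an ordinary 2-sphere (a simple Riemannian example rather than a physical spacetime; the initial values are arbitrary):

    import numpy as np
    from scipy.integrate import solve_ivp

    def geodesic_rhs(s, y):
        """y = (theta, phi, dtheta/ds, dphi/ds) for the unit-sphere metric dtheta^2 + sin^2(theta) dphi^2."""
        th, ph, dth, dph = y
        # Geodesic equation written as a first-order system using the sphere's Christoffel symbols
        d2th = np.sin(th) * np.cos(th) * dph**2
        d2ph = -2.0 * dth * dph / np.tan(th)
        return [dth, dph, d2th, d2ph]

    y0 = [np.pi / 4, 0.0, 0.3, 1.0]            # initial position AND initial velocity
    sol = solve_ivp(geodesic_rhs, (0.0, 10.0), y0, max_step=0.01)
    print(sol.y[:2, -1])                       # final (theta, phi) along the great circle

Exactly the same structure applies in general relativity; only the Christoffel symbols change.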
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/376296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
What are the eigenvalues of $L_+$ and $L_-$? I'm studying angular momentum in quantum mechanics. My question involves the operators $L_+=L_x+iL_y$ and $L_-=L_x-iL_y$; in a problem I have a Hamiltonian, $H$, depending an $L_y$, $L^2$ and $L_z$. The solutions suggest to write $L_y$ as a combination of $L_+$ and $L_-$ and then, using the eigenvectors of $L_z$ and $L^2$, write the matrix associated with $H$, and then diagonalize the matrix. How is this possible? How can $L_y$ and $L_z$ be diagonalized in the same basis?
Sorry for bad English.
| If you write the matrix representation of $L_y$ in a basis where $L_z$ is diagonal, you should get (assuming $\ell=1$) something like
$$
\hat L_z=
\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & -1 \\
\end{array}
\right)\, ,\qquad
\hat L_y=\frac{1}{\sqrt{2}\,i} \left(
\begin{array}{ccc}
0 & 1 & 0 \\
-1 & 0 & 1 \\
0 & -1 & 0 \\
\end{array}
\right) \tag{1}
$$
Of course $\hat L_y$ is not a diagonal matrix, but there is nothing to prevent you from writing the Hamiltonian and diagonalizing the resulting matrix. The eigenstates will not be eigenstates of either $L_y$ or $L_z$, but that's not a big deal: they will be expressed as combinations of your basis eigenstates of $L_z$.
However, since $a L_y+b L_z$ commutes with $L^2$, the latter will still be diagonal.
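Here is a small numerical illustration of this procedure (in units of $\hbar$, with arbitrary illustrative coefficients $a$, $b$; an $L^2$ term would only add a multiple of the identity in the $\ell=1$ subspace):

    import numpy as np

    Lz = np.diag([1.0, 0.0, -1.0])
    Ly = (1 / (np.sqrt(2) * 1j)) * np.array([[0, 1, 0],
                                             [-1, 0, 1],
                                             [0, -1, 0]])
    L2 = 2.0 * np.eye(3)               # L^2 = l(l+1) = 2 for l = 1

    a, b = 0.7, 1.3                    # arbitrary coefficients in H = a*Ly + b*Lz
    H = a * Ly + b * Lz

    evals, evecs = np.linalg.eigh(H)   # diagonalize H in the |l=1, m> basis of Lz
    print("eigenvalues of H:", np.round(evals, 6))        # m * sqrt(a^2 + b^2), m = -1, 0, 1
    print("[H, L^2] = 0 ?", np.allclose(H @ L2 - L2 @ H, 0))

The columns of evecs are the eigenstates of H written as combinations of the $L_z$ eigenstates, as described above.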
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/376774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to get the fourth component of EOM in a relativistic formulation of a charged particle in an electromagnetic field? We consider in Lorentz spacetime, $(x^0,x^1,x^2,x^3)=(t,x,y,z)$, choose the unit of time such that $c=1$.
Given a four vector $A_\mu$, and let the Lagrangian
$$L(x^i,\dot x^i,t)=-m\sqrt{1-\dot x_i\dot x^i}+qA_0+qA_i\dot x^i,\tag{1}$$
where we use Einstein's convention for summation. (See video at 35:15 with electron charge $q=-e$.)
By the Euler-Lagrange equation
$$\frac{d}{dt}\frac{\partial L}{\partial\dot x^i}-\frac{\partial L}{\partial x^i}=0\tag{2}$$
we can get three equations, and if we use the fact that
$$\frac{d}{d\tau}=\frac{1}{\sqrt{1-\dot x_i\dot x^i}}\frac{d}{dt}\tag{3}$$
then these equations are, for $i=1,2,3$
$$m\frac{d^2x^i}{d\tau^2}=q(\frac{\partial A_\mu}{\partial x^i}-\frac{\partial A_i}{\partial x^\mu})\frac{dx^\mu}{d\tau}\tag{4}$$
My question follows, can we obtain a fourth equation which is just letting $i=0$?
The video at 59:30 says that this fourth equation follows from the property that the action is invariant under Lorentz transformations. But I do not find this very convincing. I tried to view the first three equations as equating the space components of two four-vectors, but I do not think this fact alone can lead to the equality of the remaining time components.
References:
*
*L. Susskind, Special Relativity, video lecture 7, May 21, 2012.
| The action you start with is
$$
S = \int d\tau L
$$
and
$$
L = - m \sqrt{ - \eta_{\mu\nu} {\dot x}^\mu {\dot x}^\nu } +q {\dot x}^\mu A_\mu \, .
$$
This action has a gauge symmetry, which is reparameterization invariance, $\tau \to \tau'(\tau)$. In order to write down your Lagrangian you choose the gauge $\tau = x^0 = t$.
Once in this gauge, you can derive equations of motion for $x^i$ but not $x^0$ because you have gauge-fixed it. These are the ones you have written down.
However, since you are fixing a gauge, there's going to be a gauge constraint which is essentially the equation of motion derived w.r.t. $x^0$ in the first action. This is the equation you are missing.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Where does the fine structure constant come from? I have this question: Where does the fine structure constant come from? Is it derived? Is it assumed? I will be most thankful if you will also include other detailed info that you think is also good to know, or just suggest a reading on it.
| The electrostatic force between two point charges $q_1,\,q_2$ separated by a distance $r$ is proportional to $q_1 q_2 r^{-2}$, but it has the same dimension as $\hbar c r^{-2}$. Therefore, a dimensionless value $\alpha$ exists for which the force between two "unit" charges (e.g. electrons) is $\alpha \hbar c r^{-2}$. Equating this to $ke^2 r^{-2}$ with $k:=(4\pi\varepsilon_0)^{-1}$ gives $\alpha = ke^2(\hbar c)^{-1}$.
There's a further subtlety. A charged particle orients pairs of charged virtual particles, attracting one member of each pair and repelling the other. For example, virtual positive charges end up slightly closer to an electron than their partner negative charges do. This shields bare charges, so empirical charges depend on the probed length scale and hence on the probing energy scale. Thus $\alpha\propto e^2$ is a "running" coupling parameter, approximating $1/137$ at low energies. If you're looking for a theoretical answer as to why that value arises, we don't have one yet.
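A minimal check of the low-energy value from standard constants:

    import math

    e = 1.602176634e-19        # elementary charge, C
    eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
    hbar = 1.054571817e-34     # reduced Planck constant, J s
    c = 2.99792458e8           # speed of light, m/s

    alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
    print(alpha, 1 / alpha)    # ~0.0072974, ~137.036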
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 1
} |
Why do we use capacitors and not batteries in defibrillator? Why do we use capacitors in defibrillators and not batteries?
I know that capacitors are used to store electrical energy but isn't the function of a battery just the same?
Moreover, I know that batteries are used to make capacitors work in a defibrillator, but isn't a battery just enough to make it work? Why is a capacitor so fundamental in a defibrillator?
And the last thing that makes my doubts stronger is that a battery normally has a much higher voltage compared to a capacitor.
| The defibrillator requires a high voltage to do its job. Ordinarily this would require a very large battery stack (hundreds of individual cells) to achieve the voltage requirement. Instead, defibrillators use a smaller battery pack to drive a chopper circuit that steps the voltage up through a transformer, after which the result is rectified, filtered, and stored in a low-leakage capacitor bank. This minimizes the weight and bulk of the machine as well as its cost.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "64",
"answer_count": 6,
"answer_id": 1
} |
Small internuclear separation limit for a diatomic molecule Let's take a simple $H_2^+$ molecule, where there is only one electron, which is $r_a$ away from the first proton and $r_b$ away from the other one.
Let’s call the separation between the two protons $R$.
As $R\rightarrow \infty$, the electron will stick to one of the two protons, so the wavefunction will be: $$ \phi = N_{\pm}(1s_a \pm 1s_b).$$
I recognise the two solutions as the gerade and ungerade orbitals. The $1s$ means ground state around each proton.
I can work out the normalisation constant to be $$ N_{\pm} = \sqrt{\frac{1}{2(1\pm S)}}, \, S = \int 1s_a 1s_b \mathrm{d}^3 r $$
Now in the limit of $R \rightarrow 0$, the gerade solution becomes just $1s$ which makes sense, but the ungerade is not defined - what happens to it?
| Let $\Psi(\vec{r})$ be $1s_a$ wave function and $\Psi(\vec{R}+\vec{r})$ be $1s_b$ wave function. As $\vec{R} \to 0$ we have for ungerade state:
$$
\phi_{-}(\vec{r}) = N_{-}(\Psi(\vec{R}+\vec{r})-\Psi(\vec{r})) \approx N_{-} \nabla\Psi(\vec{r})\vec{R}
$$
Thus the limit (not normalized) is $\nabla\Psi(\vec{r})\vec{n}$, where $\vec{n} = \vec{R}/|\vec{R}|$. This function has a nodal plane, as Buzz said. It also has a discontinuity at $\vec{r} = 0$. But I think this function does not coincide with a $2p$ state.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is it possible to have a non-mathematical explanation of the dependence of the pair production cross section on energy? The cross section for pair production in photon interactions increases with energy. But why does that happen? I want a non-mathematical answer to this.
| A photon, no matter how much energy it has, will not turn into a pair of particles on its own, because of conservation of momentum and energy. The photon does not have a center of mass because it has no rest frame. Any particle-antiparticle pair will have a rest frame, so there is a reductio ad absurdum.
In order for a pair to appear there must exist a target nucleus which will interact with the gamma and together with the pair obey energy and momentum conservation, the nucleus taking up momentum and energy.
So any cross section will depend on the nucleus, i.e. the target, that the gamma hits. There are tables on this, depending on the nucleus.
This is a simple Feynman diagram that also shows pictorially what is necessary to get a cross section to first order:
Feynman Diagram of electron-positron pair production. One can calculate multiple diagrams to get the cross section
Thus the cross section will depend on the field of the particular nucleus. For example, this plot (fig 2.2) has the cross sections calculated for pair production in carbon, for the specific needs of an experiment.
The cross section rises with energy because the higher the energy of the gamma, the more deeply it penetrates the nucleus, where the electric fields are stronger ($1/r^2$). Higher frequencies probe smaller distances, and the probability of interacting with the electric field is larger. There is a saturation when the gamma energy is high enough that it scatters off the quarks which exist in the nuclei and creates jets; the specific pair production cross section then reaches a plateau. Also, as the energy rises, more pair production channels open up.
This is about as non-mathematical as I can get.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/378077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
In pilot wave theory where is the wave? As a non specialist, for a single particle system it's easy to appreciate the concept of a pilot wave extending through all Euclidean space, guiding a particle which ends up at a location determined by the pilot wave and its initial location.
For multiple particles however the wave would presumably need more dimensions to reflect the configuration space of the system.
Is this correct, and if so where does the pilot wave reside?
A related question may be, if quantum computers give an exponential speedup for factorization, then according to pilot wave theory where does the computation take place?
| The "pilot wave" is just the usual wave function of quantum mechanics.
If you have $N$ spinless particles, it is a map: $\psi: \mathbb{R}^{3N} \to \mathbb{C}$. This means it lives in the 3N-dimensional configuration space of the particles. I should emphasize that this is not a specialty of pilot wave theory but just the usual framework of quantum mechanics.
The special thing in pilot wave theory are the additional actual particles that actually have positions in $\mathbb{R}^3$ and thus link the abstract object $\psi$ with objects that can be thought of as moving in our physical world, as the objects that tables, chairs and people are made of.
I don't totally understand the intent of your last question, but let's try: The obvious answer is that if a computer computes something, this computation takes place inside of the computer. You know, configuration space is not really a "place" in any reasonable sense. It's an abstract mathematical way of describing the things that happen when, e.g. in a quantum computer, particles move back and forth.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/378246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Wave in expanding medium How is the behaviour of a wave modelled in a medium that is expanding faster than the wave is propagating within it?
I ask obviously because of the applicability of the question to the concept of an expanding universe.
Also, if energy cannot be created or destroyed but only transferred, and if energy has been lost from a part of the system in the emission of the wave, where is (or what is the nature of) the double-entry for the part of the system that gains energy, once all further parts of the system become unreachable for the wave due to the expansion of its medium?
| As the wave is moving, it loses energy due to various forces like friction acting on it. Hence, the wave will come to a stop in the end.
Also, if energy cannot be created or destroyed but only transferred, and if energy has been lost from a part of the system in the emission of the wave, where is (or what is the nature of) the double-entry for the part of the system that gains energy, once all further parts of the system become unreachable for the wave due to the expansion of its medium?
The double-entry part of the system will be at the end of the wave, since the wave will ultimately stop moving within the ever-expanding medium; it cannot go on forever.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/378405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Mixing Gases for laser Hello Stack Exchange community! I finally found a way to ionize air for less than $200! One small problem: mixing the gases is turning out to be very difficult, and I don't know if this is the right place to ask, but here we go. What is the best way of mixing gases like CO2, air, and helium in a 1:1:6 ratio? There is a Russian I am talking to about my laser and he said he uses a car tire to mix them, but that's not a very clean way of doing things. What's the best way to mix these gases?
| This really isn't the correct forum. Engineering would better address your question, but here are my thoughts.
If accuracy of the mix is critical, then your best approach is to use flow feedback controllers with mass flow meters, all feeding into your mixing chamber or manifold.
But it sounds like you are trying to keep the price low, so consider an open-loop approach: inexpensive pressure regulators and orifice plates for each gas. You can calculate the required orifice diameter given the upstream/downstream pressures. It is best to keep the flow sonic (critical) so that the mass flow rates are not much affected by downstream pressure fluctuations.
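For the open-loop route, a rough orifice-sizing sketch using the standard choked (sonic) flow relation (the 3 bar upstream pressure, room temperature and discharge coefficient of 0.8 below are assumptions to adjust to your actual setup):

    import math

    def choked_molar_flow(d_mm, p0_bar, T0, gamma, M_molar, Cd=0.8):
        """Choked (sonic) flow through a sharp orifice, returned in mol/s."""
        R = 8.314 / M_molar                  # specific gas constant, J/(kg K)
        A = math.pi * (d_mm * 1e-3)**2 / 4   # orifice area, m^2
        p0 = p0_bar * 1e5                    # upstream pressure, Pa
        mdot = (Cd * A * p0 * math.sqrt(gamma / (R * T0))
                * (2 / (gamma + 1)) ** ((gamma + 1) / (2 * (gamma - 1))))
        return mdot / M_molar

    T0, p0 = 293.0, 3.0                      # assumed room temperature and 3 bar upstream
    for name, gamma, M in [("CO2", 1.30, 0.044), ("air", 1.40, 0.029), ("He", 1.66, 0.004)]:
        print(name, choked_molar_flow(0.3, p0, T0, gamma, M), "mol/s through a 0.3 mm hole")
    # Choose the three diameters so that these molar flows come out in the desired 1:1:6 ratio.

Keeping each orifice choked (upstream pressure at least roughly twice the downstream pressure) is what makes the rates insensitive to pressure fluctuations in the mixing volume.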
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/378532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
why is the photopeak at a higher energy than the Compton edge? Why does the photoelectric effect deposit more energy than interactions via Compton scattering?
Or the other way around: Why does the photopeak sit to the right of (at a higher energy than) the Compton edge?
https://en.wikipedia.org/wiki/Compton_edge
I know that the interactions vary with the incident photon energy (from the photoelectric effect to the Compton effect to pair production). Therefore, I thought that Compton scattering and pair production deposit more energy?!
| In short, the photopeak is formed in the case of complete absorption of the gamma ray's energy in the scintillator or detector, while the Compton edge is the maximum amount of energy absorbed by the scintillator in the process of Compton scattering, where there is an incomplete absorption of the gamma ray's energy as it scatters off of the detector.
Let's call the energy of the incident gamma ray $E$. The photopeak occurs when the amount of energy transferred to the scintillator or detector, $E_{T}$, is equal to the energy of the incident gamma ray,
$$E_{T}=E.\tag{1}\label{1}$$
In a Compton scattering process, the amount of energy exchanged by the gamma ray and an electron in a material depends on the angle that the gamma ray is scattered through, and is given by the formula
$$\frac{1}{E'}-\frac{1}{E}=\frac{1}{m_{e}c^{2}}(1-\cos\theta),$$
where $E$ is still the energy of the incident gamma ray, $E'$ is the energy of the scattered gamma ray, $m_{e}$ is the mass of the electron, $c$ is the speed of light, and $\theta$ is the angle that the gamma ray is scattered through. This can also be written
$$E'=\frac{E}{1+\frac{E}{m_{e}c^{2}}(1-\cos\theta)}.$$
The amount of energy exchanged is
$$E_{T}=E-E'.\tag{2}\label{2}$$
We see that the amount of energy exchanged is maximized when $\theta$ approaches $180$ degrees, at which point the energy transferred is
$$E_{T}=E(1-\frac{1}{1+\frac{2E}{m_{e}c^{2}}}).$$
We call this maximum amount of energy absorbed during Compton scattering the Compton edge.
Since the energy of the scattered gamma ray must be positive, we see that the energy transferred to the detector through Compton scattering $\eqref{2},$ which has its maximum at the Compton edge, is necessarily smaller than the energy transferred when all of the gamma ray's energy is absorbed by the detector $\eqref{1}.$
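As a worked example (taking the 662 keV gamma line of Cs-137, a common calibration source, purely for illustration):

    me_c2 = 511.0          # electron rest energy, keV

    def compton_edge(E):
        """Maximum energy (keV) a gamma of energy E (keV) can deposit by Compton scattering."""
        return E * (1 - 1 / (1 + 2 * E / me_c2))

    E = 662.0              # Cs-137 gamma energy, keV
    print(f"photopeak at {E:.0f} keV, Compton edge at {compton_edge(E):.0f} keV")
    # The Compton edge comes out near 478 keV, clearly below the full-absorption photopeak.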
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/378672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Stone-von Neumann theorem According to Stone-von Neumann theorem, any two canonically conjugate self adjoint operators following the relation: $$[\hat{q},\hat{p}]=i\hbar$$ cannot be both bounded.
I am confused about how we prove this part and what does it mean physically?
Can anyone explain?
| I commented that the Stone-von Neumann theorem is not a proof for the statement in the beginning of the question. The original proofs of the Wielandt-Wintner theorem (incidentally proved only in 1947-1948, while the Stone-von Neumann theorem had a satisfying proof by von Neumann already by 1931) are found in:
Wintner, A. - The Unboundness of Quantum-Mechanical Matrices (1947, The Physical Review, Vol. 71, p. 738-739)
Wielandt, H. - Über die Unbeschränktheit der Operatoren der Quantenmechanik (1948, Mathematische Annalen, p. 21).
The essence of Wielandt's proof is given in note 6 of the quoted Wikipedia page.
The significance of having unbounded operators of coordinate and momentum on the real axis (1D) is that the particle's "quantum motion" is unrestrained, in the sense that either the coordinate or the momentum can be measured to an arbitrarily high value (infinite in the limit); i.e., mathematically, unbounded operators do not have a bounded spectrum.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/378951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Does radiation cause a change in temperature? If yes, then is there a limit to the temperature decrease? If no, then can the body which radiates heat attain an absolute zero temperature?
| Everything is gaining and losing heat all the time, partly by radiation, and partly by other processes, such as conduction. The temperature of an object changes until all of these heat fluxes sum to zero, at which point it is in equilibrium and the temperature remains constant. If you could put an object in an infinitely large, utterly empty space, so that there was no radiation produced by anything but the object, then yes, it would radiate its energy away. But there is no infinite empty room. Even in outer space there is cosmic background radiation, so the object would never drop much below 3 K even there. At that point, it would be gaining heat from absorbed radiation as fast as it lost it from emitted radiation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/379105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Is Mass Flow an Additive Property? Mass ($m$) is an additive property in the sense that the total mass within a system can be simply determined by adding up the mass of each individual substance that's in it.
However, if two mass flows ($m/t$) of different liquids were to be mixed to form a single flow, would the resulting mass flow be equal to the sum of each individual mass flow?
| (Assuming that by "mass flow" you mean the mass flow rate $\dot m$)
Yes. Since mass is a conserved quantity, it obeys the continuity equation in the form
$$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf j =0$$
where $\rho$ is the mass density and $\mathbf j$ is the mass flux.
As a consequence, if two flows with mass flow rates $\dot m_1, \dot m_2$ mix in a single flow $\dot M$, you will have
$$\dot m_1 + \dot m_2 = \dot M$$
If this wasn't true, it would mean that some mass was lost or created during the mixing.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/379195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Tesellation: What does the trace of a rotation matrix means? The crystallographic restriction theorem says that you cannot have a periodic lattice with $n$-fold rotation symmetry, with $n$ different from 1,2,3,4 and 6 (for 2D and 3D).
There are many ways to prove the theorem, see the Wikipedia article. I understand some of them, but one of the proofs goes like this:
Consider a periodic lattice that is symmetric with respect to $n$-fold
rotations around a given axis. The trace of the matrix associated to
the spatial rotation around the given axis is either $2\cos\left(\frac{2\pi}{n}\right)$
(2D) or $1+2\cos\left(\frac{2\pi}{n}\right)$ (3D). As the rotation matrix maps lattice
points into other lattice points, the trace has to be an
integer. The only solutions to this condition are $n$ equal
to 1,2,3,4 or 6.
I understand the solution, and why the trace takes that form, simply by writing out the rotation matrix, but I would like more insight into why the trace has to be an integer in order for the rotation to represent a symmetry operation of the lattice.
In general, is there any meaning to trace=integer?
| Consider the transformation of a set of primitive translation vectors $e_a$, $a=1...d$, of a $d$-dimensional lattice under a rotation $O$:
$$
Oe_a = \sum_{b=1}^d k_{ab}\ e_b.
$$
If the rotation is a symmetry of the lattice, then the coefficients $k_{ab}$ are integers. Thus the rotation matrix written in the $e_a$ basis has integer elements and an integer trace. The trace is invariant under a change of basis. Hence the rotation matrix written in any basis has an integer trace.
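A quick numerical check of which rotation orders give an integer trace (using the 2D trace $2\cos(2\pi/n)$; the 3D trace just adds 1 and does not change the conclusion):

    import math

    for n in range(1, 13):
        tr = 2 * math.cos(2 * math.pi / n)   # trace of a 2D rotation by 2*pi/n
        if abs(tr - round(tr)) < 1e-9:
            print(f"n = {n:2d}: trace = {round(tr):+d}")
    # Only n = 1, 2, 3, 4 and 6 survive, as stated by the crystallographic restriction theorem.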
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/379310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Physical significance of the zeroth component of 4-velocity and 4-force Is there any physical significance of the zeroth component of the four-velocity vector and the four-force vector? I understand that the space part of $u^\mu$ is related to the ordinary velocity and the space part of $F^\mu$ is the usual force. But is there any physical quantity related to the zeroth components of $u^\mu$ and $F^\mu$?
The zeroth component of the four-momentum $p^\mu$ is the energy. So, similarly, is there any physical significance to the $u^0$ and $F^0$ components?
| The zeroth component of a 4-vector is often referred to as its "time-like" component because it is analogous to the time axis in $(t,x,y,z)$ spacetime. So, physically speaking, components such as $u^0$ or $F^0$ are simply the same as their spatial cousins, up to a factor of $c$ (m/s) for dimensional consistency ($x,y,z$ are measured in metres, whereas $t$ is measured in seconds). On a deeper level, temporal and spatial coordinates can be treated similarly due to the fact that the speed of light remains unchanged regardless of the frame of reference. For that to be possible, the norm of the four-velocity must remain the same under any spacetime transformation, which implies a shift in time: time becomes relative and loses its absoluteness.
For a more visual explanation, I'd suggest Henry Reich's YouTube playlist on special relativity: https://www.youtube.com/watch?v=ajhFNcUTJI0&list=PL712E709B05086D32
If you wish to learn more about the mathematics of special relativity, the Lorentz transformation would be a good start:
*
*Lorentz transformation: https://en.wikipedia.org/wiki/Lorentz_transformation
*Derivation: https://en.wikipedia.org/wiki/Derivations_of_the_Lorentz_transformations
I hope this was useful!
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/379441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
How to measure a static electric field? I looked on Google but didn't find any design for measuring an electric field that doesn't vary with time.
My own idea is to use two parallel plates (like a capacitor but without the dielectric). In an electric field E a potential difference V = Ed (d is separation between the plates) will develop, which can be measured using a voltmeter. Will this work?
| According to this source, there are electric field probes based on three orthogonally placed dipole antennae. Such probes have applications ranging from measuring radiation levels in fields to satellite detection of earthquakes.
A dipole often orients itself in the direction of the electric field.
Thinking of something a bit wild here, a detector you could carry: in theory, you should be able to measure a static electric field using a light, charged metal wool in a glass sphere.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/379682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Why are two solutions to the field equations necessary to get the full Schwarzschild metric? For a long time I've wondered why it was/is necessary to have separate solutions to the field equations for the interior and exterior metrics of a Schwarzschild black hole. Is there something weird going on at the event horizon that makes a single solution mathematically impossible? Has anyone ever found a single solution? It just seems odd that it would be necessary to find two separate solutions and then join them at the horizon. I'm not a mathematician so I would appreciate a general, non-technical answer if that's possible.
| They're not two separate solutions. It's just that when you express them in a particular set of coordinates, the Schwarzschild coordinates, the coordinates misbehave at the horizon. There are other coordinates, such as the Kruskal-Szekeres coordinates, that don't have this problem.
The other thing to realize is that it just isn't normally possible to cover a manifold with one set of coordinates and have the coordinates be well behaved everywhere. If you impose x-y Cartesian coordinates on North America, they will end up misbehaving if you try to extend them to cover the whole globe. Latitude-longitude coordinates misbehave at the poles.
By the way, the two regions of spacetime that you have in mind are only half of the maximally extended Schwarzschild spacetime.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/379959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is chewing gum only elastic for a brief period when pulling it out of your mouth? Assume you are chewing some gum and pull it out of your mouth like so:
If you release the gum quickly it will spring back to your mouth as if it is elastic, but if you leave it for a few seconds then release it will just fall down like a piece of string. What is happening in those few seconds to get rid of its elasticity?
| Gum acts springy on short timescales and like a very viscous liquid on longer timescales. That is, its stress-strain behavior is time-dependent. Here is why:
The gum consists of long molecules with kinks and bends in them, oriented in random directions and tangled up with one another. When you quickly pull on the gum, the bent molecules unbend like springs and also get snagged against their neighbors, urging them to unbend too, and they offer resistance, pulling back like springs. But with just a little time, they begin to slowly untangle and slip past one another, relieving the spring stresses, and then they flow like goop. This is called viscoelastic behavior and is common in rubber-like materials.
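A common minimal way to put numbers on this short-time/long-time behaviour (a generic caricature, not a fitted model of chewing gum) is the Maxwell model: a spring in series with a viscous dashpot, whose stress after a sudden, held stretch relaxes as $e^{-t/\tau}$:

    import math

    E_mod = 1e5          # illustrative elastic modulus, Pa
    eta = 2e5            # illustrative viscosity, Pa s
    tau = eta / E_mod    # relaxation time, s (2 s here)
    strain = 0.5         # sudden stretch, then held constant

    for t in [0.0, 0.5, 1.0, 2.0, 5.0, 10.0]:
        stress = E_mod * strain * math.exp(-t / tau)
        print(f"t = {t:4.1f} s : stress = {stress:9.1f} Pa")
    # At t << tau the material pulls back like a spring; for t >> tau the stress has relaxed away.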
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/380576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Gauss's Law for Gravity to find the Gravitational field of a finite rod To find the gravitational field at Point P in the figure:
One solution is to draw the field of a mass $$\mathrm{d}\vec{g} =\frac{G\,\mathrm{d}m}{r^2}$$ and integrate over $\mathrm{d}m$, adding vectorially.
However, if one uses Gauss's Law for Gravity: $$\oint \vec{g} \cdot \mathrm{d}\vec{A} = -4 \pi G M$$ one can find the field of the vertical rod easily (by considering a Gaussian cylinder centered on the rod with height $2a$ and radius $2a$).
My question is, is there a way to apply it to the horizontal rod as well?
Thanks.
| Gauss's law is important, and when first introduced it is applied to simple situations to show that it predicts values for gravitational fields which are consistent with those found using the inverse square law.
So you start with a point mass and draw a Gaussian sphere centred on the mass.
Applying Gauss's law is straightforward because the gravitational field is perpendicular to the surface and has a constant magnitude, so the product $\vec g \cdot d \vec A = g\,dA$, and hence the magnitude of the field can be found.
Now look at trying to find the gravitational field due to two point masses.
So the red Gaussian cylinder might be the first choice; however, two problems are immediately encountered if one wants to use Gauss's law.
The gravitational field lines are not at right angles to the surface, and the magnitude of the gravitational field is not the same at each point on the Gaussian surface.
The green lines are equipotential surfaces, which are at right angles to the field lines and so could be used as the Gaussian surface, but think of the complexity of doing the integration.
Your rod can be thought of as a series of point masses all lined up in a row, adding to the complexity of the integration.
So using Gauss's law for both masses at once is going to be a non-starter.
Better to use Gauss's law (or the inverse square law) for each mass individually and then find the resultant field by superposition.
This illustrates the point that simple application of Gauss's law requires the system to have symmetry, and the relatively simple examples where Gauss's law is used are carefully chosen.
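To make the superposition route concrete, here is a minimal numerical sketch; since the original figure is not reproduced here, the geometry below (a rod of mass $M$ and length $2a$ along the x-axis, with the field point at $(-a, a)$) is purely illustrative:

    import numpy as np

    G, M, a = 6.674e-11, 1.0, 1.0
    N = 200000

    x = np.linspace(0.0, 2 * a, N)                 # mass elements of the horizontal rod
    dm = M / N
    P = np.array([-a, a])                          # assumed field point

    rvec = np.stack([x, np.zeros_like(x)], axis=1) - P   # vectors from P to each element
    r = np.linalg.norm(rvec, axis=1)
    g = np.sum(G * dm / r**3 * rvec.T, axis=1)     # vector sum of G dm r_hat / r^2, pointing toward the rod

    print("g at P =", g, " |g| =", np.linalg.norm(g))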
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/380675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Is the notion of Lebesgue Measure a necessary construct for statistical physics? In chat last night a user and I were discussing the "physical" meaningfulness of the notion of Lebesgue measure. In particular, we were curious as to whether physicists can "make do" without it. I mentioned that the dominated convergence theorem is needed to prove certain theorems in statistics that would be needed in areas like statistical thermodynamics, where you want to know that when dealing with a huge number of particles things like velocity/energy are approximately normally distributed (Central Limit Theorem). We were then surprised to find a proof of the CLT that not only was free of the DCT, but was formulated entirely in terms of the Riemann integral.
My question is: Are there any specific areas in physics that rely on the notion of the Lebesgue measure (either directly or indirectly, via theorems whose proofs require this notion)? To the point of being necessary and not merely useful?
| edit I have edited the answer to deal with some of the criticism in the comments
To the extent that the Lebesgue measure is needed to define Lebesgue integration, it is central to Quantum Mechanics: in general we require wavefunctions, as a function of position, to be Lebesgue square integrable.
More specifically, in QM states correspond to rays in a Hilbert space. Hilbert spaces are complete inner product spaces, and the Lebesgue integral is required to complete the relevant Hilbert space, see When is Lebesgue integration useful over Riemann integration in physics?
It is true that there is nothing special about the position basis, but this requirement cannot be escaped: the Lebesgue measure is necessary to define an appropriately square normalisable wavefunction in the position basis.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/380814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Primordial Black Holes mass difference with isomass stellar black holes How can we distinguish, for a given mass (measured from gravitational-wave experiments and/or other experiments) of a black hole or black hole binary, whether they are primordial, stellar black holes, or of some other weird origin?
| That is very difficult. A black hole time dilates and redshifts radiation emitted by objects so it becomes virtually impossible to detect. As a result a black hole formed in the big bang and one formed by stellar collapse appear indistinguishable. The classical idea of a black hole is that it has “no hair,” which is to say there are no features other than mass, angular momentum and charge that defines the black hole. There is no additional “hair” on the horizon that distinguishes one black hole from another.
I have written an essay for the FQXi contest on detecting quantum hair on black holes in the gravitational wave signature generated by the coalescence of two black holes. The classical gravitational field acts as a form of Heisenberg microscope that amplifies quantum hair on the horizon. The condition of this collision provide the conditions so these signatures are propagated in gravitational radiation. I leave the detailed reading up to the reader's interest in reading my paper. There is also a supplementary segment for mathematical details. Unfortunately this is fairly complicated and mathematical. It also involves some of the mathematics of Maryam Mirzakhani in her work on geodesics on hyperbolic spaces. The near horizon condition of a black hole is $AdS_2\times \mathbb S^2$ and this leads to some of this analysis with hyperbolic geometries.
This will mean black holes have in their quantum details a large amount of information. It might then be possible to distinguish black holes formed by stellar collapse and putative primordial black holes. This is only possible under black hole coalescence. I am not sure though about what would happen if a primordial black hole and one formed by stellar collapse merge.
This is somewhat conjectural and I suspect there are those who will down vote it.
LC
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/380897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Has anyone driven a bell or tuning fork using light? In principle a metal bell or tuning fork of sufficiently high quality factor could be driven by audio frequency radio waves of sufficient power to produce an audible hum. Has this been done, yet? If not, what combination of quality factor and transmission power would be needed to do this? Ideally, this would be done using far field components, so it would be photons and not the sort of near-field driving done in induction based transformers, but this requirement is not essential.
In principle, this is how ordinary speakers work, of course, as electromagnetic coils drive forces on magnets that are coupled to membranes. The main purpose of the question is about having that visible gap, and the object that is ringing for "no reason".
| You can use, for example, the photoacoustic effect in a gas inside a resonator (https://www.ibp.fraunhofer.de/content/dam/ibp/de/documents/Kompetenzen/Akustik/Photoakustik/pdf1_tcm45-48829.pdf)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/381076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |