Confusion regarding the finite square well for a negative potential Consider the finite square well, where we take the potential to be $$V(x)=\begin{cases}
-V_0 & \text{for}\,\, |x| \le a \\
\,\,\,\,\,0 & \text{for}\,\, |x|\gt a
\end{cases}$$ for a positive constant $V_0$.
Within the square well the time-independent Schrödinger equation has the form $$-\frac{\hbar^2}{2m}\frac{d^2 u}{dx^2}=(E-V)u=(E+V_0)u\tag{1}$$
While outside the square well the equation is
$$-\frac{\hbar^2}{2m}\frac{d^2 u}{dx^2}=Eu\tag{2}$$ with $E$ being the total energy of the wavefunction $u$ where $u=u(x)$.
The graph of the potential function is shown below:
Rearranging $(1)$ I find that
$$\frac{d^2 u}{dx^2}=-\underbrace{\bbox[#FFA]{\frac{2m}{\hbar^2}(E+V_0)}}_{\bbox[#FFA]{=k^2}}u$$
$$\implies \frac{d^2 u}{dx^2}+k^2u=0\tag{3}$$
with
$$k=\frac{\sqrt{2m(E+V_0)}}{\hbar}\tag{A}$$
So equation $(3)$ implies that there will be oscillatory solutions (sines/cosines) within the well.
Rearranging $(2)$ I find that
$$\frac{d^2 u}{dx^2}=-\underbrace{\bbox[#AFA]{\frac{2m}{\hbar^2}E}}_{\bbox[#AFA]{=\gamma^2}}u$$
$$\implies\frac{d^2 u}{dx^2}+\gamma^2u=0\tag{4}$$
with
$$\gamma=\frac{\sqrt{2mE}}{\hbar}\tag{B}$$
But here is the problem: Equations $(4)$ and $(\mathrm{B})$ cannot be correct since I know that there must be an exponential fall-off outside the well.
I used the same mathematics to derive $(4)$ & $(\mathrm{B})$ as $(3)$ & $(\mathrm{A})$. After an online search I found that the correct equations are
$$\fbox{$\frac{d^2 u}{dx^2}-\gamma^2u=0$}$$
and
$$\fbox{$\gamma=\frac{\sqrt{-2mE}}{\hbar}$}$$
Looks like I am missing something very simple. If someone could point out my error or give me any hints on how I can reach the boxed equations shown above it would be greatly appreciated.
EDIT:
One answer mentions that the reason for the sign error is due to the fact that $E\lt 0$ inside the well, so I have included a graph showing the total energy (which is always less than zero inside or outside the well):
EDIT #2:
In response to the comment below. If I place $E\lt 0$ in equation $(4)$ (outside the well) I will have to also make $E\lt 0$ in equation $(3)$ (as $E\lt 0$ inside the well also) and so equation $(3)$ will become $$\frac{d^2 u}{dx^2}-k^2u=0$$ which is clearly a contradiction as this no longer gives oscillatory solutions (plane waves) inside the well.
| You are just looking at the general solutions of the Schrödinger equation inside and outside the well. In principle, however, you have to solve the eigenvalue problem for the whole system: impose the boundary conditions at the well edges and at infinity to find the allowed energy eigenvalues $E$ and the corresponding wavefunctions. Equations $(4)$ and $(\mathrm{B})$ are correct, because you get exponentially decaying wavefunctions outside the well only for an eigenvalue $E<0$, which corresponds to a bound state. When you assume $E>0$, you have propagating waves both inside and outside the well.
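A small numerical sketch of this sign logic (illustrative units with $\hbar = m = 1$ and an assumed well depth; not physical values): for a bound state $-V_0 < E < 0$, so $k=\sqrt{2m(E+V_0)}/\hbar$ is real (oscillatory inside) while $\sqrt{2mE}/\hbar$ would be imaginary, which is why the real decay constant outside is written $\gamma=\sqrt{-2mE}/\hbar$.

```python
import math

# Illustrative (non-physical) unit choices: hbar = m = 1, well depth V0 = 10.
hbar, m, V0 = 1.0, 1.0, 10.0
E = -4.0  # a bound-state energy must satisfy -V0 < E < 0

# Inside the well E + V0 > 0, so k is real -> oscillatory sin/cos solutions.
k = math.sqrt(2 * m * (E + V0)) / hbar

# Outside the well E < 0, so 2mE < 0 and equation (B) as first written would
# give an imaginary number; the real decay constant is gamma = sqrt(-2mE)/hbar,
# giving u(x) ~ exp(-gamma*|x|) fall-off.
gamma = math.sqrt(-2 * m * E) / hbar

print(k, gamma)
```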
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/284879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Must all types of clocks agree on the time that has passed since the spaceship started accelerating? The above would be true if the spaceship were not accelerating, because SR says that the physical laws are all the same in inertial frames of reference. If there were a gap between the readings of the two clocks, a person on the spaceship could measure that gap and calculate his velocity, contradicting SR. So the time on all clocks must be the same, irrespective of mechanism, in an inertial frame. But does the same hold true for non-inertial frames?
Edit : The clocks are at the same height.
| This depends on exactly what you are asking. Clocks at different heights in an accelerating spaceship run at different rates. This is discussed in the question Which clock is the fastest inside an accelerating body?.
All clocks are affected by the time difference in the same way, so it doesn't matter whether the clock is atomic, a light clock, a mechanical clock or whatever, all clocks at the same height in the accelerating spaceship will run at the same rate. However a light clock is traditionally quite large i.e. the light travels some large distance then reflects back, and we measure the time by the duration of the light beam's round trip. If the light beam travels up the accelerating spaceship then it will pass through regions of measurably different time dilation and the time it measures will differ from the time measured by a more compact clock.
So if the light clock is small, or if it's arranged so the light beams stay at the same height as they travel, then the light clock will measure the same time as all the other clocks. If the light clock is large and the light beams travel vertically then there will be a difference.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/284981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why do bands arise from a lattice of two-site "atoms"? What is the basic reason why an “atom” with a trapping potential with two bound states becomes a system with two bands when a large number of such atoms are assembled into a lattice?
| If you couple a large number N of identical atoms, each with two energy levels, so that they can interact, each level will split into a band of N closely spaced levels. This is analogous to the classical case of two coupled identical oscillators, where the oscillation frequency splits into two frequencies near that of the uncoupled oscillators; N coupled oscillators have N closely spaced oscillation frequencies.
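A minimal tight-binding sketch of this level-to-band broadening (all parameters assumed, with the hopping $t$ small compared with the level spacing so the two bands do not overlap; the open-chain eigenvalues have the closed form $\epsilon + 2t\cos\!\big(n\pi/(N{+}1)\big)$):

```python
import math

# N identical "atoms", each with two levels e1 and e2, coupled to nearest
# neighbours with hopping t (assumed values).
N, e1, e2, t = 50, 0.0, 5.0, 0.5

# Open-chain tight-binding eigenvalues for a single atomic level e:
#   e + 2 t cos(n pi / (N+1)),  n = 1..N
def band(e):
    return sorted(e + 2 * t * math.cos(n * math.pi / (N + 1)) for n in range(1, N + 1))

band1, band2 = band(e1), band(e2)

# Each atomic level broadens into N closely spaced states of width < 4|t|,
# centred on the original level: two separated bands.
print(len(band1), min(band1), max(band1))
print(len(band2), min(band2), max(band2))
```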
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/285125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
When is the free charge density zero at the boundary of dielectrics It is known that across the interface of two different dielectrics, the electric displacement field must satisfy
$$(\mathbf{D}_2-\mathbf{D}_1)\cdot\mathbf{\hat{n}}=\sigma$$
where $\sigma$ is the free surface charge density at the boundary.
My question is: if both materials are dielectrics (i.e. they have no free charge), how could $\sigma$ (which is free charge indeed) appear at the boundary?
| In dielectrics with different permittivities but no conductivity, no free charge appears at the interface upon application of an electric field. However, if the dielectrics also possess conductivities, so that a current flows across the interface, a free interface charge will in general accumulate there so that the stationary normal electric currents (produced by the normal electric fields together with the conductivities) satisfy the current continuity condition. Only if the ratios of permittivity to conductivity are equal on the two sides ($\epsilon_1/\sigma_1 = \epsilon_2/\sigma_2$) will no interface charge be generated.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/285227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Can we concentrate a magnetic (electromagnetic) field the same way we concentrate light into a laser? Sorry if the question is too noobish, but I wonder: we can make a laser beam, and it can travel a long distance without losing much of its power.
To my understanding magnetic field just propagates around its source (like magnet).
Can we do something to concentrate a magnetic field so that, instead of propagating in all directions, it would "beam" in only one direction?
EDIT
A solenoid can "narrow" the magnetic field inside itself. I was thinking of something that could be done in open air.
| There is no way you can keep a large volume with magnetic field lines going straight to infinity. Field lines are nothing like a laser: a laser is a wave (it propagates according to wave equations), while field lines are abstract curves drawn everywhere tangent to the magnetic field.
However, you can make those field lines very long if you have powerful magnets. Simply take a U-shaped magnet. You know lines are straight between the two legs. So make a bigger magnet, you will get longer lines!
Otherwise, note that the magnetic field can be frozen into a plasma, so the field lines follow the plasma. There might be some exotic plasma configuration where you push a plasma along a straight line to extend the field lines. I don't know if that is possible, and it does not sound easy.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/285414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
To normalize in a given length, should the wavefunction at the endpoints be zero? I have an assignment question:
A free particle is moving in $+x$ direction with a linear momentum $p$. What is the wave function of the particle normalized in a length $L$?
Do I need to use the boundary condition that $f(0)=f(L)=0$?
| No, when you normalize to a length $L$ that only means you should use a region of length $L$ to do the normalization integral.
$$\int_{x_0}^{x_0 + L} \lvert\psi(x)\rvert^2\,\mathrm{d}x = 1$$
Here $x_0$ can be anything. It shouldn't matter what it is because, presumably, you are assuming the wavefunction is periodic with spatial period $L$. There is no need to impose any boundary condition other than that.
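A quick numerical check with assumed values of $p$, $L$, and $x_0$ (and $\hbar = 1$): the box-normalized plane wave $\psi(x) = e^{ipx/\hbar}/\sqrt{L}$ has $|\psi|^2 = 1/L$, so the integral over any interval of length $L$ is 1, with no node condition at the endpoints.

```python
import cmath
import math

# Box-normalization sketch (hbar = 1; p, L, x0 are illustrative values).
hbar, p, L, x0 = 1.0, 2.0, 7.0, -3.0

def psi(x):
    # Plane wave normalized over a region of length L: |psi|^2 = 1/L.
    return cmath.exp(1j * p * x / hbar) / math.sqrt(L)

# Midpoint-rule integral of |psi|^2 from x0 to x0 + L.
n = 10000
dx = L / n
total = sum(abs(psi(x0 + (i + 0.5) * dx)) ** 2 * dx for i in range(n))
print(total)
```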
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/285587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why Does Electric Potential Approach Zero at Infinity: Boundary Conditions for Infinite Conducting Sheets Imagine an infinitely long conducting "trough," as shown in the figure. The two sides are grounded, and the bottom strip is maintained at potential $V_0$. Suppose we want to know the electric potential everywhere between the plates. The electric potential is a solution to Laplace's equation, which also satisfies the required boundary conditions. The boundary conditions are $V(x,y,z) = 0$ on the left and right plates, $V(x,y,z)=V_0$ on the bottom plate, and $V(x,y,z)\to 0$ at $\infty$.
My question is about this last boundary condition. On one hand it seems reasonable, and I've been using it for years. However, I was recently asked why $V$ goes to zero at infinity, and I couldn't answer the question satisfactorily. The issue is that the bottom plate seems to resemble an infinite line of charge when we are far away from it, but we know there are problems with setting $V=0$ at infinity when we're dealing with infinite charge distributions. In the case of a line of charge, $E\propto 1/r$ where $r$ is the distance from the line of charge. If we integrate $E$ from $\infty$ to some finite $r=a$, we find $V(a)=\infty$. (I.e. $V = -\int_\infty^a E dr$ means our calculation of $V$ involves something with $\ln(\infty) - \ln(a)$, which is infinite.) With this in mind, in the case of the conducting plate in the figure, it seems that if we set $V=V_0$ at the bottom plate, then $V$ should decrease forever as we get farther away. That might lead one to say that $V\to -\infty$ as we get infinitely far from the bottom plate.
So, why does $V\to 0$ as we get infinitely far away from the bottom plate? If we had an infinite line of charge, the potential would decrease without bound as we moved away from the line charge. The difference in the case of the conducting sheets must be related to the sides being grounded (thus kept at $V=0$)?
| If you solve Laplace's equation far from the bottom of the trough, you will see that a large part of the electric field lines emerging from the bottom plate end on the grounded sidewalls. Therefore, as you say, the potential goes to zero much faster than in the case of a line charge. You would have a similar situation with two closely spaced line charges of opposite sign.
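For the standard version of this geometry (assumed here, since the figure is not shown: grounded plates at $y=0$ and $y=a$, the strip at $x=0$ held at $V_0$), separation of variables gives $V(x,y) = \frac{4V_0}{\pi}\sum_{n\ \text{odd}} \frac{1}{n} e^{-n\pi x/a}\sin(n\pi y/a)$, so the potential dies off exponentially with distance from the strip, far faster than the logarithmic behaviour of an isolated line charge. A small numerical check:

```python
import math

# Separation-of-variables solution for the grounded trough (geometry assumed:
# plates at y = 0 and y = a grounded, strip at x = 0 held at V0).
V0, a = 1.0, 1.0

def V(x, y, nmax=199):
    # Partial sum of the odd-n Fourier series for the potential.
    return (4 * V0 / math.pi) * sum(
        math.exp(-n * math.pi * x / a) * math.sin(n * math.pi * y / a) / n
        for n in range(1, nmax + 1, 2)
    )

# Exponential fall-off along the centreline y = a/2.
vals = [V(x, a / 2) for x in (0.5, 1.0, 2.0, 4.0)]
print(vals)
```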
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/285695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If I only roll my object, what angular velocity value will change? Question 1
I know this is a simple question but I just need clarification. This is an honest question, so if anyone could help reorient my brain I would appreciate it.
Let's say I have a body like this:
Yaw Pitch Roll
If I only try to roll, which angular velocity (X Y Z) will change? This brings me to question 2:
Question 2
LEFT is the Euler angle and RIGHT is the Gyroscope data. This is sampled from an IMU.
I feel like something is wrong here; it's as if Angular Velocity X and Z have been swapped. If we look only at the grey line, the Roll is probably less than 5 degrees, but the Angular Velocity Z is huge. Does this make sense? It feels like the degrees/sec of my roll is plotted in Angular Velocity X.
| There is a difference between the net angle change (roll, pitch, and yaw) and the angular velocity: angular velocity depends on time (keeping track of time is essential for calculating it), while roll, pitch, and yaw are just the net rotated angles, sampled at regular intervals of time.
This is, I suppose, field data from a real plane, so the graph shows the net angle change, and how quickly it is achieved is given by the angular velocity. Here the angle has been reached fairly quickly, which could be because of the will of the pilot or the wind.
So a small change has been achieved really quickly.
P.S. In any case you can't validate the data acquired, because no supporting context is given.
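One hedged way to see how a "swap" can appear: a rate gyro measures body-axis angular rates $(p, q, r)$, which equal the Euler-angle rates only at small angles. Using the standard aerospace ZYX (yaw-pitch-roll) kinematics (an assumption; the IMU's actual convention isn't stated in the question), a yaw rate at nonzero pitch leaks into the X gyro even when the roll angle stays small:

```python
import math

# Body angular rates (what a gyro measures) from Euler-angle rates,
# standard aerospace ZYX (yaw-pitch-roll) convention:
#   p = droll  - dyaw * sin(pitch)
#   q = dpitch * cos(roll) + dyaw * cos(pitch) * sin(roll)
#   r = -dpitch * sin(roll) + dyaw * cos(pitch) * cos(roll)
def body_rates(roll, pitch, droll, dpitch, dyaw):
    p = droll - dyaw * math.sin(pitch)
    q = dpitch * math.cos(roll) + dyaw * math.cos(pitch) * math.sin(roll)
    r = -dpitch * math.sin(roll) + dyaw * math.cos(pitch) * math.cos(roll)
    return p, q, r

# Pure roll at level attitude: only the X gyro responds.
print(body_rates(0.0, 0.0, droll=0.2, dpitch=0.0, dyaw=0.0))

# A yaw rate at 60 degrees of pitch leaks strongly into the X gyro,
# even though the roll angle itself stays small.
print(body_rates(0.0, math.radians(60), droll=0.0, dpitch=0.0, dyaw=0.5))
```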
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/285802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If we were on the Moon would Earth appear to be in motion or at rest? If we were on the Moon, would Earth appear stationary or would it appear to move? I think it must be stationary because the Moon is in synchronous rotation with Earth.
| Because the Moon is tidally locked the Earth will be in a nearly fixed place in the sky, while the sun rises and sets once every orbit (about once a month). There's a really cool animation/video from NASA that shows the moon undergoing libration as it orbits the Earth. From the point of view of the moon, the Earth would trace a path in the sky dictated by the libration motion of the moon.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/285887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 1
} |
Why does the clock at rest run faster, while another clock slows when moving? I have observed from my first question that it is hard for me to study special relativity from every frame of reference. But there is one important question in my head right now: time runs slower for a moving body when observed from rest, and the clock at rest runs faster when observed from the moving body. But the rate at which the clock ticks slower for one and faster for the other is different. Why is it not the same rate? Please answer briefly and in simple language.
| Einstein's postulates entail SYMMETRICAL time dilation - either clock is slow as judged from the other clock's system. Instead of honestly deriving this in 1905, Einstein derived, fraudulently and invalidly of course, ASYMMETRICAL time dilation - in his 1905 article the moving clock is slow and lags behind the stationary one which is, accordingly, FAST (this means that the moving clock and its owner travel into the future - if their speed is great enough, they can jump, within a minute of their experienced time, millions of years ahead):
http://www.fourmilab.ch/etexts/einstein/specrel/www/
ON THE ELECTRODYNAMICS OF MOVING BODIES, A. Einstein, 1905: "From this there ensues the following peculiar consequence. If at the points A and B of K there are stationary clocks which, viewed in the stationary system, are synchronous; and if the clock at A is moved with the velocity v along the line AB to B, then on its arrival at B the two clocks no longer synchronize, but the clock moved from A to B lags behind the other which has remained at B by tv^2/2c^2 (up to magnitudes of fourth and higher order), t being the time occupied in the journey from A to B."
So even if Einstein's 1905 postulates were true (actually the second one is false), physics would still be dead by now, corrupted by the metastases of the asymmetrical time dilation (moving clocks run slower than stationary ones) invalidly deduced by Einstein in 1905.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/286038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
bubble/drop Reynolds number The bubble/drop Reynolds number makes me confused and I hope someone can help me on this please!
Normally (as I read in every book and paper), when a bubble or drop rises in a fluid, the bubble/drop Reynolds number is calculated by:
Re = ρUD/μ
where $U$ is the particle velocity, $D$ the particle diameter, and $\rho$ and $\mu$ the density and viscosity of the continuous fluid.
My question is: why don't we use the $\rho$ and $\mu$ of the bubble/drop? Why use the values of the surrounding fluid?
What is the physical meaning of this Re?
Thanks in advance.
| In some cases the Reynolds number is used to decide whether the flow of the fluid is laminar or turbulent, so it is the fluid, not the object (your bubble), that is important.
That is why the density and the viscosity of the fluid appear in the Reynolds number.
If the flow is not turbulent the analysis is much easier, and in the case of a spherical body, if the Reynolds number is much less than one, Stokes' law applies, which relates the viscous drag $F$ to the radius of the spherical body $r$, the viscosity of the fluid $\eta$, and the relative speed between the fluid and the body $v$:
$F=6 \pi r v \eta$
The Reynolds number has other uses in fluid dynamics, one of which is scaling: for example, interpreting how readings taken on a model boat translate to a full-sized ship.
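An illustrative sketch (all numbers assumed, not from the question): a roughly millimetre-sized air bubble rising in water, showing why the continuous fluid's $\rho$ and $\mu$ enter Re, and the Stokes drag formula for the low-Re regime.

```python
import math

# Assumed numbers: a 1 mm bubble rising in water at 0.1 m/s.
rho, mu = 998.0, 1.0e-3  # density (kg/m^3) and viscosity (Pa s) of the CONTINUOUS fluid
D, U = 1.0e-3, 0.1       # bubble diameter (m) and rise speed (m/s)

# Re compares inertial to viscous stresses in the fluid the bubble moves
# through, which is why the surrounding fluid's rho and mu appear.
Re = rho * U * D / mu
print(Re)  # ~100: well outside the Stokes (Re << 1) regime

# For Re << 1 Stokes' law would apply: F = 6 pi eta r v (assumed slow case).
r, v = D / 2, 1e-4
F_stokes = 6 * math.pi * mu * r * v
print(F_stokes)
```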
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/286285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Near Earth vs Newtonian gravitational potential Newton's Law of Universal Gravitation tells us that the potential energy of object in a gravitational field is $$U ~=~ -\frac{GMm}{r}.\tag{1}$$
The experimentally verified near-Earth gravitational potential is
$$U ~=~ mgh.\tag{2}$$
The near-Earth potential should be an approximation for the general potential energy when $r\approx r_{\text{Earth}}$, but the problem I'm having is that they scale differently with distance. $(1)$ scales as $\frac 1r$. So the greater the distance from the Earth, the less potential energy an object should have. But $(2)$ scales proportionally to distance. So the greater the distance from the Earth, the more potential energy an object should have.
How is this reconcilable?
| Given a conservative force $F$, the potential energy difference between two points $s_0$ and $s_f$ is
$$\Delta U=-\int_{s_0}^{s_f} F\,ds$$
In the case of gravity, taking the outward radial direction as positive, the force on the object is attractive:
$$F=-\frac{GMm}{r^2},\quad ds=dr$$
Taking the reference point at infinity, so that $s_0=\infty$, $U(\infty)=0$, and $s_f=r$,
$$U=-\int_{\infty}^{r}\left(-\frac{GMm}{r'^{\,2}}\right)dr'=-\frac{GMm}{r}$$
Now, over small distances by the Earth's surface, the force is approximately constant. If we substitute in
$$g\equiv\frac{GM}{r_e^2}$$
and assume that $g$ is essentially constant between our reference point and $h$, we can say that
$$\Delta U=\int_0^hmgds=mg\int_0^hds=mgh$$
So $(1)$ is the actual expression for the potential energy at a point if we assume that $g$ changes; $(2)$ is an approximation if we assume that the change in $g$ is small. This is valid near Earth's surface, as John Rennie showed, but it's generally not valid over large distances.
I should note something about reference points. In the case of $(1)$, $r$ is measured from the centre of the mass $M$; in the case of $(2)$, $h$ is measured from some arbitrary reference point at a fixed distance from the centre of $M$. Generally, you could take this to be the Earth's surface, but the choice is often unimportant for conservation-of-energy problems, and you can pick whatever makes the calculation simpler, so long as $g$ is approximately constant.
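As a numerical sanity check (standard constants; the heights are illustrative), the exact $\Delta U$ from $(1)$ agrees with $mgh$ from $(2)$ to a fractional error of about $h/r_e$, which is tiny near the surface and grows with altitude:

```python
# Compare the exact potential-energy difference from (1) with the mgh
# approximation (2) for increasing heights above the Earth's surface.
G = 6.674e-11        # m^3 kg^-1 s^-2
M = 5.972e24         # kg, Earth's mass
m = 1.0              # kg, test mass
r_e = 6.371e6        # m, Earth's radius
g = G * M / r_e**2   # ~9.82 m/s^2

for h in (1.0, 1e3, 1e5):
    exact = -G * M * m / (r_e + h) - (-G * M * m / r_e)  # Delta U from (1)
    approx = m * g * h                                    # (2)
    print(h, exact, approx, 1 - exact / approx)           # error ~ h / r_e
```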
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/286360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 5,
"answer_id": 0
} |
Is the gravitational effect of distant galaxies lost forever? Hubble's law is usually expressed by the equation
$$v = H_0D$$
According to this equation, the space between us and very distant galaxies,
is expanding with a speed greater than the speed of light $c$.
As a result the light from these galaxies can no longer be detected.
Can we also assume that the 'gravitational effect' that these galaxies exert can no longer influence our visible universe?
Since these galaxies can no longer interact in any way with other galaxies, does this means that in a way they form their own 'universe'?
How does the theory of the 'big crunch' deal with this?
| The influence of gravity and gravitational waves are thought to travel at the speed of light. So what goes for light also goes for gravity.
Galaxies that we see now can already be receding at greater than the speed of light. As Thriveth says in his comments, this is the case for galaxies at redshift more than 1.4. We see them because the light we see was emitted in the past.
The edge of the observable universe, and therefore the greatest distance at which objects can influence us now, either through light or gravity, is some 46 billion light years away. This is called the particle horizon.
There is another horizon at about 16 billion light years which refers to how far away an object can be now such that its light and gravitational waves never reach us in the future. This is called the event horizon.
The exact values of these numbers depend on the cosmological parameters and, in the case of the vacuum energy density, their time dependence.
In an expanding, accelerating universe, these horizon distances do increase, but all galaxies will eventually reach a point where they lie beyond the event horizon and their influence will no longer be felt in the future.
Of course, a big crunch does not happen in an expanding, accelerating universe.
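For a rough scale (assuming $H_0 \approx 70$ km/s/Mpc), the distance at which the naive $v = H_0 D$ reaches $c$, the Hubble distance, comes out near 14 billion light years; the particle and event horizons quoted above differ from this because they require integrating the full expansion history:

```python
# Where does v = H0 * D reach the speed of light? (H0 value assumed.)
# This "Hubble distance" is only a rough scale, not a true horizon.
c = 2.998e5    # speed of light, km/s
H0 = 70.0      # Hubble constant, km/s per Mpc

D_hubble_Mpc = c / H0                          # ~4300 Mpc
D_hubble_Gly = D_hubble_Mpc * 3.262e6 / 1e9    # 1 Mpc ~ 3.262e6 light years
print(D_hubble_Mpc, D_hubble_Gly)
```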
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/286593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
What would be the atomic number of the atom whose 1s electron moves nearly at the speed of light? What would be the atomic number of the atom (maybe hypothetical) whose $1s$ electron moves at $0.99c$ (99% of the speed of light)?
Quantum mechanics might have an answer, but I do not know the necessary maths to calculate. I am interested in the answer.
In this article they say that the speed of the electron defines gold's property through relativistic quantum mechanics.
| Gold has strong absorption at wavelengths below roughly 500 nm, i.e. it absorbs blue light. The complement of that blue is yellow, so the reflected light looks yellow to the eye.
Yes, the effect is due to special relativity, but it is slightly different for different elements. The relevant line for gold is the 5d to 6s transition, and the relativistic effect shifts it into the blue. It is similar but not as strong for cesium, which also looks yellowish, just not as much as gold.
Note that you CANNOT do a simple calculation to predict which elements will look yellow. If one thinks that something close to gold, at Z=79, will also look yellow, note that lead at Z=82 is not yellow at all. Note also that it is typically not the 1s electron's energy, it's really the transitions from other orbitals, and the orbital energy differences, that determine the light absorbed, and thus the color.
But most or all those heavy elements do have relativistic electrons.
See it at https://en.m.wikipedia.org/wiki/Relativistic_quantum_chemistry, and also at https://jameskennedymonash.wordpress.com/2014/07/13/why-is-gold-yellow-the-chemistry-of-gold/
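Although the answer above addresses color rather than the question's literal ask, a back-of-envelope estimate of the atomic number does exist: in the hydrogen-like (Bohr) picture, $\langle v\rangle/c \approx Z\alpha$ for a $1s$ electron, so $v = 0.99c$ corresponds to $Z \approx 0.99/\alpha \approx 136$ (screening and QED corrections ignored; a rough figure, not a precise prediction).

```python
# Rough Bohr-model estimate (hydrogen-like, neglecting screening and QED):
# for a 1s electron, v/c ~ Z * alpha, so v = 0.99 c gives Z ~ 0.99 / alpha.
alpha = 1 / 137.035999  # fine-structure constant
Z = 0.99 / alpha
print(Z)
```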
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/286770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How exactly do I distingish an interpretation from computation? Feynman wrote this in his Quantum Mechanics and Path integrals
To summarize: we compute the intensity ( ... ) of waves which would
arrive in the apparatus at x and then interpret this intensity as the
probability that a particle will arrive at x
I have a hard time distinguishing interpretation from computation. For example, I still do not understand how exactly "wave function collapse" is an interpretation when I read about it elsewhere (not in Feynman; I do the math there too, the $|c_i|^2$ thing).
So how exactly do I distinguish an interpretation from a computation?
| I think you're getting a bit mixed up over the meaning of the word interpret as it is used in two ways in quantum mechanics.
The word interpret isn't a precise scientific term, and in everyday use it means something like assign a meaning to. This is the sense in which Feynman is using the word. Quantum mechanics is (like all physical theories) a mathematical model, so it is just lots of equations. It is meaningful only when we assign physical meanings to those equations, i.e. interpret them. What Feynman is saying is that we interpret $|\psi|^2$ to mean the probability density, i.e. the physical meaning of the expression $|\psi|^2$ is the probability density.
There is a second and much wider meaning that refers to interpretations of quantum mechanics. This deals with the physical meaning of the whole theory of QM and not just specific bits of it, like the physical meaning of $|\psi|^2$ or $-i\hbar\frac{d}{dx}$. Wavefunction collapse falls into this area. The whole area of interpretations of quantum mechanics is a somewhat vexed one, as it isn't clear how we'd ever prove which interpretation was correct.
While the physical meaning of $|\psi|^2$ is a precise question with a precise answer, the physical meaning of interpretations of quantum mechanics is very vague and ill defined and regarded by many of us as an excellent way of wasting time that could be put to better use.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/286879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Should zero be followed by units? Today at a teachers' seminar, one of the teachers asked for fun whether zero should be followed by units (e.g. 0 metres/second or 0 metre or 0 moles). This question became a hot topic, and some teachers were saying that, yes, it should be while others were saying that it shouldn't be under certain conditions. When I came home I tried to find the answer on the Internet, but I got nothing.
Should zero be followed by units?
EDIT For Reopening: My question is not just about whether there is a dimensional analysis justification for dropping the unit after a zero (as a positive answer to Is 0m dimensionless would imply), but whether and in which cases it is a good idea to do so. That you can in principle replace $0\:\mathrm{m}$ with $0$ doesn't mean that you should do so in all circumstances.
| Golly, in my opinion, when we are told to count we should start from zero, where no units need apply; zero has some meaning before being put to work. We had better state units for zero when it comes up in a description of temperature: "temperature" alone is just a description awaiting units such as Fahrenheit, Celsius, or Kelvin (absolute). Degrees with no descriptor can be implicit only in the special case of minus 40 degrees, where the two household scales agree. Get it?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/286964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "72",
"answer_count": 10,
"answer_id": 9
} |
Where does the extra kinetic energy of the rocket come from? Consider a rocket in deep space with no external forces. Using the formula for linear kinetic energy
$$\text{KE} = mv^2/2$$
we find that adding $100\ \text{m/s}$ while initially travelling at $1000\ \text{m/s}$ will add a great deal more energy to the ship than adding $100 \ \text{m/s}$ while initially at rest:
$$(1100^2 - 1000^2) \frac{m}{2} \gg (100^2) \frac{m}{2}.$$
In both cases, the $\Delta v$ is the same, and is dependent on the mass of fuel used, hence the same mass and number of molecules is used in the combustion process to obtain this $\Delta v$.
So I'd wager the same quantity of chemical energy is converted to kinetic energy, yet I'm left with a seemingly unexplained extra $100{,}000\ \text{J/kg}$, and I'm clueless as to where it could have come from.
| Assume the rocket without fuel has mass $M$, the fuel has mass $m$, and the rocket engine works by sending the fuel instantaneously backwards with velocity $v_e$ relative to the initial velocity of the rocket. Thus, by conservation of momentum, the speed gain of the rocket is
$$
\Delta v_\text{rocket} = \frac{m}{M} v_e.
$$
The kinetic energy gain in the system in the COM reference frame is
$$
\Delta T = \frac{1}{2} M (\Delta v_\text{rocket})^2 + \frac{1}{2} m v_e^2.
$$
This is the chemical energy $E_\text{chemical}$ released by burning the fuel (assuming perfect efficiency).
Now what happens when we burn prograde, i.e. accelerate in the direction of our velocity?
Let's assume that initially the fuel is in the rocket and they are in an orbit with orbital energy $E_0$, which is the sum of the kinetic energy and the potential energy,
$$
E_0 = T_0 + V_0 = \frac{1}{2} (M+m) v_0^2 - \frac{\gamma(M+m)}{r_0},
$$
where $v_0$ is the velocity of the rocket before the burn, $r_0$ is the distance of the rocket to the centre of the central body before the burn, and $\gamma$ is the gravitational parameter of the central body. Now $r_0$ is the parameter which we can choose by choosing when to burn, $E_0$ is a constant determined by our initial orbit, and $v_0$ is then a function of $E_0$ and our choice of $r_0$.
After the burn, the speed of the rocket is $v_0 + \Delta v_\text{rocket}$ and the orbital energy of the rocket is
$$
E_\text{rocket} = T_\text{rocket} + V_\text{rocket} = \frac{1}{2} M (v_0+\Delta v_\text{rocket})^2 - \frac{\gamma M}{r_0} = \frac{1}{2} M \left( v_0+\frac{m}{M} v_e \right)^2 - \frac{\gamma M}{r_0},
$$
and the speed of the fuel is $v_0 - v_e$ and the orbital energy of the fuel is
$$
E_\text{fuel} = T_\text{fuel} + V_\text{fuel} = \frac{1}{2} m (v_0- v_e)^2 - \frac{\gamma m}{r_0}.
$$
As you have seen, the Oberth effect is that the rocket ends with more kinetic energy if the burn is performed at higher $v_0$ and smaller $r_0$ (when keeping the $E_0$ constant).
The total potential energy remains the same, but the total kinetic energy changes, which results in a change in the total energy of the rocket and the fuel,
$$
(E_\text{rocket} + E_\text{fuel}) - E_0 = (T_\text{rocket} + T_\text{fuel}) - T_0 = \frac{1}{2} \frac{m^2}{M} v_e^2 + \frac{1}{2} m v_e^2 = \frac{1}{2} M (\Delta v_\text{rocket})^2 + \frac{1}{2} m v_e^2.
$$
This is the same no matter where the burn is performed! Also it is the same as it is in the initial reference frame of the rocket+fuel system, so it is the chemical energy $E_\text{chemical}$ used in the burn.
Now the question is, how does the energy gain of the rocket depend on the choice of when to burn (i.e. $r_0$, assuming $E_0$ is constant)?
The initial speed of the rocket+fuel system, $v_0$, is obtained in terms of $r_0$ as
$$
v_0 = \sqrt{2 \frac{E_0}{M+m} + \frac{2\gamma}{r_0}}.
$$
The kinetic energy gain of the rocket (not counting the fuel) when going from $v_0$ to $v_0 + \Delta v_\text{rocket}$ is
$$
\begin{align*}
\Delta T_\text{rocket} &= \frac{1}{2} M ( v_0 + \Delta v_\text{rocket})^2 - \frac{1}{2} M v_0^2 = M v_0 \Delta v_\text{rocket} + \frac{1}{2} M (\Delta v_\text{rocket})^2 \\
&= M \Delta v_\text{rocket} \sqrt{2 \frac{E_0}{M+m} + \frac{2\gamma}{r_0}} + \frac{1}{2} M (\Delta v_\text{rocket})^2.
\end{align*}
$$
This formula is a bit complicated but, as you have seen, the gain is biggest when $r_0$ is smallest, that is, when the gravitational potential energy is smallest. Because the increase in the sum of kinetic energies of the rocket and the fuel doesn't depend on $r_0$, the mathematical explanation is that
the energy gain comes from the fact that the kinetic energy of the fuel decreases more:
$$
\Delta T_\text{fuel} = E_\text{chemical} - \Delta T_\text{rocket} = \frac{1}{2} m v_e^2 - M \Delta v_\text{rocket} \sqrt{2 \frac{E_0}{M+m} + \frac{2\gamma}{r_0}}.
$$
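To make the dependence on $r_0$ concrete, here is a small numerical sketch of the formulas above. All the numbers (rocket and fuel masses, exhaust velocity, gravitational parameter, orbital energy $E_0$) are assumed purely for illustration, not taken from the answer.

```python
import math

# Illustrative numbers (all assumed): a 1000 kg rocket expels 100 kg of fuel
# at v_e = 3000 m/s near an Earth-like body, starting from an orbit with
# fixed orbital energy E0. "mu" plays the role of gamma in the formulas above.
mu = 3.986e14                 # gravitational parameter of the central body
M, m, ve = 1000.0, 100.0, 3000.0
dv = m / M * ve               # rocket's Delta v from momentum conservation
E0 = -2.0e10                  # orbital energy of the bound rocket+fuel orbit

E_chem = 0.5 * M * dv**2 + 0.5 * m * ve**2  # independent of where we burn

gains = {}
for r0 in (7.0e6, 1.0e7, 2.0e7):
    v0 = math.sqrt(2 * E0 / (M + m) + 2 * mu / r0)
    gains[r0] = M * dv * v0 + 0.5 * M * dv**2     # Delta T of the rocket
    print(f"r0 = {r0:.0e} m: rocket gains {gains[r0]:.3e} J "
          f"(fuel balance: {E_chem - gains[r0]:.3e} J)")
```

The printout shows the rocket's kinetic-energy gain growing as the burn radius shrinks, while the fuel's share shrinks correspondingly, so the sum stays equal to $E_\text{chemical}$.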
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/287101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36",
"answer_count": 5,
"answer_id": 2
} |
In car driving, why does wheel slipping cause loss of control? When driving a car on ice, there is a danger of slipping, thereby losing control of the car.
I understand that slipping means that as the wheels rotate, their circumference covers a total distance larger than the actual distance traveled by the car. But why does that result in a loss of control?
| As all the answers have mentioned, the reason for slippage is the change of the friction coefficient from higher to lower when going from static to dynamic friction. The reason for skidding and loss of control has two parts:
The physics of motion, and the driver's oversteering.
1- Tires have treads and indentations designed to impress the ice and make shallow, temporary, microscopic grooves that help traction and steering even in icy conditions.
They do this by slanted grooves on their treads which flex in a way that leads the car into a turn smoothly. When the tire slips faster than the speed of the car, it grinds away these imprints, skids off the straight path, and loses grip on the road. When the skid starts, the suspension, which was compressed under the dynamic loads, is freed and expands suddenly in a jerk, causing further instability and loss of control authority, such as the start of wild turns.
2- The driver, not used to the new low-friction regime, thinks that by oversteering he can regain control, but exacerbates the situation by plowing through any small imprint the tire has established, destroying the weak traction that was starting to develop.
The best way to regain control is to take advantage of the car's momentum: let go of the accelerator pedal gently, don't steer momentarily, and let the car establish a straight track; then, when assured of traction, steer gently and carefully.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/287160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 6,
"answer_id": 2
} |
Is energy relative or absolute? Does gravity break the law of energy conservation? Imagine a meteor, with a mass of 1 kg, traveling towards the earth at a velocity of 1 mile/hr. It has very little energy, as it can easily be brought to rest. Now as it enters earth's gravitational field, its velocity increases. Now it has very high potential energy, given by mass$\times$gravity$\times$height. So how did it gain this energy? How is total energy conserved here? Is energy relative or absolute?
| The statement that gravitational potential energy is $U=mgh$, with the height $h$ measured relative to some arbitrary vertical zero, is an approximation.
The potential energy associated with the gravitational interaction between two masses $M$ and $m$ is given by
$$
U(r) = -G \frac{Mm}{r},
$$
where $G$ is an empirical constant and $r$ is the separation between the two masses. The usual connection $\mathbf F = -\mathbf\nabla U$ between force and potential energy gives the usual inverse-squared force law.
This also has the nice feature that the interaction energy $U$ goes to zero if the distance between the two masses becomes very large.
Since only changes in potential energy are measurable (at least, in classical physics), having a negative gravitational potential energy everywhere is not a terrible flaw.
If you're near the surface of a planet with radius $R$, and your distance from the center of the planet changes by some height $h\ll R$, you can use the
binomial approximation
\begin{align}
(1 + \epsilon)^n
&= 1 + n\epsilon + \frac{ n (n-1)}{2!} \epsilon^2 + \cdots
\\&\approx 1 + n\epsilon
\end{align}
to find the change in the potential energy:
\begin{align}
U(R+h) &= -G\frac{Mm}{R+h}
\\ &= -G\frac{Mm}{R} \times \left(1+\frac hR\right)^{-1}
\\ &\approx -G\frac{Mm}{R} \times \left(1-\frac hR\right)
\\&= -\frac {GMm}{R} +
m \left( \frac{GM}{R^2} \right) h
\\ U(R+h) &\approx U(R) + mgh
\end{align}
The approximation fails if your height changes by a substantial fraction of Earth's radius, which seems to be part of your confusion.
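The size of the error in $mgh$ can be checked directly. This sketch plugs in round values for Earth ($G$, $M$, $R$ are assumed standard figures) and compares the exact $\Delta U$ against $mgh$ for several heights:

```python
import math

# Numeric check of the binomial approximation above (assumed round values:
# G = 6.674e-11 SI, M_earth = 5.972e24 kg, R_earth = 6.371e6 m, m = 1 kg).
G, M, R, m = 6.674e-11, 5.972e24, 6.371e6, 1.0
g = G * M / R**2  # about 9.8 m/s^2

def dU_exact(h):
    # exact change in potential energy when rising from R to R + h
    return -G * M * m / (R + h) + G * M * m / R

errors = {}
for h in (100.0, 1e4, 1e6):
    errors[h] = abs(dU_exact(h) - m * g * h) / dU_exact(h)
    print(f"h = {h:>9.0f} m: relative error of mgh is {errors[h]:.2%}")
```

The relative error is essentially $h/R$: negligible for a 100 m climb, but around 15% once $h$ is a sizable fraction of Earth's radius, which is exactly where the approximation breaks down.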
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/287292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
How to get pressure from continuity equation for an incompressible fluid? The initial formulation of continuity equation for in-compressible fluids does not contain initially pressure.
$$\nabla \cdot \vec v = 0$$
I have seen, in some books it is assumed that pressure is calculated from continuity equation. How can we get such a relation from continuity equation which does not contain pressure initially?
The author calls such a formulation of continuity equation -from which we get pressure- as 'primitive variable formulation'.
| You solve two equations: continuity and Navier Stokes equation, to find two unknowns: velocity vector field and (scalar) pressure field. Solving only Navier Stokes equation gives you velocity field as a function of pressure. Then the pressure field must be such that the resulting velocity field satisfies continuity equation. This is what is meant when one says that pressure is to be found from continuity equation.
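What "finding pressure from continuity" means in practice can be sketched with a projection step (this is an illustrative sketch, not the book's formulation; the grid size and the sample velocity field are assumed). On a periodic grid we use forward differences for the divergence and backward differences for $\nabla p$, so that $\nabla\cdot\nabla p$ is the standard 5-point Laplacian; solving $\nabla^2 p = \nabla\cdot v^*$ and correcting $v = v^* - \nabla p$ produces a field that satisfies the continuity equation.

```python
import math

n, h = 16, 1.0 / 16
idx = lambda i: i % n  # periodic wraparound

# a smooth velocity field that is NOT divergence-free
vx = [[math.sin(2 * math.pi * i * h) * math.cos(2 * math.pi * j * h)
       for j in range(n)] for i in range(n)]
vy = [[math.sin(2 * math.pi * j * h) for j in range(n)] for i in range(n)]

def div(u, w):  # forward-difference divergence
    return [[(u[idx(i + 1)][j] - u[i][j] + w[i][idx(j + 1)] - w[i][j]) / h
             for j in range(n)] for i in range(n)]

rhs = div(vx, vy)

# solve Laplacian(p) = rhs with damped Jacobi (the 2/3 damping handles the
# oscillatory checkerboard mode that plain Jacobi cannot damp on a periodic grid)
p = [[0.0] * n for _ in range(n)]
for _ in range(1200):
    p = [[p[i][j] / 3 + (2 / 3) * (p[idx(i + 1)][j] + p[idx(i - 1)][j]
          + p[i][idx(j + 1)] + p[i][idx(j - 1)] - h * h * rhs[i][j]) / 4
          for j in range(n)] for i in range(n)]

# correct the velocity with the backward-difference pressure gradient
vx2 = [[vx[i][j] - (p[i][j] - p[idx(i - 1)][j]) / h for j in range(n)] for i in range(n)]
vy2 = [[vy[i][j] - (p[i][j] - p[i][idx(j - 1)]) / h for j in range(n)] for i in range(n)]

residual = max(abs(d) for row in div(vx2, vy2) for d in row)
print(residual)  # essentially zero: the corrected field satisfies continuity
```

The pressure field is thus exactly the scalar that makes the velocity divergence-free, which is the sense in which it "comes from" the continuity equation.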
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/287385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Could one see his hands in a dark moonless night in Sahara desert? If you were standing in a place on earth on a moonless night where there was no other light to be seen except starlight, could you see your hand held in front of your eyes?
| Most likely you will see it, especially after your eyes have adapted to the darkness. However, the main reason seems to be not the light from the stars but other sources. See here, for example:
http://www.skyandtelescope.com/astronomy-blogs/why-we-can-see-in-the-dark/
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/287469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Spin representation in 3D How do you represent $S_x$, $S_y$ and $S_z$ as 3D matrices? Can someone explain how $$\left[ J_i,J_j \right] = i\hbar\epsilon_{ijk}J_k$$ comes out in 3D also? How does it relate to $S_x$, $S_y$, $S_z$? And how can I write $S_x$, $S_y$ and $S_z$ in Dirac notation in 3D?
Here is the 3D matrix representation for $S_x$ and $S_y$ and $S_z$.
$$S_x=\frac{\hbar}{\sqrt{2}}
\begin{bmatrix}0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}
\\ S_y=\frac{\hbar}{\sqrt{2}}
\begin{bmatrix}0 & -i & 0 \\ i & 0 & -i \\ 0 & i & 0 \end{bmatrix}
\\ S_z=\hbar
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{bmatrix}
\\ H=\hbar^2
\begin{bmatrix}A & 0 & B \\ 0 & 0 & 0 \\ B & 0 & A \end{bmatrix}
$$
I have asked my teacher through the mail; he said I should use the relation $S_\pm = S_x \pm iS_y$. I think it relates to how to build the representation of $$ H = A S^{2}_z + B(S_x^2 - S_y^2) $$
I am confused whether $B(S_x^2 - S_y^2)$ gives $B S_z$?
I don't know how I can apply this. I know how to represent $S_x$, $S_y$, $S_z$ in 2D, but I just don't know why.
| It sounds like what you're asking is: how do you construct a representation of SU(2) in terms of 3x3 matrices on a real 3-dimensional vector space? (This representation is also known as the "spin-1" representation, as it's used to describe the spin of spin-1 particles.)
The H you mention, which appears to be some kind of Hamiltonian, is irrelevant to the above question. I assume it is part of a longer homework question which isn't described fully here, so I'll ignore it.
As your teacher mentions, a simple way to construct $S_x$, $S_y$, and $S_z$ is to start with the raising and lowering operators $S_+$ and $S_-$.
If you work in the $S_z$ basis, then you know what the action of $S_z$ is on each of the 3 $S_z$ eigenstates:
$S_z \begin{bmatrix}1 \\ 0 \\ 0 \end{bmatrix} = \hbar \begin{bmatrix}1 \\ 0 \\ 0\end{bmatrix}$
$S_z \begin{bmatrix}0 \\ 1 \\ 0 \end{bmatrix} = 0$
$S_z \begin{bmatrix}0 \\ 0 \\ 1 \end{bmatrix} = -\hbar \begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix}$
So the 3x3 matrix form of $S_z$ in this basis must be:
$S_z = \hbar\begin{bmatrix}1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{bmatrix}$
And you also know the actions of the raising and lowering operators on these $S_z$ eigenstates (up to an undetermined constant):
$S_+$ $\begin{bmatrix}0 \\ 0 \\ 1 \end{bmatrix} = c\begin{bmatrix}0 \\ 1 \\ 0 \end{bmatrix}$
$S_+$ $\begin{bmatrix}0 \\ 1 \\ 0 \end{bmatrix} = c\begin{bmatrix}1 \\ 0 \\ 0 \end{bmatrix}$
$S_+$ $\begin{bmatrix}1 \\ 0 \\ 0 \end{bmatrix} = 0$
$S_-$ $\begin{bmatrix}0 \\ 0 \\ 1 \end{bmatrix} = 0$
$S_-$ $\begin{bmatrix}0 \\ 1 \\ 0 \end{bmatrix} = c\begin{bmatrix}0 \\ 0 \\ 1 \end{bmatrix}$
$S_-$ $\begin{bmatrix}1 \\ 0 \\ 0 \end{bmatrix} = c\begin{bmatrix}0 \\ 1 \\ 0 \end{bmatrix}$
If you take those actions and write them in matrix form, you get:
$S_+$ = c$\begin{bmatrix}0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0\end{bmatrix}$
$S_-$ = c$\begin{bmatrix}0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{bmatrix}$
Then, you can write down $S_x$ and $S_y$ just by taking the right linear combinations of $S_+$ and $S_-$:
$S_x = \frac{1}{2}(S_+ + S_-)$
$S_y = \frac{1}{2i}(S_+ - S_-)$
The only final step required is to determine the constant c. This can be determined by finding the eigenvalues of the $S_x$ and $S_y$ matrices. You want them to be $-\hbar$, $0$, and $\hbar$. You can accomplish this by setting $c = \hbar\sqrt{2}$.
As for the commutation relations $[J_i,J_j] = i\hbar\epsilon_{ijk} J_k$, it just means that:
$[S_x,S_y] = i\hbar S_z$
$[S_y,S_z] = i\hbar S_x$
$[S_z,S_x] = i\hbar S_y$
You can verify these directly using matrix multiplication, for example by showing that $S_x S_y - S_y S_x = i\hbar S_z$
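That final verification is easy to do numerically. The sketch below builds the spin-1 matrices from the ladder operators exactly as described (in units where $\hbar = 1$), using plain nested lists so nothing beyond the standard library is needed, and checks all three commutators:

```python
import math

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(3)] for i in range(3)]

def scale(c, a):
    return [[c * a[i][j] for j in range(3)] for i in range(3)]

def close(a, b):  # elementwise comparison with a small tolerance
    return all(abs(a[i][j] - b[i][j]) < 1e-12 for i in range(3) for j in range(3))

c = math.sqrt(2)  # the constant fixed at the end of the answer (with hbar = 1)
Sp = scale(c, [[0, 1, 0], [0, 0, 1], [0, 0, 0]])
Sm = scale(c, [[0, 0, 0], [1, 0, 0], [0, 1, 0]])
Sx = scale(0.5, [[Sp[i][j] + Sm[i][j] for j in range(3)] for i in range(3)])
Sy = scale(1 / 2j, sub(Sp, Sm))
Sz = [[1, 0, 0], [0, 0, 0], [0, 0, -1]]

def comm(a, b):
    return sub(mul(a, b), mul(b, a))

print(close(comm(Sx, Sy), scale(1j, Sz)))  # [Sx, Sy] = i Sz
print(close(comm(Sy, Sz), scale(1j, Sx)))  # [Sy, Sz] = i Sx
print(close(comm(Sz, Sx), scale(1j, Sy)))  # [Sz, Sx] = i Sy
```

All three lines print `True`, confirming that the constructed matrices satisfy the spin commutation relations.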
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/287542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why are planets not crushed by gravity? Stars can be crushed by gravity and create black holes or neutron stars. Why doesn't the same happen with any planet if it is in the same space time?
Please explain it in simple way. Note: I am not a physicist but have some interest in physics.
| You must understand that there are two factors involved here: the first is gravity, which tries to pull the planet's material inward and crush it; the second resists this crushing, e.g. the repulsion that sometimes arises from the Pauli exclusion principle, and, in stars, the pressure from nuclear reactions. The interplay of these two factors leads to crushing in some cases but not in all.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/287622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42",
"answer_count": 6,
"answer_id": 3
} |
Why doesn't the heat of the Earth's core diffuse to the surface? The Earth has a crust, mantle, outer core and the inner core with each one getting hotter than the next. How come, over millions and millions of years, the heat that is at the center of the Earth hasn't conducted throughout the planet's material so that the entire planet is one even temperature?
This always bothered me because we all learn that temperature diffuses from high areas to low areas, yet the Earth's center is super hot while if you dig a one foot hole, the ground feels quite cold. I never understood this. Thoughts?
| The pressure at the core is higher, so higher temperatures are thermodynamically more favourable there.
More importantly, the Earth is not in thermal equilibrium. Heat can't move outward from the core nearly so efficiently as from the surface off the planet, for example, so the surface cools a lot more quickly.
There are also mechanisms which continue to generate new heat deep underground, but not at the surface: friction from the motion of material under the surface, and decay of radioactive elements there.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/287980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 1
} |
Can Hydrogen Fusion via CNO Cycle Occur in First Generation Stars CNO cycle requires the presence of carbon, nitrogen and oxygen to undergo hydrogen fusion. Does this mean that for first generation stars, no matter how big they are, can't undergo hydrogen fusion by CNO cycle because there is no carbon, nitrogen and oxygen present?
I can't seem to find any website which mentions this. All they say is if a star is large enough, it will undergo hydrogen fusion via CNO cycle, they don't specifically mention it having to be a second generation star.
| The CNO cycle does take place in the earliest massive stars, but only once a significant amount of helium has been burned into carbon by the triple alpha reaction.
Massive population III stars ($>20 M_{\odot}$) cannot be supported on the "main sequence" by pp hydrogen burning alone. What happens is that they collapse until their cores become hot enough to trigger the triple alpha reaction. This produces carbon, and once this has reached an abundance, by number, of about $10^{-10}$ of hydrogen (about 6 orders of magnitude greater than the big-bang C abundance), then the more rapid CNO cycle becomes energetically important (e.g. Ekstrom et al. 2008; Yoon et al. 2012).
In less massive stars there just isn't enough carbon for the CNO cycle to release a significant amount of energy (compared with the pp chain), but they can be supported (as main sequence stars) by the pp chain with interior temperatures too low to produce carbon (e.g. Siess et al. 2002). The CNO cycle can take place in later stages of their evolution.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/288080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why is electric field lines away from (+) and toward (-)? I have a questions about the electric field lines.
Well in the basic learning, we know that:
*
*The electric field lines extend away from a positive charge
*They move forward a negative charge
Let's take parallel plates, which create a uniform electric field. Given the basics mentioned above, it's easy to see the direction in which a positively charged object placed between the plates will move: away from the positively charged plate and toward the negatively charged plate.
The problem is if we put a negatively charged object in between. Isn't the electric field reversed?
| The direction of the field is defined to be the direction of the force on a positively charged test particle. Positive charges always move away from other +ve charges and towards -ve charges.
As @Charlie says, it is a convention, like driving on the right (or left), or which pin on a plug is "live". So that everyone can agree on the result of a calculation, we all have to define it the same way. It could be defined the other way round, but it isn't. And we can't have both - that would be confusing.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/288172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Where do symmetries in atomic orbitals come from? It is well established that:
'In quantum mechanics, the behavior of an electron in an atom is described by an orbital, which is a probability distribution rather than an orbit.
There are also many graphs describing this fact:
http://en.wikipedia.org/wiki/Electron:
(hydrogen atomic orbital - one electron)
In the figure, the shading indicates the relative probability to "find" the electron, having the energy corresponding to the given quantum numbers, at that point.
My question is: How do these symmetries shown in the above article occur?
What about the 'preferable' axis of symmetries? Why these?
|
How do these symmetries shown in the above article occur? What about the 'preferable' axis of symmetries? Why these?
For atoms subject to no net external electric or magnetic fields, the orientation of the axes is arbitrary. This shows up clearly in the math because adding up all the spherical harmonics contributing to a single shell (1s, 2s, 2p, 3s, 3p, 3d, ...) gives no angular dependence. It doesn't show clearly in the visualization because those plots employ an arbitrary cut-off in generating the display. So, short answer: the lobes of the orbitals point along the coordinate axes purely for convenience; there is no physics content to that feature of the rendering.
The fact that there are a non-negative integer number of radial or angular nodes arises from the boundary conditions on the wave-function: just like the vibrations of a guitar string only those modes that 'fit' in the space exist as time-independent solutions.
In the case that there are external electromagnetic fields, then those fields do two things:
*
*They change the shape of the time-independent solutions
*They enforce a choice of orientation on the new solutions.
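The "no angular dependence" statement is Unsöld's theorem, and it is easy to check numerically for one shell. The sketch below does it for the $l=1$ (p) orbitals, using the standard closed forms of $|Y_{1m}|^2$:

```python
import math

# Summing |Y_lm|^2 over all m in a shell is angle-independent (Unsold's
# theorem), shown here for l = 1.
def abs2_Y1(m, theta):
    # |Y_1m(theta, phi)|^2; the phi dependence e^{±i phi} drops out in the modulus
    if m == 0:
        return 3 / (4 * math.pi) * math.cos(theta) ** 2
    return 3 / (8 * math.pi) * math.sin(theta) ** 2  # m = +1 or m = -1

totals = [sum(abs2_Y1(m, t) for m in (-1, 0, 1))
          for t in (0.0, 0.4, 1.1, math.pi / 2, 2.5)]
print(totals)  # every entry equals 3/(4*pi), whatever the angle
```

Each dumbbell-shaped p orbital is strongly directional, yet the filled shell is exactly spherical — which is why the axes in the pictures carry no physical meaning.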
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/288468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 1
} |
Why is energy not conserved in this situation Suppose there are three masses that are still relative to each other in space. They are positioned in an equilateral triangle. Let's accelerate one mass towards the other two with a force. The energy added to this system should be $F\cdot{ds}$. However, according to the particle that has been accelerated, the work done is double this amount assuming that the three particles are of the same mass. I don't think that I fully understand how does the conservation of energy really works.
| Ok, I think there are two distinct problems here. Firstly, I cannot apply the same equations for energy in an accelerating coordinate system; they only work in inertial reference frames. Secondly, even under Galilean transformations the work done is not (and doesn't need to be) invariant, which is what was addressed.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/288587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
BCS state and its superconductivity I've learned in BCS theory about its ground state by applying Bogoliubov annihilation operator on it to be zero; however, in the textbook the total momentum of electrons is set to be zero. It's okay to me for this state to be a ground state for the effective Hamiltonian; however, I cannot understand why this state exhibits superconductivity. I was considering yo apply perturbation say a constant electric field $E=U/L$ to the system and calculate some kind of linear response. However, I'm not sure about the results I derived so far.
| Now, even though I haven't derived a concrete solution for what happens when an external electric field is applied to a BCS superconductor, I eventually arrived at an explanation of why the gapped BCS state is related to superconductivity.
Considering the excitation energy spectrum as illustrated (gapped):
And then consider an impurity moving with some velocity that scatters off the superconductor. For the material to behave as a superconductor, no quasi-particle should be excited to consume energy or effectively exert some kind of friction on the impurity. If the dispersion of the impurity is classical, i.e., $E_{\text{imp}} = \dfrac{1}{2}m_{\text{imp}}{\bf v}^2$, with incoming and outgoing velocities ${\bf v}_{\text{in}}, {\bf v}_{\text{out}}$, then conservation of energy and momentum give
$$\dfrac{1}{2}m_{\text{imp}}{\bf v}_{\text{in}}^2 = \dfrac{1}{2}m_{\text{imp}}{\bf v}_{\text{out}}^2 + E({\bf k}) $$
$$m_{\text{imp}}{\bf v}_{\text{out}} = m_{\text{imp}}{\bf v}_{\text{in}} - \hbar{\bf k}$$
square the second equation and divided my $2m_{\text{imp}}$, we have
$$\dfrac{1}{2}m_{\text{imp}}{\bf v}_{\text{out}}^2 = \dfrac{1}{2}m_{\text{imp}}{\bf v}_{\text{in}}^2 - {\bf v}_{\text{in}}\cdot\hbar{\bf k} + \dfrac{\hbar^2{\bf k}^2}{2m_{\text{imp}}}$$
compare with the first equation we have
$$ E({\bf k}) = {\bf v}_{\text{in}}\cdot\hbar{\bf k} - \dfrac{\hbar^2{\bf k}^2}{2m_{\text{imp}}}\le \hbar|{\bf v}_{\text{in}}||{\bf k}| $$
which sets a lower bound on the incident velocity needed to create an excitation, as follows:
Therefore, in the low-energy range, as long as the system is gapped, the superconducting property survives at this level of description. The linear-response calculation of the current is not quite appropriate here, for this is a completely non-perturbative phenomenon.
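The bound $E({\bf k}) \le \hbar|{\bf v}_\text{in}||{\bf k}|$ is a Landau-style criterion: excitations are only possible above the critical velocity $v_c = \min_k E(k)/(\hbar k)$. The toy dispersion below is an assumed illustrative model (not the actual BCS quasi-particle spectrum), chosen only to contrast a gapped spectrum with a gapless one:

```python
import math

hbar = 1.0

def v_crit(E, kmin=1e-3, kmax=20.0, n=20000):
    # brute-force minimisation of E(k)/(hbar*k) over a grid of k values
    ks = [kmin + (kmax - kmin) * i / n for i in range(n + 1)]
    return min(E(k) / (hbar * k) for k in ks)

gap = 1.0
gapped = lambda k: math.sqrt(gap**2 + (k**2 / 2) ** 2)  # gapped toy spectrum
free = lambda k: k**2 / 2                               # gapless free particle

print(v_crit(gapped))  # finite: slow impurities cannot excite anything
print(v_crit(free))    # tends to 0: any impurity speed creates excitations
```

A gapped spectrum gives a strictly positive critical velocity, so slow impurities pass without dissipation, whereas for a gapless spectrum the critical velocity vanishes and friction sets in at any speed.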
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/288688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Standard Model Proton Decay Rate The electro-weak force is known to contain a chiral anomaly that breaks $B+L$ conservation. In other words, it allows for the sum of baryons and leptons to change, but still conserves the difference between the two. This means that the standard model could have a channel for protons to decay, for example into a pion and a positron. Does anyone know what the total proton decay rate through standard model channels is?
| Electroweak instantons violate baryon number (and lepton number) by three units (all three generations participate in the 't Hooft vertex). This is explained in 't Hooft's original paper. As a result, the proton is absolutely stable in the standard model. The lightest baryonic state that is unstable to decay into leptons is $^3$He. The deuteron is unstable with regard to decay into an anti-proton and leptons.
The rate is proportional to $[\exp(-8\pi^2/g_w^2)]^2$, which is much smaller than the rates for proton decay that have been discussed in extensions of the standard model. Note that the decay $^3\mathrm{He}\to$ leptons involves virtual $(b,t)$ quarks, and the rate contains extra powers of $g_w$ in the pre-exponent (which does not matter much, given that the exponent is already very big).
Just to give a rough number, the lifetime is a typical weak decay lifetime (say, $10^{-8}$ sec), multiplied by the instanton factor
$$
\tau = \tau_w \exp(16\pi^2/g_w^2)=\tau_w\exp(4\pi\cdot 137\cdot\sin^2\theta_W)
= \tau_w\cdot 10^{187}\sim 10^{180}\, sec
$$
where I have neglected many pre-exponential factors which can be calculated, in principle, in the standard model.
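The arithmetic behind that rough number can be reproduced in a few lines. The inputs are the round values implicit in the answer ($\alpha \approx 1/137$, $\sin^2\theta_W \approx 0.25$, $\tau_w \approx 10^{-8}$ s), which are assumptions of this estimate:

```python
import math

exponent = 4 * math.pi * 137 * 0.25        # = 16*pi^2 / g_w^2
decades = exponent / math.log(10)          # instanton suppression in powers of 10
log10_tau = -8 + decades                   # tau = tau_w * exp(exponent)
print(round(decades), round(log10_tau))
```

This gives a suppression factor of roughly $10^{187}$ and a lifetime of order $10^{179}$–$10^{180}$ seconds, matching the estimate in the equation above.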
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/288892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 0
} |
Why can't we make Carnot heat engine in real life? Question is obvious: Why can't we make Carnot heat engine in real life?
I read Wikipedia and Fundamentals of Physics (Halliday) but I haven't found anything on my question. There are explanations about formulas and how it works but no obvious answer why it can't be made.
| A Carnot engine has to be perfectly reversible. This means zero friction, and perfect thermal conductivity between reservoirs*.
In practice neither of these things are possible so you will only ever get "close".
* As was pointed out by David White, reversibility requires zero temperature difference between the reservoirs; since the flow of heat is proportional to thermal gradient, an infinitesimal temperature difference implies infinitesimal heat flow, and infinite time per cycle; this is one more reason why the perfect heat engine is thermodynamically out of reach
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/288985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 9,
"answer_id": 0
} |
The dimensional analysis of the GR geodesic equation The geodesic equation parametrized by the proper time contains two terms:
$$
{d^{2}x^{\mu } \over ds^{2}}=-\Gamma ^{\mu }{}_{{\alpha \beta }}{dx^{\alpha } \over ds}{dx^{\beta } \over ds}\
$$
The dimensions of the different elements of the previous expresion are
$$
[x^{\mu }]=[s]
$$
Both have dimensions of length. Since the metric tensor is dimensionless, the Christoffel symbols $ \Gamma ^{\mu }{}_{{\alpha \beta }}$ would also seem to be dimensionless, and consequently the left and right sides of the geodesic equation have different dimensions. What is wrong?
| The Christoffel symbols are obtained by differentiating with respect to $x^\alpha$, and since the metric is dimensionless if we write the dimensions we end up with:
$$ \left[{d^{2}x^{\mu } \over ds^{2}}\right] =\left[\frac{d}{dx^\alpha}\right]\left[{dx^{\alpha } \over ds}\right]\left[{dx^{\beta } \over ds}\right] $$
So both sides have dimensions of 1/length.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/289145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Given that ice is less dense than water, why doesn't it sit completely atop water (rather than slightly submerged)? E.g.
*
*If we had a jar of marbles or something else of different densities and shook it, the most dense ones would go to the bottom and the less dense ones to the top.
(Image Source)
*If I put a cube of lead in water it would sink all the way to the bottom.
But for ice : what I am trying to understand is why doesn't the water (being denser than the ice) seek to reach the bottom, and the ice sit flat on top of it (as in the left image)? Instead, some part of the ice is submerged in the water (as in the right image), and some sits on top it.
| I'll try to explain this using some mathematics.
Let us have an ice cube floating in water. Let the density of water be $ \rho_1 $ and that of ice be $\rho_2$. Let the volume of the ice cube be $v$. Let the submerged volume be $v'$. If you consider the forces on the ice block:
$$ \rho_2 \cdot v \cdot g = \rho_1 \cdot v^\prime\cdot g $$
Cancelling g from both the sides,
$$ \rho_2\cdot v = \rho_1\cdot v^\prime$$
Now,
$$ v^\prime / v = \rho_2 / \rho_1 $$
Clearly, $\rho_2 < \rho_1$ . That means,
$$\begin{align} v^\prime / v &< 1 \\ \implies~~~~~~ v^\prime & < v\end{align}$$
This means that the submerged volume (v') is less than the cube's total volume. That also means that the cube will not fully submerge in water, provided no external force is applied.
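Plugging typical densities into $v'/v = \rho_2/\rho_1$ makes the picture quantitative (the density values below are assumed round figures: ice $\approx 917$ kg/m³, liquid water $\approx 1000$ kg/m³):

```python
rho_ice, rho_water = 917.0, 1000.0
submerged_fraction = rho_ice / rho_water
print(f"{submerged_fraction:.1%} of the ice is below the waterline")  # 91.7%
```

So roughly nine tenths of a floating ice cube sits below the surface and only the remaining tenth above it — which is why icebergs are mostly hidden.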
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/289495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58",
"answer_count": 8,
"answer_id": 6
} |
How can I calculate the improper integral appearing in the BCS gap equation for obtaining the critical temperature? To estimate the critical temperature of the BCS theory, when the gap is zero, one has the following improper integral:
$$\int_0^\infty \frac{\ln(x) }{\cosh^{2}(x)} dx $$
Many books and articles (including the original BCS article) just give the result, but do not show how to get it. How can I calculate it analytically ?
I have tried, but I can't get $\ln(\frac{4 e^{\gamma}}{\pi})$, instead I always get $\ln(\frac{8 e^{2 \gamma}}{\pi})$. I expanded $\frac{1}{\cosh^2(x)}$ as $4 \sum_{n=0}^{\infty}(-1)^{n} (n+1) e^{-2(n+1)x} $. I used the fact that $\int_0^\infty e^{-x}\ln(x)dx = -\gamma$, so I deduced that $\int_0^\infty e^{-ax}\ln(x)dx = -\frac{1}{a} (\gamma+\ln(a)) $. Then, I used $\sum_{n=0}^{\infty}(-1)^{n+1}\ln(n+1) = \frac{1}{2}\ln(\frac{\pi}{2})$. So, where am I wrong?
| Mathematica gives the result that you are trying to get, but with the opposite sign.
You are calculating sums of divergent series. This procedure requires care. It looks like you can get Mathematica's result if, say, you take the sum 1-1+1-1+... equal to 1/2 (Cesàro summation).
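A direct numerical evaluation settles the sign question. The sketch below (not part of the answer) integrates $\ln(x)/\cosh^2(x)$ with a midpoint rule, handling the integrable logarithmic singularity at $0$ analytically, and compares against $\ln(\pi/4) - \gamma$, i.e. minus the textbook $\ln(4e^{\gamma}/\pi)$:

```python
import math

gamma = 0.5772156649015329  # Euler-Mascheroni constant

eps, b, n = 1e-4, 30.0, 200000
h = (b - eps) / n
# on [0, eps] the integrand is ~ ln(x) since cosh(x) ~ 1; that piece
# integrates to eps*(ln(eps) - 1); the rest uses the midpoint rule
total = eps * (math.log(eps) - 1)
for i in range(n):
    x = eps + (i + 0.5) * h
    total += h * math.log(x) / math.cosh(x) ** 2

expected = math.log(math.pi / 4) - gamma
print(total, expected)  # both close to -0.8188
```

The numeric value agrees with $\ln(\pi/4)-\gamma \approx -0.8188$, confirming that the integral itself is the negative of $\ln(4e^{\gamma}/\pi)$.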
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/289702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Deriving a formula for the moment of inertia of a pie slice of uniform density Say you have a right cylinder of radius $R$, and you take a pie slice of angle $\theta$ at the origin with mass $M$. How can you determine the moment of inertia?
My teacher says it is impossible to derive its moment of inertia given those two variables, but this problem was in our textbook.
| Assuming that the axis of rotation is the axis of symmetry of the cylinder, then the moment of inertia (MI) is the same as that of the cylinder which it came from, ie $\frac12 MR^2$ where $M$ is now the mass of the 'pie slice' rather than the mass of the 'whole pie' (= cylinder).
The explanation is the Stretch Rule, which says that the MI is the same if an object is stretched (or compressed) symmetrically along or around the axis of rotation. If every element of mass is kept at the same distance from the axis during any transformation, then there is no change in the MI.
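A Monte Carlo sanity check (illustrative, not a proof) makes the Stretch Rule tangible: sample points uniformly over pie slices of different angles and average $r^2$; the result $I = M\langle r^2\rangle = \tfrac12 M R^2$ is independent of the slice angle, because the polar angle of a sample never enters $\langle r^2\rangle$.

```python
import math, random

random.seed(0)
R = 2.0
ratios = []
for theta in (0.5, math.pi / 2, 2 * math.pi):
    n = 200000
    acc = 0.0
    for _ in range(n):
        r = R * math.sqrt(random.random())  # uniform in area within radius R
        _phi = theta * random.random()      # uniform in the slice; never affects r
        acc += r * r
    ratios.append(acc / n / R**2)           # <r^2>/R^2, close to 1/2 each time
print([round(v, 3) for v in ratios])
```

Every slice angle, from a thin sliver to the full disc, gives $\langle r^2\rangle/R^2 \approx 0.5$, so the moment of inertia about the symmetry axis is $\tfrac12 MR^2$ regardless of $\theta$.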
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/289827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What is an instant of time? If we say that an instant of time has no duration, why does a sum of instants add up to something that has a duration? I have a hard time understanding this.
I think of one instant as being a 'moment' of time. Hence, the sum of many instants would make a finite time period (for example 10 minutes).
EDIT:
Since I got so many great answers, I was wondering, if someone can also give a illustrative example, besides the pure math ? I am just being curious...
| Perhaps it's useful here to differentiate between a specific time (as in, a one dimensional representation of a specific instant or location in time) and a duration, which is the measure of difference between two specific times.
In this case what you refer to as a summable 'instant' may actually refer to a delta of duration, for example the Planck Time - named after physicist Max Planck - which is the amount of time it takes a photon to travel the Planck Length, which according to physlink.com is
roughly equal to $1.6 × 10^{-35}m$ or about $10^{-20}$ times the size of a proton
Which makes the duration of Planck Time equal to
roughly $10^{-44}$ seconds
I offer this explanation merely as an interesting aside to the already accepted answer - which I think probably better addresses your question.
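For reference, the quoted numbers can be reproduced from the defining formula $t_P=\sqrt{\hbar G/c^5}$ (a quick sketch of mine; the constants are CODATA-style approximations):

```python
# Planck time from fundamental constants: t_P = sqrt(hbar * G / c^5).
hbar = 1.054571817e-34  # J s
G = 6.67430e-11         # m^3 kg^-1 s^-2
c = 2.99792458e8        # m / s

t_planck = (hbar * G / c**5) ** 0.5
l_planck = (hbar * G / c**3) ** 0.5

print(t_planck)       # ~5.39e-44 s, the "roughly 1e-44 s" quoted above
print(l_planck / c)   # same number: time for light to cross the Planck length
```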
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/289924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43",
"answer_count": 6,
"answer_id": 0
} |
Where does gravitational energy come from? We've all heard mass tells space how to curve and curved space tells matter how to move. But where does the energy to curve space come from? Likewise where does the energy that curved space uses to push planets around come from? I mean if I tell my son to clean his room, and he does, then I did not provide him the energy to do so.
| Asking where energy comes from is like asking about the origin of the universe. Did it all spontaneously pop into existence out of nothing, completely violating every existing law of nature, or has it always been there just waiting to expand out of the singularity when god himself gave the command. The answer is we don't know. We may never know. It's our curiosity about the world that leads to the advancement of science. If we knew the secrets of the universe, we would never learn, and that isn't a life I want to live. There are always things we don't know, and that's why the universe is so amazing. The best answer I can give you is that the energy has just been there since the dawn of creation. What's before that is a mystery.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/290230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 3
} |
How can voltmeter still measure potential difference if it has very large resistance? I am just confused: how can a voltmeter, which has a very large resistance and hence draws a small current (in the ideal case zero current), still measure a potential difference? As far as I know, a voltmeter is a modified galvanometer, and a galvanometer shows deflection only if current passes through it.
| Voltmeters come in many forms and, as their name implies, they measure a difference in potential between two points.
One important characteristic of a voltmeter is that it does not alter the potential difference it is trying to measure and this usually means that its resistance is much higher than the resistance in the circuit where the potential difference originates.
For example if a current of $1$ mA is passing through a resistor of $1$ k$ \Omega$ then the potential difference across the resistor is $1$ volt.
Putting a voltmeter of resistance $1$ k$ \Omega$ across the resistor would mean that the current through the resistor would now be $0.5$ mA with the other half of the current passing through the voltmeter.
So the voltmeter reading would now be $0.5$ V.
However if the voltmeter had a resistance of $10$ M$ \Omega$ the voltmeter would read $0.9999$ V because most of the current of $1$ mA would be flowing through the resistor and very little through the voltmeter.
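The loading arithmetic above can be sketched in a few lines (my own toy model; the $1$ mA current source and resistor values follow the example):

```python
# Loading effect of a voltmeter: a 1 mA current source drives a 1 kOhm
# resistor; the meter's internal resistance appears in parallel with it.
def reading(r_load, r_meter, i_source=1e-3):
    r_parallel = r_load * r_meter / (r_load + r_meter)
    return i_source * r_parallel   # voltage the meter actually sees

print(reading(1e3, 1e3))   # 0.5 V  -- a meter equal to the load halves the reading
print(reading(1e3, 1e7))   # ~0.9999 V -- a 10 MOhm meter barely disturbs it
```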
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/290360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Phase transition in 2D Heisenberg model When we study the two-dimensional isotropic Heisenberg model using mean field theory or by Monte Carlo simulation, we observe a phase transition at a nonzero temperature. This is in apparent contradiction with the Mermin-Wagner theorem.
Interestingly this ordering happens only in the z direction. Can anybody explain why we observe such a transition?
| If you simulate with angles, you shouldn't take theta and phi as uniform random numbers. Here
http://mathworld.wolfram.com/SpherePointPicking.html
it is explained why. If you take theta and phi uniformly at random, you oversample near the poles and your spins end up pointing up and down far more often (like Ising spins).
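A sketch (mine) of the correct sampling, drawing $\cos\theta$ uniformly instead of $\theta$:

```python
import numpy as np

# Correct uniform sampling of spin directions on the unit sphere:
# phi uniform on [0, 2*pi), but cos(theta) uniform on [-1, 1]
# (theta itself uniform would pile points up at the poles).
rng = np.random.default_rng(1)
n = 200_000

phi = 2 * np.pi * rng.random(n)
cos_theta = 2 * rng.random(n) - 1
sin_theta = np.sqrt(1 - cos_theta**2)

sx = sin_theta * np.cos(phi)
sy = sin_theta * np.sin(phi)
sz = cos_theta

# For a truly isotropic distribution each component averages to 0 and
# <s_z^2> = 1/3; uniform-theta sampling would inflate <s_z^2> towards 1/2.
print(sz.mean(), (sz**2).mean())
```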
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/290548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Constancy of temperature in a closed system Consider a thermodynamical classical isolated system, made by a small subsystem and a way large reservoir. The two could exchange heat.
Usually in such situation we say that the system is closed or is a $(N,V,T)$ system.
What perplexes me is that for the $(N,V,T)$ system, $N$ and $V$ are constant even if the system+reservoir are not yet at equilibrium. But that's not true for the temperature $T$.
I know that the larger reservoir imposes its temperature on the system. But this needs one more step.
Please, where am I wrong?
| The thermodynamic parameters $N, V, T$ are all defined only for equilibrium states. So, when your system is in contact with a heat bath, an exchange of energy takes place between the system and the bath. According to the definition of a heat reservoir, its temperature is not affected by any slight exchange of heat. Hence the system will eventually come into thermal equilibrium at the temperature $T$ of the heat bath.
The system is closed in this sense means that the combined system of system+reservoir is isolated from the rest of the universe.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/290626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Solenoidal electric field In electrostatics the electric field in a system is always irrotational, ∇×E=0. The divergence of the electric field is in general nonzero, ∇.E=ρ/ε, but in some cases it is also zero, ∇.E=0; for example, I calculated that ∇.E=0 for a dipole (away from the charges).
So in the case of this dipole both the divergence and the curl are zero.
So what does it mean when a vector field neither diverges nor rotates at all?
What kind of nature does it have?
∇×E=0 , ∇.E=0.
So it means the electric field is both solenoidal and irrotational, but how can these two conditions be satisfied simultaneously? If a vector field is solenoidal, doesn't it have to rotate, to have some curliness?
But in the picture of a dipole I can see that the electric field is bending or rotating.
Then what does the zero curl (∇×E=0) mean?
I can see the electric field is rotational.
| For a better understanding of irrotational and rotational fields I am attaching two video links about vorticity (vorticity is the curl of the velocity of a fluid flow) which cleared up the concept for me to a good extent. The term to emphasise here is CURL, not rotation; using the word rotation gives only a vague sense of it, whereas curl is the more technically correct term.
*Vorticity part 1
*Vorticity part 2
Two points to understand crystal clear from link 1 are:
a. A clear straight stream of water in laminar flow has nonzero rotationality (precisely, the curl of the velocity is not zero), even though it appears to be just flowing in a straight line.
b. A spinning tight vortex with a hole at the centre of the basin has zero rotationality (precisely, the curl of the velocity is zero), even though it appears to spin quite nicely.
Hence, curling, or say circulation, is a better term for understanding this whole phenomenon.
You can use this online vector field visualiser and plot functions like xi-yj, xj or xi+yj to understand rotational and solenoidal vector fields.
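Related to the dipole in the question, here is a finite-difference sketch (my own; an ideal point dipole with unit moment along z, constant prefactors dropped) confirming that away from the origin the dipole field has both zero divergence and zero curl:

```python
import numpy as np

# Point-dipole field, p along z, prefactors dropped:
# E = (3 (p.r) r - p r^2) / r^5, valid away from the origin.
def E(r, p=np.array([0.0, 0.0, 1.0])):
    rn = np.linalg.norm(r)
    return 3 * np.dot(p, r) * r / rn**5 - p / rn**3

def div_and_curl(f, r, h=1e-5):
    # central-difference Jacobian J[i, j] = dE_i / dx_j
    J = np.zeros((3, 3))
    for j in range(3):
        dr = np.zeros(3); dr[j] = h
        J[:, j] = (f(r + dr) - f(r - dr)) / (2 * h)
    div = np.trace(J)
    curl = np.array([J[2,1] - J[1,2], J[0,2] - J[2,0], J[1,0] - J[0,1]])
    return div, curl

div, curl = div_and_curl(E, np.array([0.7, -0.3, 0.5]))
print(div, curl)   # both ~0 up to finite-difference error
```

The curved field lines in the dipole picture are therefore not evidence of curl; bending lines and nonzero circulation are different things.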
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/290724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why does the Pauli exclusion principle not apply to bosons? The Pauli exclusion principle states that two fermions cannot have the same quantum state simultaneously, but why does this not apply to bosons with whole integer spins?
| This is a legitimate question but one for which you probably won't get any real, satisfying answer other than just "because that's how nature works".
You can "derive" the impossibility for two fermions to have the same quantum numbers from the requirement for many-fermion states to be antisymmetric with respect to the exchange of any two particles, that is,
$ \lvert \psi_1 \psi_2 \rangle = - \lvert \psi_2 \psi_1 \rangle,$
and show that there is a connection, given by the spin-statistics theorem, between spin and symmetry of the wavefunction, so that half-integer spin particles must be antisymmetric like in the above case.
But then again,
this is not really an answer to the "why" question, as it is just an equivalent way to formulate the exclusion principle.
In other words, there are no underlying or "deeper" principles or theories that can "explain" Pauli's principle from other, more fundamental assumptions (yet?).
When in physics you start asking a "why" question (like, why do magnets attract each others?), eventually you will inevitably find yourself in this situation, where the only possible answer you are left with is: "because that's how things work".
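The antisymmetry requirement can still be made concrete with a Slater determinant; here is a toy numerical sketch (mine, with made-up single-particle states):

```python
import numpy as np

# Antisymmetric two-fermion amplitude as a Slater determinant:
# psi(x1, x2) = det [[psi_a(x1), psi_b(x1)], [psi_a(x2), psi_b(x2)]] / sqrt(2)
def slater(psi_a, psi_b, x1, x2):
    m = np.array([[psi_a(x1), psi_b(x1)],
                  [psi_a(x2), psi_b(x2)]])
    return np.linalg.det(m) / np.sqrt(2)

psi_a = lambda x: np.exp(-x**2)        # two toy single-particle states
psi_b = lambda x: x * np.exp(-x**2)

amp = slater(psi_a, psi_b, 0.3, 1.1)
swapped = slater(psi_a, psi_b, 1.1, 0.3)     # exchange the particles
same_state = slater(psi_a, psi_a, 0.3, 1.1)  # both fermions in state a

print(amp, swapped)   # equal magnitude, opposite sign: antisymmetry
print(same_state)     # 0: the exclusion principle falls out of the determinant
```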
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/290838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Line integral of a vector potential From the theory of electromagnetism, the line integral $\int {\bf A}\cdot{d{\bf s}}$ is independent of paths, that is, it is dependent only on the endpoints, as long as the loop formed by pair of different paths does not enclose a magnetic flux.
Why is this true?
| It's a straightforward application of Stokes' theorem:
Given two paths $\gamma_1,\gamma_2$ with the same starting and end points, let $\gamma := \gamma_1 - \gamma_2$ be the loop obtained by going from the starting point along $\gamma_1$ to the end point, and then in the reverse direction along $\gamma_2$. Let $S$ be a surface filling $\gamma$, i.e. such that its boundary is $\gamma$.
Then we have that
$$ \int_{\gamma_1} A - \int_{\gamma_2} A = \int_\gamma A = \int_{\partial S} A = \int_S \mathrm{d}A$$
and in vector notation $\mathrm{d}A$ is $\nabla\times A = B$. But $\int_S B$ is just the magnetic flux through $S$, so if there's no flux enclosed by $\gamma$, the two integrals along the paths are equal.
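A numerical sketch of this statement (my own; the potential below is the standard idealized flux-line field, for which ${\bf B}=0$ everywhere off the line): two paths on the same side of the flux line give equal line integrals of ${\bf A}$, while a path passing around the other side differs by the enclosed flux $\Phi$.

```python
import numpy as np

# Vector potential of an idealized flux line (thin solenoid) of total flux
# PHI along the z-axis: A = PHI/(2 pi) * (-y, x) / (x^2 + y^2).
PHI = 1.0

def line_integral(points, n=20_000):
    """Midpoint-rule integral of A . dl along a piecewise-linear path."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        t = np.linspace(0.0, 1.0, n)
        x, y = (1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1
        xm, ym = (x[:-1] + x[1:]) / 2, (y[:-1] + y[1:]) / 2
        r2 = xm**2 + ym**2
        ax = -PHI * ym / (2 * np.pi * r2)
        ay = PHI * xm / (2 * np.pi * r2)
        total += np.sum(ax * np.diff(x) + ay * np.diff(y))
    return total

start, end = (1, -1), (1, 1)
straight = line_integral([start, end])                    # passes right of the line
right = line_integral([start, (2, -1), (2, 1), end])      # also stays right
around = line_integral([start, (-2, -1), (-2, 1), end])   # passes left of the line

print(straight - right)        # ~0: the loop they form encloses no flux
print(abs(around - straight))  # ~PHI: that loop winds once around the flux line
```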
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/291103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Are there any recent experiments demonstrating retrocausality? I was wondering if there are any recent experiments outside of the typical quantum mechanics single particle realm that demonstrate that retrocausality is more than pseudo-science?
| Probably not what you have in mind, because it's hard to interpret quantum experiments without making additional assumptions. The entanglement experiments that violate Bell-inequalities can be interpreted as evidence for either retrocausality or nonlocality. If you assume a local reality in space and time, then they are experimental evidence for retrocausality. If you assume no-retrocausality (as is almost always done), those same experiments are evidence for nonlocality.
There's also a relevant recent paper by two very-highly-regarded people in quantum foundations: https://arxiv.org/abs/1607.07871 . They show that even single particle experiments, combined with a certain reasonable assumption about time-symmetry, imply retrocausality.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/291198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Different hydrostatic pressure on sphere I want to ask a question that I haven't been able to answer for about a year. Consider a sphere or a cylinder that can rotate about an axis through the centre of the shape, held by a rotatable rod, with liquids no. 1 and no. 2 both being water but at different heights as shown in the image. Does the shape rotate? And if it doesn't rotate with two similar liquids, does it rotate with two different liquid types? Does it rotate forever or not?
| I think that it will never rotate whatever you do with the different liquids because the different hydrostatic pressure forces along the circumference of the cylinder are all directed towards the axis so that no torque necessary for rotation can result.
An additional argument against rotation is conservation of energy and also the impossibility of a perpetuum mobile of the first kind. The liquid levels in the device do not change. Where should the energy for rotation come from?
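The first argument can be checked numerically (my own sketch; the two-level depth profile is made up): whatever pressure profile $p(\theta)$ the liquids impose, the force on each surface element points along the radius, so the torque about the axis vanishes identically.

```python
import numpy as np

# Per unit length of the cylinder, the pressure force on a surface element is
# dF = -p(theta) * rhat * R dtheta, always radial.  The torque about the axis
# is the z-component of r x dF, which vanishes pointwise since r || rhat.
R, rho, g = 1.0, 1000.0, 9.81
theta = np.linspace(0.0, 2 * np.pi, 100_001)[:-1]
dtheta = theta[1] - theta[0]

# arbitrary asymmetric depth profile: two different liquid levels
depth = np.where(np.cos(theta) > 0, 2.0, 5.0) - R * np.sin(theta)
p = rho * g * depth

# (r x dF)_z = x dF_y - y dF_x, with r = R (cos, sin) and dF along -(cos, sin)
tau = R * np.cos(theta) * (-p * np.sin(theta)) - R * np.sin(theta) * (-p * np.cos(theta))
net_torque = np.sum(tau * R * dtheta)
print(net_torque)   # ~0: purely radial forces cannot spin the cylinder
```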
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/291684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is a logarithmic divergence? I am reading about renormalisation in QED and I have come across the term logarithmic divergence several times. Can somebody explain it to me in simple terms?
| The term 'logarithmic divergence' is normally used for integrals of the type
$$
F(x) = \int_{x_0}^x \frac{1}{\xi}\mathrm d\xi
$$
(or possibly of the form $F(x) = \int_{x_0}^x \frac{1}{\xi}f(\xi)\mathrm d\xi$ where $f(\xi)$ approaches some finite limit when $\xi\to\infty$). In these cases, the integral diverges to infinity when $x\to\infty$, but it does this relatively slowly: in fact, as a logarithm, since
$$
F(x) \approx \log(x)
$$
in the simple case (or $F(x)\approx F_0 \log(x)+\mathrm{regular}(x)$ if a non-constant $f$ is introduced).
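A quick numerical sketch (mine) of how slowly this grows:

```python
import numpy as np

# F(x) = integral from 1 to x of dxi/xi grows without bound, but only
# like log(x): multiplying x by 10^4 adds a fixed ~9.2 to F.
def F(x, n=200_000):
    xi = np.geomspace(1.0, x, n)   # log-spaced grid suits the 1/xi integrand
    y = 1.0 / xi
    return np.sum((y[:-1] + y[1:]) / 2 * np.diff(xi))   # trapezoid rule

for x in (1e2, 1e4, 1e8):
    print(x, F(x), np.log(x))      # F(x) tracks log(x)
```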
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/291816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
How does one show that the Pauli matrices together with the unit matrix form a basis of the space of complex 2 x 2 matrices? In other words, show that a complex 2 x 2 matrix can be written in a unique way as
$$
M = \lambda _ 0 I+\lambda _1 \sigma _ x + \lambda _2 \sigma _y + \lambda _ 3 \sigma_z
$$
If $$M = \begin{pmatrix}
m_{11} & m_{12} \\
m_{21} & m_{22}
\end{pmatrix} = \lambda _ 0 I+\lambda _1 \sigma _ x + \lambda _2 \sigma _y + \lambda _ 3 \sigma_z $$
I get the following equations
$$
m_{11}=\lambda_0+\lambda_3 \\ m_{12}=\lambda_1-i\lambda_2 \\ m_{21}=\lambda_1+i\lambda_2 \\ m_{22}=\lambda_0-\lambda_3
$$
| And yet another answer.
Pauli matrices $\sigma_1,\sigma_2$ and $\sigma_3$ evidently form a basis of the 3-dimensional real vector space of the 2 by 2 traceless Hermitian matrices. Since every Hermitian matrix is the sum of a traceless Hermitian matrix and a real multiple of the identity matrix, $\sigma_1,\sigma_2,\sigma_3$ and $I$ together form a basis of the 4-dimensional real vector space of the 2 by 2 Hermitian matrices.
Since every complex 2 by 2 matrix can be decomposed into the sum of a Hermitian and an anti-Hermitian matrix, and noting that $M$ is Hermitian if and only if $iM$ is anti-Hermitian, every complex 2 by 2 matrix $M$ can be written as $M=A+iB$ where both $A$ and $B$ are Hermitian. So, there are some unique real numbers $a_0,a_1,a_2,a_3,b_0,b_1,b_2, b_3$ so that $A=a_0I+\sum_i{a_i\sigma_i}$ and $B=b_0I+\sum_i{b_i\sigma_i}$, hence $M=(a_0+ib_0)I+\sum_i(a_i+ib_i)\sigma_i$; that is, the Pauli matrices and $I$ together span the complex vector space of the 2 by 2 complex matrices. Since the (complex) dimension of this vector space is 4, they form a basis.
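The decomposition can also be checked numerically: the coefficients are recovered as $\lambda_\mu = \tfrac12\operatorname{tr}(\sigma_\mu M)$ with $\sigma_0 = I$ (consistent with the explicit equations in the question, e.g. $\lambda_0 = \tfrac12(m_{11}+m_{22})$). A sketch of mine:

```python
import numpy as np

# Decompose an arbitrary complex 2x2 matrix in the {I, sx, sy, sz} basis;
# using tr(sigma_i sigma_j) = 2 delta_ij, the coefficients are
# lambda_mu = tr(sigma_mu M) / 2 (with sigma_0 = I).
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [I2, sx, sy, sz]

rng = np.random.default_rng(7)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

lam = [np.trace(s @ M) / 2 for s in basis]
M_rebuilt = sum(l * s for l, s in zip(lam, basis))
print(np.allclose(M, M_rebuilt))   # True: the four matrices span, coefficients unique
```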
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/292102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 3
} |
Is acceleration continuous? The extrapolation of this Phys.SE post.
It's obvious to me that velocity can't be discontinuous, as nothing can have infinite acceleration.
And it seems pretty likely that acceleration can't be discontinuous either - that jerk must also be finite.
All 4 fundamental forces are functions of distance so as the thing exerting the force approaches, the acceleration must gradually increase (even if that approach/increase is at an atomic, or sub-atomic level)
e.g. in a Newton's cradle, the acceleration is still due to electromagnetic repulsion, so it's a function of distance and is not changing instantaneously, however much we perceive the contact to be instantaneous. (Even if we ignore the non-rigidity of objects.)
Equally I suspect that a force can't truly "appear" at a fixed level. Suppose you switch on an electromagnet, if you take the scale down far enough, does the strength of the EM field "build up" from 0 to (not-0) continuously? or does it appear at the expected force?
Assuming I'm right, and acceleration is continuous, then jump straight to the infinite level of extrapolation ...
Is motion mathematically smooth?
Smooth: being infinitely differentiable at all points.
| With respect, I think you're splitting hairs.
Before the wall, the velocity is constant and a = 0. After the wall, the velocity is constant at 0 and a = 0. In between, the velocity is decreasing to 0; the acceleration, the derivative of velocity with respect to time, takes some nonzero value of unchanging sign (since the object is constantly decelerating). So how does the acceleration start at 0, end at 0, and take a value of unchanging sign in between without being discontinuous?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/292181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
Can Newton's laws of motion be proved (mathematically or analytically) or are they just axioms? Today I was watching Professor Walter Lewin's lecture on Newton's laws of motion. While defining Newton's first, second and third law he asked "Can Newton's laws of motion be proved?" and according to him the answer was NO!
He said that these laws are in agreement with nature and experiments follow these laws whenever done. You will find that these laws are always obeyed (to an extent). You can certainly say that a ball moving with constant velocity on a frictionless surface will never stop unless you apply some force on it, yet you cannot prove it.
My question is that if Newton's laws of motion can't be proved then what about those proofs which we do in high school (see this, this)?
I tried to get the answer from previously asked question on this site but unfortunately none of the answers are what I am hoping to get. Finally, the question I'm asking is: Can Newton's laws of motion be proved?
| If you want to prove something, you have to start with axioms that are presumed to be true. What would you choose to be the axioms in this case?
Newton's Laws are in effect the axioms, chosen (as others have pointed out) because their predictions agree with experience. It's undoubtedly possible to prove Newton's Laws starting from a different set of axioms, but that just kicks the can down the road.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/292309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34",
"answer_count": 12,
"answer_id": 1
} |
Dielectric material problem I read about dielectric just 2 days ago and come across something about what polarization means: how a neutral object can be created to be a dielectric under external electric field, etc.
Then I read about the electric field in a dielectric material and there I found two terms: one is surface polarization charge density ($\sigma$) and the other is volume polarization charge density ($\rho$), where, $\sigma=\mathbf{p}\cdot\mathbf{n}$ and $\rho=-\nabla\cdot\mathbf{p}$, with $\mathbf{p}$ being the polarization vector.
However, I don't understand why they are called surface and volume charge densities, respectively? How does $\nabla\cdot\mathbf{p}$ give a volume charge density and why does a volume charge exist when $\nabla\cdot\mathbf{p}\neq 0$? What is the relationship between the volume charge density and the fact that the polarization vector is a diverging vector?
Please provide a schematic answer.
| One way of thinking about the divergence of a vector field is that it is the flux of that vector quantity in or out of a unit volume.
I think what you are calling ${\bf p}$ is the polarisation field, the electric dipole moment per unit volume, which has units of charge per unit area. If you take the divergence of this, you calculate the net outward flux of this quantity per unit volume, which yields a charge per unit volume.
Whether you have a volume polarisation charge of course depends on the form of ${\bf p}$. If it is uniform, then clearly there is no net volume charge density. The dipole charges are separated, but there is no net charge density anywhere in the volume. A volume charge density will only arise through discontinuities in ${\bf p}$ (perhaps through discontinuities in $\epsilon$). Lines of ${\bf p}$ must begin or end on polarisation charges, but if the divergence is zero then there is no net polarisation charge and there are just as many field lines beginning as there are ending in any considered small volume.
Surface polarisation charge density arises because at some point you reach the surface of the medium and you "expose" one end of all the polarisation dipoles at the surface. If you design a Gaussian surface that cuts through a set of dipoles just inside the surface of the medium, then the total charge enclosed is $\oint {\bf p}\cdot {\bf n}\ dS = \oint \sigma\ dS$, where ${\bf n}$ is a normal unit vector to the surface.
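A one-dimensional numerical sketch of both densities (my own; the slab and the constant $k$ are made up): take a slab occupying $0\le x\le 1$ with polarization $P(x)=kx$ inside and zero outside.

```python
import numpy as np

# rho_b = -dP/dx gives a uniform volume charge -k inside the slab; the
# surface charge sigma = P.n appears only at x = 1 here, since P goes to
# zero continuously at x = 0.
k = 2.0
x = np.linspace(-0.5, 1.5, 4001)
P = np.where((x >= 0) & (x <= 1), k * x, 0.0)

rho_b = -np.gradient(P, x)

interior = (x > 0.1) & (x < 0.9)
print(rho_b[interior].mean())        # -k: the volume bound charge

# the jump of P at x = 1 shows up as a large positive spike: the smeared-out
# surface charge sigma = P(1) = k.  All bound charges together sum to zero:
total = np.sum(rho_b) * (x[1] - x[0])
print(total)                         # ~0
```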
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/292467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Regarding the Dirac Hamiltonian's use of summation notation: Einstein summation notation, as I understand it:
By writing $A_i B^i$ one implicitly means a sum over elements of the rank 1 tensors A and B. The key is the contraction of an "up" and a "down" index. In a formalism where a metric raises/lowers we should see this as an inner product, where the metric encodes the information of how an inner product is taken in such a space.
This notation is convenient tool that I employ on a daily basis. However, the Hamiltonian for the Dirac Equation of QFT fame can be written:
$$
H = \alpha_i p_i + \beta m
$$
Two down indices? Summed together?
Now, in this case we're considering a flat Minkowski space-time with $i$ summing over just the spatial indices. As such, we can raise and lower the indices for "free" with a Euclidean metric.
Is this not an abuse of notation? This is not Einstein's summation convention but instead a bastardised summation notation in which we just do not write summation symbols?
Surely it would be more explicit to write:
$$
H = \alpha_i p^i + \beta m
$$
Am I right here? Am I having some kind of mathematical mental breakdown? Both?
| As was pointed out, we are in Euclidean space, so the metric is the unit matrix $I$. If you are in Minkowski space, similar things hold with $g_{\mu\nu} = \pm\,\mathrm{diag}(+,-,-,-)$ (don't worry about the overall sign as long as you stick to one convention). There can be other metrics in general relativity, but as long as you are not in GR, you can ignore the position of the indices.
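A minimal numerical illustration of this point (my own sketch):

```python
import numpy as np

# Raising an index with the Euclidean metric (identity) changes nothing,
# so alpha_i p_i and alpha_i p^i coincide; the Minkowski metric
# diag(+,-,-,-) flips the sign of the spatial components.
p_lower = np.array([1.0, 2.0, 3.0, 4.0])   # components p_mu

euclid = np.eye(4)
minkowski = np.diag([1.0, -1.0, -1.0, -1.0])

print(euclid @ p_lower)      # [1, 2, 3, 4]: p^i == p_i, index placement is free
print(minkowski @ p_lower)   # [1, -2, -3, -4]: index placement now matters
```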
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/292670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Magnetic quantum number for d orbitals Online one can find many pictures of d-orbitals.
I know that these states correspond to :
n = 3, l = 2, m = -2, ...,2
but I don't know which one is which and I couldn't find a clear asignment anywhere. What are the magnetic quantum numbers for each of the above displayed states?
I feel like I should be able to find this everywhere, but I couldn't find any explicit statement.
| I got this one. Does this help?
The subscript of d represents the m value.
Link :http://study.com/academy/lesson/electron-orbital-definition-shells-shapes.html
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/292841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Color confinement and integer electric charge? Quarks have electric charges proportional to one third of the elementary electric charge, but both mesons and baryons have integer electric charge.
Is there some deep explanation from a more fundamental and generalizable conservation law that any state that forms an $SU(3)$ singlet must have an integer electric charge? Are there any common beyond-the-Standard-Model (BSM) scenarios where this does not hold?
I can show this for systems with just quarks & gluons, since quarks carry -1 electric charge quanta (1/3 of elementary charge) mod 3, antiquarks carry 1 electric charge quanta mod 3, and gluons carry 0 electric charge quanta mod 3 (same as a quark + antiquark pair), which implies that for any $SU(3)$ representation $D(p,q)$ composed of these, we must have that the electric charge mod 3 is equal to $(p-q)$ mod 3. So an integer Baryon number implies an integer electric charge (in terms of elementary charges). The same derivation also works for the U(1) weak hypercharge.
But this derivation just uses the explicit list of fundamental particle in the standard model with tables of their electric & color charges, and doesn't necessarily put it into any wider context. Is this just a coincidence in the standard model? Or is it a property of some GUTs as well? It feels like a really crazy coincidence that the SU(3) and the U(1)xSU(2) sectors are related to each other in this way.
| It may help noticing that the normalization of the $U(1)$ charge is arbitrary. The only meaningful information that one can have is the ratios
of the charges of the particles in the model (2:(-1) for the case of the quarks). They can always be defined to be all integer numbers.
If you want to know why the charge of the color singlets is an integer number times the electron charge (which has apparently no relation with
the color sector) then your question is more about an explanation for
the ratios of charges between quarks and leptons. In that case, you may
find useful these other two questions (the first one is mentioned in the comments):
Is there an explanation for the 3:2:1 ratio between the electron, up and down quark electric charges?
What's the deepest reason why QCD bound states have integer charge?
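As an aside, the mod-3 counting argument in the question can be brute-force checked (a sketch of mine, using only up- and down-type charges and exact rational arithmetic):

```python
from itertools import product
from fractions import Fraction

# Any collection of quarks (charge +2/3 or -1/3) and antiquarks whose net
# quark number is a multiple of 3 (i.e. integer baryon number, as in a
# colour singlet) has integer total electric charge.
UP, DOWN = Fraction(2, 3), Fraction(-1, 3)

ok = True
for n_u, n_d, n_au, n_ad in product(range(4), repeat=4):
    quark_number = n_u + n_d - n_au - n_ad
    charge = n_u * UP + n_d * DOWN - n_au * UP - n_ad * DOWN
    if quark_number % 3 == 0 and charge.denominator != 1:
        ok = False   # would be a colour singlet with fractional charge

print(ok)   # True: integer baryon number forces integer charge
```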
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/293258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is there oscillating charge in a hydrogen atom? In another post, I claimed that there was obviously an oscillating charge in a hydrogen atom when you took the superposition of a 1s and a 2p state. One of the respected members of this community (John Rennie) challenged me on this, saying:
Why do you say there is an oscillating charge distribution for a hydrogen atom in a superposition of 1s and 2p states? I don't see what is doing the oscillating.
Am I the only one that sees an oscillating charge? Or is John Rennie missing something here? I'd like to know what people think.
| The superposition of eigenstates in a hydrogen atom results in a wave function that oscillates in time with a frequency corresponding to the difference of the energies of the eigenstates. Schrödinger for a time interpreted the square of the wave function as a charge density, which resulted in an oscillating charge distribution. As this corresponded to an oscillating electric dipole and also explained the intensities and polarization of the observed light emission, he assumed heuristically that this interpretation explained the origin of light emission. See E. Schrödinger, "Collected Papers on Wave Mechanics", Blackie & Son Ltd., London and Glasgow 1928.
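For concreteness, here is a sketch of mine (in atomic units) of the oscillation for an equal-weight superposition $(|1s\rangle+|2p_0\rangle)/\sqrt2$: the dipole expectation value $\langle z\rangle(t)$ swings at the Bohr frequency $(E_{2p}-E_{1s})/\hbar$.

```python
import numpy as np

# Atomic units (hbar = m_e = e = a_0 = 1).
# <z>(t) = d * cos((E_2p - E_1s) t), with d = <1s| z |2p0>.
r = np.linspace(0.0, 60.0, 600_001)
R10 = 2 * np.exp(-r)                      # 1s radial wavefunction
R21 = r * np.exp(-r / 2) / np.sqrt(24)    # 2p radial wavefunction

f = R10 * R21 * r**3                      # radial integrand of the matrix element
radial = np.sum((f[:-1] + f[1:]) / 2 * np.diff(r))   # trapezoid rule
d = radial / np.sqrt(3)                   # angular integral contributes 1/sqrt(3)
print(d)   # ~0.745 a_0: a nonzero, oscillating dipole moment

E1s, E2p = -0.5, -0.125                   # hydrogen energies in hartree
omega = E2p - E1s                         # Bohr (angular) frequency
t = np.linspace(0.0, 2 * np.pi / omega, 5)
print(d * np.cos(omega * t))              # <z>(t) swings between +d and -d
```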
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/293359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41",
"answer_count": 5,
"answer_id": 3
} |
What is the role of pillars in bridges?
As I can see in the picture, there are many pillars holding up the bridge. This picture raised a question for me: what are these pillars doing below the bridge? An appropriate answer could be "these are providing support to the bridge".
I tried to get the answer as follows:
In the first image there are two pillars holding a bridge of mass $M$; since the gravitational force acts downwards, each pillar bears a force of $\frac{1}{2}Mg$.
In the second image there are four pillars bearing a force of $\frac{1}{4}Mg$. I'm assuming that mass of bridge is uniformly distributed and each pillar is bearing an equal amount of the load.
Now the question is: since the pillars bear the force, if we make pillars strong enough to bear a large force, then there should be no need for so many pillars.
But that is not the case: we see a large number of pillars holding a bridge. What is wrong with the work I did? Shouldn't the number of pillars depend upon the strength of the pillars we make rather than the length of the bridge?
I shall be thankful if you can provide more information about this topic.
| Not only do the pillars need to bear the weight of the bridge, but the bridge itself also needs to bear its own weight (i.e. not snap). For this reason, lots of pillars can be used to support the bridge in more places, stopping this from happening.
Further, the more pillars, the less weight each pillar holds itself. If every pillar supported the maximum weight it could support then if one was to fail, or something happened to jeopardise the integrity of the pillar, the bridge may fail. Having lots of pillars reduces this risk.
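A rough illustration of the first point (standard textbook formulas for a uniformly loaded, simply supported span; the load and rigidity numbers are made up):

```python
# For a uniformly loaded, simply supported span of length L:
#   max bending moment: M_max = w L^2 / 8
#   max mid-span sag:   delta = 5 w L^4 / (384 E I)
# Halving the span cuts the moment 4x and the sag 16x, which is why span
# length between pillars, not just pillar strength, drives the design.
w = 1.0e5    # distributed load, N/m
EI = 5.0e9   # flexural rigidity E*I, N m^2

def span_stats(L):
    return w * L**2 / 8, 5 * w * L**4 / (384 * EI)

for L in (40.0, 20.0, 10.0):
    m_max, sag = span_stats(L)
    print(L, m_max, sag)
```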
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/293451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 5
} |
If an electron is in ground state, why can't it lose any more energy? As far as I know, an electron can't go below what is known as the ground state, which has an energy of -13.6 eV, but why can't it lose any more energy? is there a deeper explanation or is this supposed to be accepted the way it is?
| The existence of a minimum energy follows from the wave-like aspects of matter.
The allowed values of energy are those corresponding to stationary wave states, so the trite answer to your question is that where you have a set of allowed energy values one of them has to be a minimum.
To get some physical insight into why the minimum level ends up where it is, without resorting to maths, you can picture a semi-classical explanation as follows. The classical treatment would say that the electron could lose potential energy by spiralling closer to the nucleus. But the electron has an associated wavelength, and the lower its energy becomes, the longer its wavelength becomes. If the electron is going to be in some standing-wave state, its orbit needs to be at least one wavelength long, so the lower its energy, the longer its wavelength and the more widely spaced its orbit. The elongation of the wavelength, through the loss of energy, ensures that the electron can't get any closer to the nucleus and thus can't lose any more energy, so you arrive at a natural minimum.
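A back-of-envelope numerical version of this argument (my own sketch): take momentum $p\sim\hbar/r$ from confinement to radius $r$ and minimize the total energy; the minimum lands at the Bohr radius and at $-13.6$ eV.

```python
import numpy as np

# Semi-classical estimate: E(r) = hbar^2 / (2 m r^2) - k e^2 / r.
# The kinetic term grows faster than the potential falls as r -> 0,
# so E(r) has a minimum: the electron cannot keep losing energy.
hbar = 1.054571817e-34   # J s
m_e = 9.1093837e-31      # kg
e = 1.602176634e-19      # C
ke2 = 8.9875517e9 * e**2 # Coulomb constant times e^2, J m

r = np.geomspace(1e-12, 1e-9, 200_000)
E = hbar**2 / (2 * m_e * r**2) - ke2 / r

i = np.argmin(E)
r_min = r[i]
E_min_eV = E[i] / e
print(r_min)     # ~5.3e-11 m: the Bohr radius
print(E_min_eV)  # ~ -13.6 eV: the ground-state energy
```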
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/293543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 3
} |
Necessary and sufficient condition for Bernoulli's theorem For an ideal fluid, if the vorticity is $\vec{\omega}=\nabla \times \vec{v}$, then Euler's equations can be rewritten as:
$$\rho \dot{v}_i = \rho \epsilon_{ijk} v_j \omega_k - \frac{1}{2} \rho \partial_i v^2 - \partial_i p $$
Any textbook will then tell you that if you have a steady flow with zero vorticity:
$$ \frac{1}{2} \rho \partial_i v^2 + \partial_i p = 0 $$
which is a differential form of Bernoulli's theorem. However, as is obvious from the previous equation, the necessary and sufficient condition for this equation to hold is not a steady flow with $\vec{\omega}=0$ but a steady flow with $\vec{v} \times \vec{\omega}=0$, which is a more general condition. I am wondering if one can give a nice geometric interpretation of this condition. In other words, what is the geometric interpretation of a vector field having $\vec{v} \times (\nabla \times \vec{v}) = 0$?
| The actual derivation of the Bernoulli equation comes from the vorticity form of the incompressible Navier-Stokes equation. In terms of vorticity, the Navier-Stokes equation take the form,
$$ \frac{\partial \vec{V}}{\partial t} + \vec{\omega} \times \vec{V} = -\nabla\left(\frac{p}{\rho} + \frac{|\vec{V}|^2}{2} + k\right) + \nu \cdot \left(\nabla \times \vec{\omega}\right)$$
Now if we have steady flow, $\frac{\partial \vec{V}}{\partial t} = 0$, and if we further assume the flow is inviscid, then the equation reduces to,
$$ \vec{\omega} \times \vec{V} = -\nabla\left(\frac{p}{\rho} + \frac{|\vec{V}|^2}{2} + k\right)$$
Obviously, if the flow is irrotational, namely $\vec{\omega} = \nabla \times \vec{V} = 0$, then we are left with,
$$ \nabla\left(\frac{p}{\rho} + \frac{|\vec{V}|^2}{2} + k\right) = 0 $$
or equivalently,
$$ \frac{p}{\rho} + \frac{|\vec{V}|^2}{2} + k = \textrm{constant}$$
This is the most famous form of the Bernoulli equation, which requires steady, incompressible, inviscid, and irrotational flow. Also, an important note on this relation, because the flow is irrotational, the Bernoulli equation can be applied across streamlines. Now for the case you specified, for instance, what if the flow is rotational? Well, you have to consider the direction of the vector quantity $\vec{\omega} \times \vec{V}$. The resulting vector of $\vec{\omega} \times \vec{V}$ is orthogonal to the velocity and vorticity vector. Therefore, along a streamline the quantity $\vec{\omega} \times \vec{V} = 0$. Hence, the resulting equation becomes,
$$ \frac{p}{\rho} + \frac{|\vec{V}|^2}{2} + k = \mathrm{constant\big|_{streamline}}$$
Therefore, the conclusion is that the Bernoulli equation can be applied across streamlines if we have steady, incompressible, inviscid, and irrotational flow ($\vec{\omega} = \nabla \times \vec{V} = 0$). However, if the flow is rotational ($\vec{\omega} = \nabla \times \vec{V} \neq 0$), we can only apply the Bernoulli equation along a streamline.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/293819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Beta decay form factors I'm trying to understand formula from Anthony Zee qft in nutshell for beta decay:
$$\langle p'|{J_5}^{\mu} |p\rangle = \bar u(p')\left[\gamma^\mu\gamma^5F(q^2) +q^\mu\gamma^5G(q^2)\right]u(p)$$
it is stated, that term with $$(p' + p)^\mu \gamma^5 A(q^2)$$
is missing because of some charge and isospin symmetry.
This is not clear because weak interactions should violate isospin, and at the same time, if one applies charge conjugation $$C=i \gamma^0\gamma^2$$ to both $u(p)$ and $\bar u(p')$, the current stays invariant and thus $A(q)$ doesn't have to be zero.
How can this be solved?
In many other books term $$\sigma^{\mu\nu} q^\nu$$ is used instead. And in this case it is clear, that this term makes the corresponding part of the current with positive G-parity, and it explains everything. Why G-parity for terms with $$(p' + p)^\mu$$ and $$(p' - p)^\mu$$ is different is completely unclear.
| It seems that one can use one of the Gordon identities:
$$0 = \bar u(p')\left[(p' + p)^\mu\gamma^5 +iq_\nu\sigma^{\mu\nu}\gamma^5\right]u(p)$$
to replace $(p' + p)^{\mu}$ with $q_\nu\sigma^{\mu\nu}\gamma^5$, which corresponds to the positive G-parity term of the current, while only negative ones are allowed for this reaction.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/294560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
What does antimatter look like? I have seen simulations of antimatter on TV. Has antimatter ever been photographed?
| Antimatter looks just like matter. Experimentally, there is no difference between the spectral lines of antihydrogen and of ordinary hydrogen. Same emission spectrum.
The photon is its own antiparticle. It interacts in the same way with matter as with antimatter.
PS: A very recent Nature article by Ahmadi et al. gives an upper bound of $2\times 10^{-10}$ on any relative difference: http://www.nature.com/nature/journal/vaap/ncurrent/pdf/nature21040.pdf
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/294966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "63",
"answer_count": 6,
"answer_id": 1
} |
Moving camera and special relativity Consider a rigid inertial coordinate system $K$.
A photo camera is located on the point $O=(0,0,0)$ and in $t=0$ it captures a picture from the light rays of the physical objects at rest wrt $K$. Let's suppose that the capture takes an infinitesimal time.
Consider the two cases:
*
*at $t=0$ the camera is in $O$ with zero velocity wrt $K$
*at $t=0$ the camera is in $O$ with non-zero velocity wrt $K$ in the direction where the camera is pointing.
Questions:
According to special relativity do we expect to find differences between the photos in case 1 and 2? What kind of differences? (Size of the objects? Angle of view?) How can these differences be explained in terms of the dynamics of light rays going inside the camera?
| *
*Light will be red- or blueshifted due to the Doppler effect. https://en.wikipedia.org/wiki/Redshift#Doppler_effect
*The size of the objects in the direction in which the camera is moving will be affected by length contraction, and the shape will also be affected if we consider where light has to come from to reach the camera when the picture is taken. As @CR pointed out in the comment below, spheres will still look circular, but other objects will have different apparent shapes. Straight lines will look curved. The detailed analysis here depends on the exact shape of the object. Some good references appear to be:
*
*http://scitation.aip.org/content/aapt/journal/ajp/29/5/10.1119/1.1937751 [more math]
*http://www.spacetimetravel.org/fussball/fussball.html [more pictures]
Both these references deal with moving objects and a stationary camera, but that is of course equivalent to stationary objects and a moving camera.
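A hedged numerical sketch of these two effects, using the standard relativistic aberration and longitudinal Doppler formulas (the camera moves toward the scene at speed $\beta c$; all numbers are illustrative):

```python
import math

def aberration(theta, beta):
    # camera-frame angle of a ray that makes angle theta with the
    # motion axis in the scene's rest frame (camera moving toward the scene)
    c = (math.cos(theta) + beta) / (1 + beta * math.cos(theta))
    return math.acos(c)

def doppler_headon(beta):
    # blueshift factor for light arriving from straight ahead
    return math.sqrt((1 + beta) / (1 - beta))

beta = 0.5
theta_cam = aberration(math.radians(30), beta)
print(math.degrees(theta_cam))   # < 30: the scene crowds toward the image center
print(doppler_headon(beta))      # light from straight ahead is blueshifted
```

So a forward-moving camera records a blueshifted scene whose features are squeezed toward the center of the image, consistent with the references above.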
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/295159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Confusion between two different definitions of work? I'm doing physics at high school for the first time this year. My teacher asked us this question: if a box is slowly raised from the ground to 1m, how much work was done? (the system is only the box)
Using the standard definition, $W = Fd\cos(\theta)$, the work should be 0, because the sum of the forces, the force due to gravity and the force of the person, is 0.
However, using the other definition he gave us, $W = \Delta E$, work is nonzero. $\Delta E = E_f - E_i$ , so that would be the box's gravitational potential energy minus zero.
My teacher might have figured it out but class ended. Does anyone have any insight?
| For a system with no internal degree of freedom (such as a point mass), work is equal to the change in kinetic energy:
$$W=\int_{\mathcal L} \vec F \cdot d \vec x = \Delta E_k$$
The $\vec F$ in the above equation is the net force. This is a very important point.
Let's model our box as a point mass. At the initial time, the box is still ($E_k^i=0$), and at the final time, it is again still ($E_k^f=0$). So,
$$W=\Delta E_k=E_k^f-E_k^i=0$$
Total work is $0$.
But, you can also ask yourself what is the work done by a single one of the two forces. The work done by gravity is
$$W_g=m\int_{\mathcal L}\vec g \cdot d \vec x =- m g \Delta z$$
where $\Delta z$ is the vertical displacement.
Or you could compute the work $W_a$ done by your arm, which is lifting the box. Since total work $W$ is $0$, we have
$$W_a=-W_g=mg\Delta z$$
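A tiny numerical restatement of the bookkeeping above, with made-up numbers ($m=1\,\mathrm{kg}$, $\Delta z = 1\,\mathrm{m}$, $g = 9.81\,\mathrm{m/s^2}$):

```python
m, g, dz = 1.0, 9.81, 1.0    # illustrative values

W_gravity = -m * g * dz      # gravity opposes the upward displacement
W_arm = m * g * dz           # the arm does positive work
W_total = W_gravity + W_arm  # net work = Delta E_k = 0 (box at rest before and after)
print(W_arm, W_total)
```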
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/295245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 3
} |
Why do excited states decay if they are eigenstates of Hamiltonian and should not change in time? Quantum mechanics says that if a system is in an eigenstate of the Hamiltonian, then the state ket representing the system will not evolve with time. So if the electron is in, say, the first excited state then why does it change its state and relax to the ground state (since it was in a Hamiltonian eigenstate it should not change with time)?
| The hydrogen atom in an excited state is not really in an energy eigenstate.
There are two ways of looking at it. One way is to recognize that the atom is not isolated. It is always coupled to the electromagnetic field. Even if field itself is in the ground state, there are "zero-point" fluctuations in the field amplitude. Thus, the atom is always feeling the influence of an external field. The zero-point fluctuations have components at all frequencies, including the atomic transition frequency. So the spontaneous decay of an excited atom can be thought of as stimulated emission due to the zero-point fluctuations.
The second way of looking at it is to take the system of interest to be the atom and the electromagnetic field. In this case, the state with no excitation of the field, and the atom in an excited state is not an energy eigenstate. The total wave function amplitude will start entirely atomic, but evolve to include field excitation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/295365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50",
"answer_count": 3,
"answer_id": 2
} |
What is "quantum" in topological insulators? When I'm looking at descriptions of topological insulators, (non interacting just in case anybody ascribes interactions), I'm essentially looking at single particle quantum mechanics on a lattice.
What made quantum mechanics special to me was entanglement and measurement. If I were to force myself to only look at observables statistically (only expectation values) I would only be left with entanglement, hence non interacting systems(in this case the topological insulator) should have classical analogues, at least according to my reasoning.
Experimentally there are examples of photonic[1] and acoustic[2] topological insulators which support my arguments
If my understanding of quantum mechanics is correct, is there anything specifically special about what we understand of non-interacting Symmetry Protected Topological Phases (including the classification, etc) that would not be observed classically ? If it isn't what differentiates the materials called photonic and acoustic topological insulators from the quantum case?
[1] Khanikaev, Alexander B., et al. "Photonic Analogue of Two-dimensional Topological Insulators and Helical One-Way Edge Transport in Bi-Anisotropic Metamaterials." arXiv preprint arXiv:1204.5700 (2012).
[2] He, Cheng, et al. "Acoustic topological insulator and robust one-way sound transport." arXiv preprint arXiv:1512.03273 (2015).
| Perhaps what makes these systems quantum is the fact that they must be Fermionic, and one must use the Pauli exclusion principle (an inherently quantum phenomenon) in order to fill the Fermi sea and get the Fermi projection. If it weren't for quantum mechanics, you wouldn't have such a Fermi projection and hence no topology. So in this sense you are taking interactions into account in the most primitive way: filling the Fermi sea.
The classical systems you cite are mere analogues of these quantum systems.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/295604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Will there be translation + rotational motion? If a rod is on a frictionless plane, and a force is applied on one of it's end, will there be both, translation + rotation motion?
Also, if only a single force is applied on a body that does not pass through Centre of Mass, will it always produce rotation + translation?
| Suppose that you have a force $\vec F$ acting at a point $A$ on the rod as shown in the diagram below.
Add two forces of equal magnitude but opposite in direction whose line of action is parallel to the original force but with those forces acting at the centre of mass.
This results in a force $F$ (red) acting through the centre of mass which provides the translation acceleration of the centre of mass and two forces $\vec F$ and $-\vec F$ (grey) which constitute a clockwise couple of magnitude $Fd$ which will produce the angular acceleration of the rod.
So if the line of action of the applied force does not go through the centre of mass of the rod you will always get a translational acceleration and a rotational acceleration.
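A quick numerical sketch of the two resulting accelerations, for a uniform rod with the force applied perpendicular to the rod at one end (all numbers illustrative):

```python
def rod_response(F, m, L):
    # uniform rod, force F applied perpendicular to the rod at one end
    a = F / m                   # translational acceleration of the COM
    I = m * L**2 / 12.0         # moment of inertia about the COM
    alpha = F * (L / 2.0) / I   # angular acceleration from the torque about the COM
    return a, alpha

a, alpha = rod_response(F=2.0, m=1.0, L=1.0)
print(a, alpha)  # both nonzero: translation plus rotation
```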
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/295731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why are explanations of the Aharonov–Bohm effect based on trajectories? Every explanation of the Aharonov–Bohm effect that I have seen seems to justify the phase that shows up due to different paths that the particles (electrons) take to reach some point in space.
How does this make any sense in (standard = Copenhagen) Quantum Mechanics where there are no trajectories?
In the double slit experiment (standard = Copenhagen) Quantum Mechanics does not allow any trajectories, so why is it legitimate to reason based on trajectories in the case of the Aharonov–Bohm effect?
| The Bohm-Aharonov effect (well, really the double-slit experiment) does not concern one single path, but rather a sum over all paths, each being weighted by a phase (of unit modulus). That is what gives the interference pattern. The presence of a magnetic vector potential with non-zero circulation about a particular region - that region being the solenoid's interior in the case of the AB-effect - changes the respective phases of all those paths, and this shows up in the interference pattern.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/295827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Equivalent form of Bianchi identity in electromagnetism In electromagnetism, we can write the Bianchi identity in terms of the field strength tensor $F_{\mu \nu}$ as,
$$ \partial_{\lambda} F_{\mu \nu} + \partial_{\mu} F_{\nu \lambda}+ \partial_{\nu} F_{\lambda \mu} = 0,\qquad \mu,\nu,\lambda=0,1,2,3. \tag{1}$$
Now, in a textbook I am reading (Classical Covariant Fields - Burgess), the Bianchi identity is given as,
$$ \sum_{j,k=1}^3\epsilon_{ijk} \partial_j E_k + \partial_t B_i = 0,\qquad i=1,2,3.\tag{2a}$$
and
$$\sum_{i=1}^3\partial_i B_i = 0.\tag{2b} $$
However, I am struggling to see how these two forms are equivalent, i.e. starting from one equation (1), how can we arrive at the two others (2)?
| Let's start by contracting the first equation with the 4-dimensional totally antisymmetric tensor $\epsilon^{\alpha\lambda\mu\nu}$. Thanks to the properties of $\epsilon^{\alpha\lambda\mu\nu}$ we then have
$$ \epsilon^{\alpha\lambda\mu\nu} \partial_{\lambda} F_{\mu\nu} = 0 . $$
Next we separate the Faraday tensor into its temporal (0) and spatial (1,2,3) components. This gives us the electric and magnetic fields: $F_{00}=0$; $F_{k0}=-F_{0k}=E_k$; $F_{ij}=\epsilon_{ijk} B^k$. If we set $\alpha=0$ we get
$$ \epsilon^{0ijk} \partial_{i} F_{jk} = \epsilon^{ijk} \partial_{i} (\epsilon_{jkp} B^p) = 2 \partial_{i} B^i = 2 \nabla\cdot\mathbf{B} = 0 , $$
because $\epsilon^{ijk} \epsilon_{jkp}=2\delta_p^i$.
For $\alpha=i$ we have:
$$ \epsilon^{ij0k} \partial_{j} F_{0k}
+ \epsilon^{ijk0} \partial_{j} F_{k0} + \epsilon^{i0jk} \partial_{0} F_{jk} = 0 . $$
$$ - 2 \epsilon^{ijk} \partial_{j} E_{k} - \epsilon^{ijk} \partial_{0} (\epsilon_{jkp} B^p) = 0 . $$
$$ \epsilon^{ijk} \partial_{j} E_{k} + \partial_{0} B^i = 0 . $$
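The identity $\epsilon^{ijk} \epsilon_{jkp}=2\delta_p^i$ used above can be checked numerically; a small sketch:

```python
import numpy as np
from itertools import product

def levi_civita(n):
    # totally antisymmetric symbol with n indices
    eps = np.zeros((n,) * n)
    for idx in product(range(n), repeat=n):
        if len(set(idx)) < n:
            continue  # repeated index -> 0
        s = 1
        for i in range(n):
            for j in range(i + 1, n):
                if idx[i] > idx[j]:
                    s = -s  # count inversions to get the permutation sign
        eps[idx] = s
    return eps

eps = levi_civita(3)
contraction = np.einsum('ijk,jkp->ip', eps, eps)
print(contraction)  # 2 * identity
```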
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/296164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
What is the difference between classical correlation and quantum correlation? What is the difference between classical correlation and quantum correlation?
| It feels like the answer was that "any correlation that is not classical is quantum". That is correct, but it doesn't really explain where the quantum nature comes from and what it really is. In quantum mechanics, as opposed to classical mechanics, the outcome of an observable doesn't have to always be the same value. There is in fact a set of possible values, playing the role of the eigenvalues of the observable, which becomes a hermitian matrix. That is where the actual difference between classical and quantum lies.

In classical physics any observable has one and only one outcome; fluctuations around it may result from our devices not being accurate enough, errors, and so on, but finally, in classical physics there is always one single possible outcome for any question. This is not so in quantum mechanics, where, even with no perturbation on the system, we may one time get the result given by one eigenvalue, and another time by another eigenvalue. This is what it means that quantum mechanics is fundamentally probabilistic.

Obviously, if we take a system described by an eigenfunction of such an observable and separate it into two subsystems, there will be far more correlation when there are many possible outcomes, given by the eigenvalues of a matrix. Such a system is described by eigenfunctions that belong to a Hilbert space that can be split according to our division into two subsystems, but that is far from splitting the states describing the whole system in the same way.
So, extra correlation in quantum mechanics comes ultimately from the nature of observables in quantum mechanics which is quite different from the nature of observables in classical mechanics...
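A concrete way to see the gap is the CHSH combination: classical (local hidden variable) correlations obey $|S|\le 2$, while the quantum singlet reaches $2\sqrt{2}$. A minimal numerical sketch (measurement directions in the x–z plane; the angles are the standard illustrative choice):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    # spin observable along a unit vector in the x-z plane
    return np.cos(theta) * sz + np.sin(theta) * sx

# singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    # quantum correlation <sigma_a x sigma_b>; for the singlet this is -cos(a-b)
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(abs(S))  # 2*sqrt(2) ~ 2.828: above the classical (local) bound of 2
```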
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/296298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 2,
"answer_id": 1
} |
Meaning of Fock Space In a book, it says, Fock space is defined as the direct sum of all $n$-body Hilbert Space:
$$F=H^0\bigoplus H^1\bigoplus ... \bigoplus H^N$$
Does it mean that it is just "collecting"/"adding" all the states in each Hilbert space? I am learning 2nd quantization, that's why I put this in Physics instead of math.
| Suppose you have a system described by a Hilbert space $H$, for example a single particle. The Hilbert space of two non-interacting particles of the same type as that described by $H$ is simply the tensor product
$$H^2 := H \otimes H$$
More generally, for a system of $N$ particles as above, the Hilbert space is
$$H^N := \underbrace{H\otimes\cdots\otimes H}_{N\text{ times}},$$
with $H^0$ defined as $\mathbb C$ (i.e. the field underlying $H$).
In QFT there are operators that intertwine the different $H^N$s, that is, create and annihilate particles. Typical examples are the creation and annihilation operators $a^*$ and $a$. Instead of defining them in terms of their action on each pair of $H^N$ and $H^M$, one is allowed to give a "comprehensive" definition on the larger Hilbert space defined by taking the direct sum of all the multi-particle spaces, viz.
$$\Gamma(H):=\mathbb C\oplus H\oplus H^2\oplus\cdots\oplus H^N\oplus\cdots,$$
known as the Fock Hilbert space of $H$ and sometimes also denoted as $e^H$.
From a physical point of view, the general definition above of Fock space is immaterial. Identical particles are known to observe a definite (para)statistics that will reduce the actual Hilbert space (by symmetrisation/antisymmetrisation for the bosonic/fermionic case etc...).
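A minimal sketch of the "comprehensive" definition in practice: truncating the bosonic Fock space of a single mode at $N$ levels (the truncation is an assumption of this sketch) gives finite matrices for $a$ and $a^*$ that act across the different particle-number sectors:

```python
import numpy as np

N = 6  # keep Fock states |0>, ..., |N-1> of a single bosonic mode

# annihilation operator: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T  # creation operator

number = adag @ a            # number operator, diagonal 0..N-1
comm = a @ adag - adag @ a   # identity on the untruncated Fock space

print(np.diag(number))
print(np.diag(comm))  # 1 everywhere except the top state: a truncation artifact
```

The failure of $[a,a^*]=1$ in the top state is an artifact of keeping only finitely many sectors; on the full Fock space the commutator is exactly the identity.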
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/296391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 3,
"answer_id": 0
} |
Summer Winter cause As far as I understand, there are two main reasons for having lower temperatures in winter :
*
*shorter days, so the sun has less time to heat the earth
*smaller angle of incidence, so the energy from the sunlight is absorbed in a larger area on the ground
Which of these has a bigger effect? Does it depend on the latitude?
| The smaller angle of incidence should have the greater effect, otherwise during the summer the northernmost regions (in the northern hemisphere) would be hotter than the southern ones, and, believe me, northern Scotland in August is still colder than southern Italy in March. Or think about the Poles: they have six months of summer, but are still freezing cold
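One way to put rough numbers on this is the standard daily-mean top-of-atmosphere insolation formula, which folds the day-length and incidence-angle effects together (a back-of-envelope sketch that ignores the atmosphere; latitude and solstice declinations are illustrative):

```python
import math

def daily_insolation(lat_deg, decl_deg):
    # daily-mean top-of-atmosphere insolation, relative to the solar constant
    phi = math.radians(lat_deg)
    d = math.radians(decl_deg)
    x = max(-1.0, min(1.0, -math.tan(phi) * math.tan(d)))  # clamp for polar day/night
    h0 = math.acos(x)  # sunrise hour angle: h0/pi is the daylight fraction
    return (h0 * math.sin(phi) * math.sin(d)
            + math.cos(phi) * math.cos(d) * math.sin(h0)) / math.pi

summer = daily_insolation(50.0, +23.44)   # solstice declinations
winter = daily_insolation(50.0, -23.44)
print(summer, winter, summer / winter)    # the ratio is large at mid-latitudes
```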
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/296566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Why is bench pressing your bodyweight harder than doing a pushup? Why does bench pressing your own bodyweight feel so much harder than doing a push-up?
I have my own theories about the weight being distributed over multiple points (like in a push-up) but would just like to get a definite answer.
| While doing push-ups, you don't push your whole body weight. You have your toes on the ground, so your body weight is distributed between your feet and your arms.
While benching, you have no support from feet. You hold the whole weight with your arms, so benching your body weight is always tougher.
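The feet-as-a-pivot picture can be made quantitative with a torque balance about the toes (the body treated as a rigid rod; the distances below are illustrative guesses, not measured values):

```python
def hand_load_fraction(d_com, d_hands):
    # torque balance about the toes: F_hands * d_hands = W * d_com
    return d_com / d_hands

# distances from the toes to the centre of mass and to the hands (guesses)
frac = hand_load_fraction(d_com=1.0, d_hands=1.4)
print(frac)  # roughly 70% of body weight on the hands, not 100%
```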
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/296650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "49",
"answer_count": 6,
"answer_id": 5
} |
Covariant gamma matrices Covariant gamma matrices are defined by
$$\gamma_{\mu}=\eta_{\mu\nu}\gamma^{\nu}=\{\gamma^{0},-\gamma^{1},-\gamma^{2},-\gamma^{3}\}.$$
The gamma matrix $\gamma^{5}$ is defined by
$$\gamma^{5}\equiv i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}.$$
Is the covariant matrix $\gamma_{5}$ then defined by
$$\gamma_{5} = i\gamma_{0}(-\gamma_{1})(-\gamma_{2})(-\gamma_{3})?$$
| By the definition of the $\epsilon$ symbol (with the convention $\epsilon^{0123}=+1$, i.e. $\epsilon_{0123}=-1$ in the mostly-minus metric):
$$\gamma_{5} = -\frac{i}{4!}\,\epsilon_{\mu\nu\rho\sigma}\,\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{\sigma} = -\frac{i}{4!}\,\epsilon_{0123}\left(\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3} - \gamma^{0}\gamma^{1}\gamma^{3}\gamma^{2} + \dots + \gamma^{3}\gamma^{2}\gamma^{1}\gamma^{0}\right) = i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3} = \gamma^{5}$$
so $\gamma_{5}=\gamma^{5}$: the "5" is not a Lorentz index and is not lowered with the metric.
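This can be checked numerically in the Dirac representation (a sketch; the convention $\epsilon^{0123}=+1$, hence $\epsilon_{0123}=-1$ in the mostly-minus metric, is assumed):

```python
import numpy as np
from itertools import permutations

zero = np.zeros((2, 2))
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# gamma matrices in the Dirac representation
g = [np.block([[I2, zero], [zero, -I2]])]     # gamma^0
for s in (sx, sy, sz):                        # gamma^1, gamma^2, gamma^3
    g.append(np.block([[zero, s], [-s, zero]]))

gamma5 = 1j * g[0] @ g[1] @ g[2] @ g[3]

def perm_sign(p):
    # sign of a permutation via inversion count
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

# gamma_5 = -(i/4!) eps_{mu nu rho sigma} g^mu g^nu g^rho g^sigma,
# with eps_{0123} = -1 (so eps^{0123} = +1)
total = np.zeros((4, 4), dtype=complex)
for p in permutations(range(4)):
    eps_lower = -perm_sign(p)
    total += eps_lower * (g[p[0]] @ g[p[1]] @ g[p[2]] @ g[p[3]])
gamma5_from_eps = (-1j / 24) * total

print(np.allclose(gamma5, gamma5_from_eps))  # True
```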
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/296772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Electric field associated with moving charge I have recently started to learn about the electric field generated by a moving charge. I know that the electric field has two components; a velocity term and an acceeleration term. The following image is of the electric field generated by a charge that was moving at a constant velocity, and then suddenly stopped at x=0:
I don't understand what exactly is going on here. In other words, what is happening really close to the charge, in the region before the transition, and after the transition. How does this image relate to the velocity and acceleration compnents of the electric field?
| No such thing exists in reality... Electrons or charged atoms never move at constant velocity, and to stop one, one must apply an electric or magnetic field...
Also the electric field of an electron never changes regardless of how it is moving...
Isolated hypothetical cases are useless because one cannot prove them right or wrong... Reality is experimental physics, and new properties are discovered, not invented...
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/296904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 5,
"answer_id": 3
} |
2D time-independent Schrödinger Equation I'm considering the time-independent Schrödinger equation in two dimensions,
$$\frac{-\hbar^2}{2m}\left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right)\psi + U(x,y)\,\psi = E\,\psi \ \ .$$
Textbooks usually consider the case of a constant or zero potential $U$ within some boundaries. The way to solve the equation would then be to separate the variables, $\psi(x,y) = f(x)\cdot g(y)$.
$U$ being constant allows to separate the equation, e.g. putting all $x$ dependence on one side of the equation and all $y$ dependence on the other side.
Now, I am interested in the case where $U(x,y)$ is not (piecewise) constant, and also not only dependent on one single variable.
It seems to me that in this case separation of the variables does not necessarily work anymore. However, in how far is this generally true? Are there classes of potentials, for which the Schrödinger equation still is separable?
Intuitively I thought that potentials like
*
*$U(x,y) = v(x) + w(y)$ or
*$U(x,y) = v(x)\cdot w(y)$
should still be somehow special in the sence that also the solution $\psi$ would be separable in one way or the other. For the case of additive separability (case 1.) of the potential (like the harmonic oscillator) this seems to be the case, while for the second case not, although they would share the same symmetry.
Is there some general law behind that, e.g. additively separable potentials give separable solutions, multiplicatively separable potentials don't? Is my intuition wrong?
| The following article may be relevant: L. P. Eisenhart, "Enumeration of potentials for which one-particle Schroedinger equations are separable", Phys. Rev. 74, 87-89 (1948). I read it a few years ago, but I don't have immediate access to it right now. I remember it contained a long list of potentials where separation of variables is possible.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/297044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is the electron self-energy gauge dependent? Let $\psi(x)$ be the field of the electron. Its Fourier transformed two-point function reads
$$
\langle\psi\bar\psi\rangle=\frac{1}{\not p-m-\Sigma(\not p)}.
$$
If we calculate $\Sigma(\not p)$, we observe that it depends on the gauge parameter $\xi$, which in principle is not a problem because $\Sigma(\not p)$ is not observable by itself.
But if we think of a gauge transformation as taking $\psi\to\mathrm e^{i\alpha(x)}\psi(x)$, then the two-point function should satisfy
$$
\langle\psi\bar\psi\rangle\to \langle\psi\mathrm e^{i\alpha(x)}\mathrm e^{-i\alpha(x)}\bar\psi\rangle=\langle\psi\bar\psi\rangle
$$
Therefore, one would naïvely expect $\Sigma(\not p)$ to be gauge invariant, and therefore it shouldn't depend on $\xi$. What is the solution to this contradiction? Why do our expectations fail?
| The propagator $S(p)$ is the Fourier transform of the two-point function
$S(x,y)=\langle\psi(x)\bar\psi(y)\rangle$,
$$
S(p) = \int d^4x \, e^{\,ip\cdot(x-y)} \, S(x,y)\, .
$$
Note that because of translation invariance $S(x,y)$ depends only on $x-y$ and not on $x+y$. Clearly, the two-point function is non-local and not gauge invariant.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/297133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 0
} |
Higher-Order Derivatives in the Lagrangian I am trying to derive the equations of motion for a Lagrangian which depends on $(q, \dot{q}, \ddot{q}).$ I proceed by the typical route via Hamilton's Principle, $\delta S = 0$ by effecting a variation $\epsilon \eta$ on the path with $\eta$ smooth and vanishing on the endpoints. After some integrating by parts and vanishing of surface terms, I arrive at (to first order in $\epsilon$):
$$\delta S = \int\left[\eta\frac{\partial L}{\partial q} - \eta\frac{\mathrm{d}}{\mathrm{d} t}\left(\frac{\partial L}{\partial \dot{q}}\right) + \eta\frac{\mathrm{d}^2}{\mathrm{d}t^2}\left(\frac{\partial L}{\partial \ddot{q}}\right) + \frac{\mathrm{d}^2}{\mathrm{d}t^2}\left(\frac{\partial L}{\partial \ddot{q}} \eta \right)\right]\mathrm{d} t.$$
It is clear to me that either the last term in the integral above should vanish, or else I made an error and it ought not to appear at all. If it is the former case, by what argument does this term vanish?
| You have to impose that
$\eta(t_0)=\eta(t_1)=\dot{\eta}(t_0)=\dot{\eta}(t_1)=0$ where $t_0$ and $t_1$ are the endpoints of the time interval over which you are integrating. Then, the last term is:
\begin{equation}
\int_{t_0}^{t_1}\frac{d^2}{dt^2}
\left(\frac{\partial L}{\partial\ddot{q}}\eta\right)dt =
\left[\frac{d}{dt}\left(\frac{\partial L}{\partial\ddot{q}}\eta\right)\right]_{t_0}^{t_1} =
\left[\eta\frac{d}{dt}\left(\frac{\partial L}{\partial\ddot{q}}\right)\right]_{t_0}^{t_1}+
\left[\dot{\eta}\frac{\partial L}{\partial\ddot{q}}\right]_{t_0}^{t_1} = 0
\end{equation}
The Euler-Lagrange equation is then:
\begin{equation}
\frac{\partial L}{\partial q} -
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right) +
\frac{d^2}{dt^2}\left(\frac{\partial L}{\partial \ddot{q}}\right) =
0
\end{equation}
As a justification for the conditions over $\eta$ and its derivative at the endpoints observe that, in general, $\partial L/\partial\ddot{q}$ may depend on $\ddot{q}$, so the equation of motion will be of fourth order. To obtain a solution, four conditions will be needed. In the case of $L$ depending only on $q$ and $\dot{q}$, for a second order equation we needed two conditions: fixing $q(t_0)$ and $q(t_1)$. In the fourth order case, it is reasonable to fix $q(t_0)$, $q(t_1)$, $\dot{q}(t_0)$ and $\dot{q}(t_1)$.
Therefore, as $\delta q=\epsilon\eta$ and $\delta \dot{q}=\epsilon\dot{\eta}$ we have that $\eta(t_0)=\eta(t_1)=\dot{\eta}(t_0)=\dot{\eta}(t_1)=0$.
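As a cross-check of the Euler-Lagrange equation above, SymPy's `euler_equations` handles Lagrangians with higher derivatives. A sketch with a toy higher-derivative Lagrangian (the specific $L$ below is an illustrative assumption, not from the question):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, k, e = sp.symbols('m k epsilon', positive=True)
q = sp.Function('q')(t)

# toy Lagrangian with a second-derivative term (illustrative choice)
L = sp.Rational(1, 2) * m * q.diff(t)**2 \
    - sp.Rational(1, 2) * k * q**2 \
    + sp.Rational(1, 2) * e * q.diff(t, 2)**2

eq, = euler_equations(L, [q], [t])

# expected from dL/dq - d/dt(dL/dq') + d^2/dt^2(dL/dq'') = 0
expected = -k * q - m * q.diff(t, 2) + e * q.diff(t, 4)
print(sp.simplify(eq.lhs - expected))  # 0 if the conventions match
```

Note that the resulting equation is fourth order, which is exactly why the four endpoint conditions on $q$ and $\dot q$ are needed.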
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/297240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Can a telescope look into the future? If a telescope can see the past, can it look into the opposite direction and see the future?
I suppose I am trying to put time into a single line. (timeline) with a beginning and end, and we are in the middle.
If I can look out in any direction and see the photons that are billions of years old. That would mean the past is surrounding me in every direction. I'm in the present. It seems like that puts me in the center.
| When you see or hear anything you are perceiving the past. Any sound, any lightwave, takes a finite amount of time to travel from its source to its receiver. The telescope is just a fancy version, for light waves, of a hearing horn used by deaf people before electronics.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/297634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How can the entropy of the universe increase when undergoing an irreversible process? So there’s something about entropy that I just can’t wrap my head around. So: we saw in class that when undergoing a reversible process, the entropy change of the universe (so of the system + the environment) is 0. And we saw that to calculate the entropy change of a system between two states, we forget about the original process, then we design a reversible path that links the $2$ states, and we calculate the entropy change along that path (and this entropy change of the system can be positive or negative). And then, if we want to calculate the entropy change of the universe, we imagine that our reversible process is driven by a Heat Engine or Heat Pump, and we calculate the change of entropy of the environment, which we then add to the change of entropy of the system.
==> Now, what I don’t understand is the following: when undergoing an irreversible process, the entropy of the universe increases. How is that possible? Since the whole point of calculating entropy is forgetting about the original process and designing a reversible path, and we saw at the very beginning that along a reversible path the entropy change of the universe is equal to $0$.
I know there’s something that I’ve misunderstood somewhere, but I don’t know what and this thing has been driving me mad for some time.
I hope someone will have the time to answer this!
| Consider an irreversible process between states a and b,
We write $dU = dQ_{irr} - dW_{irr}$
For a reversible process between a and b,
$dU = TdS - dW_{rev}$
Since dU is the same for both, we have
$dS = \frac{dQ_{irr}}{T} + \frac{(dW_{rev} - dW_{irr})}{T}$
It is obvious that the second term is non-negative at all times.
Conclusion, for any thermodynamic process
$dS \ge \frac{dQ}{T}$
Hence, the change in entropy of the universe is strictly greater than zero.
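As a concrete illustration (my own numbers, not part of the answer above), consider the free expansion of an ideal gas: the system's entropy change is computed along a reversible isothermal path between the same two states, while the actual surroundings exchanged no heat at all, so the entropy of the universe still comes out positive:

```python
import math

R = 8.314  # J/(mol K)

# Free expansion of 1 mol of ideal gas into double the volume: the real
# process has Q = 0 and W = 0, so the surroundings are untouched. Along
# a reversible isothermal path between the same two states,
# Q_rev = n*R*T*ln(2), so dS_system = n*R*ln(2).
n = 1.0
dS_system = n * R * math.log(2)
dS_environment = 0.0   # nothing happened to the surroundings (Q = 0)
dS_universe = dS_system + dS_environment
print(dS_universe)     # ~5.76 J/K > 0
```

The reversible path is used only to evaluate the system's entropy change; the environment's change is evaluated from what actually happened to it, which is why the two need not cancel.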
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/297974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Would an Electron Gun create thrust in space? Using solar panels, and the resulting electrical energy, could an electron gun provide a suitable level of renewable thrust, better than an Ion thruster? If it would even create thrust at all that is.
| There is another aspect to this: the level of energy imparted to the electrons, and the quantum state they are in. There will come a time when we have the ability to impart near-relativistic velocities to particles and to change the way they behave. Bunching electrons into large Bose-Einstein condensates will enable the macroscopic repulsion of like charges to create packets of thrust more efficiently. However, you still can't get around the fact that you are limited by the initial rest mass of the electron; although you can get some thrust at relativistic energies, it still requires vast amounts of power to get there. The only advantage of this system is that light is everywhere, so your "fuel" source is virtually inexhaustible, albeit orders of magnitude less effective as you move away from strong sources of photons, i.e. stars. The better solution is to utilize matter, either as particles or ions, with a wide range of usable feedstocks.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/298167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
If time dilation can slow time down, is there away to speed time up? Okay, I know the title is really confusing but I couldn't find words to explain it sorry. Pretty much what I mean is, if I can get in a lightspeed spaceship moving away from earth, time slows down for me. So one year for me will be 20 earth years or what ever. But is there away were I can reverse this? Where if I get on that same craft and travel for a year but it only will be a few months on earth? I know this is just a random thought.
| Being in a gravitational field is equivalent to being in an accelerated frame. So we may accelerate towards the Earth (in a zero-g surrounding, so that the acceleration causes us to experience gravity pulling us back in the direction opposite to the one we are accelerating towards), so that while we experience the flow of time normally, time in front of us runs slow. Therefore the time on Earth will flow slowly, causing a few months to pass there while we travel for years in our time frame.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/298258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
} |
Concept of strain as applied to time What if we were to measure gravitational force as a function of strain in time $S_t$ as defined by $S_t=\frac{T_\mathrm{ref}-T_\mathrm{local}}{T_\mathrm{ref}}$ where $T_\mathrm{ref}$ is the rate of time at a massless reference clock at infinite distance from mass and $T_\mathrm{local}$ would be the rate of time in the local gravitational field. This would be the equivalent of strain measurement of a solid specimen under tension where we are looking at % elongation.
Has anyone done any work in this direction, that is looking at changing the units of measure of distance from meters from a singularity to a unit of the warp of spacetime for the purpose of orbital mechanics calculations?
| It's an interesting question. Schwarzschild radii (SR) could be used as the unit you are looking for, since space and time warp relative to that and not to distance as you are suggesting.
One SR away from a black hole will have the same time-dilating effects and gravitational potential no matter what the size of the black hole. However, the SR changes depending on the size of the BH/mass you are calculating against.
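To make the question's strain definition concrete, here is a small sketch (my own, with textbook constants) that evaluates $S_t = 1 - T_\mathrm{local}/T_\mathrm{ref}$ for a static observer in the Schwarzschild exterior, where $T_\mathrm{local}/T_\mathrm{ref} = \sqrt{1 - r_s/r}$; it also illustrates the answer's point that the strain at a fixed multiple of the Schwarzschild radius is independent of the mass:

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s

def time_strain(mass_kg, r_m):
    """Strain S_t = (T_ref - T_local)/T_ref for a static observer at
    radius r in the Schwarzschild exterior, where the clock-rate ratio
    is T_local/T_ref = sqrt(1 - r_s/r)."""
    r_s = 2 * G * mass_kg / c**2   # Schwarzschild radius
    return 1 - math.sqrt(1 - r_s / r_m)

# Earth's surface: the strain is tiny (~7e-10)
print(time_strain(5.97e24, 6.371e6))
```

Evaluating `time_strain` at, say, three Schwarzschild radii gives the same value for a stellar-mass or a supermassive black hole, which is the scale-invariance noted above.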
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/298389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why center of mass formula is $m_1 r_1 = m_2 r_2$ for a two particles system? In this website, it states that if we have a two particles system and measure from centre of mass, then the following equation holds:
$$m_1 r_1 = m_2 r_2$$
where $m_1, m_2$ are masses of the two objects and $r_1, r_2$ are distances from centre of mass to the two objects.
Question: How to obtain the above equation?
Centre of mass is defined to be the weighted sum of all moments. So it is not surprising that centre of mass can be expressed as follows:
$$x_{cm}= \frac{m_1 x_1 + m_2 x_2}{m_1 + m_2}$$
where $x_1, x_2$ are distances from a reference point to the two masses.
However, I have no idea how to obtain $m_1 r_1 = m_2 r_2$. It seems to me that the ratio of the masses equals the inverse ratio of the distances.
| I am not so sure about that expression; a definition of the CM system is that $$m_1 r_1^{cm}+m_2 r_2^{cm}=0.$$So $$m_1 r_1^{cm}=-m_2 r_2^{cm}$$ holds. This can also be shown in the laboratory system with the $x_{cm}$ definition (which directly follows from the first eq. I wrote).
Looking at the picture on the page, I would say they use $r_i$ as positive distances from the pivot point and not as coordinates. If you use $r_i$ as coordinates it has to be $$m_1 r_1=-m_2 r_2 \Leftrightarrow m_1=-m_2 r_2/r_1$$ and one $r_i$ will be negative so $m_i$ stays positive. If you use distances, the minus sign has to drop out to ensure positive masses.
To be honest, that page is at best incomplete; they introduce $r_i$ as "where $r_1$ and $r_2$ locate the masses" without drawing them into their figure and without specifying what they mean by "locate".
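A quick numerical check of the sign discussion above, with made-up masses and positions:

```python
def center_of_mass(m1, x1, m2, x2):
    return (m1 * x1 + m2 * x2) / (m1 + m2)

m1, x1 = 2.0, 1.0   # made-up masses and lab-frame positions
m2, x2 = 6.0, 5.0

x_cm = center_of_mass(m1, x1, m2, x2)

# Coordinates relative to the center of mass:
r1 = x1 - x_cm
r2 = x2 - x_cm

print(m1 * r1, m2 * r2)             # -6.0 6.0: signed coords give m1*r1 = -m2*r2
print(m1 * abs(r1), m2 * abs(r2))   # 6.0 6.0: distances give m1*r1 = m2*r2
```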
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/298478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
} |
Can a rocket with no forces acting upon it except a single push force with constant acceleration keep accelerating forever? I was wondering why a rocket with no opposing forces acting upon it couldn't keep accelerating given that it has the potential to release enough energy to maintain its acceleration at all costs. I have heard that any object with mass cannot reach the speed of light. Is that true? If so, why?
| To accelerate a body in space one needs thrust. Since nothing is faster than electromagnetic radiation, all you can do is accelerate your rocket with light. This is indeed possible: sailing on the light pressure from the Sun is technically feasible. Using a light projector you can also accelerate a rocket.
But even with
the potential to release enough energy to maintain its acceleration
you would not be able to reach a velocity greater than the speed of light.
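A sketch of why constant thrust never yields $v \ge c$ (my own numbers, assuming a steady 1 g): in special relativity the launch-frame velocity after coordinate time $t$ under constant proper acceleration $a$ is $v = at/\sqrt{1+(at/c)^2}$, which approaches $c$ asymptotically but never reaches it:

```python
import math

c = 2.998e8   # m/s
a = 9.81      # m/s^2: constant proper acceleration of 1 g

def coord_velocity(t):
    """Launch-frame velocity after coordinate time t under constant
    proper acceleration a (special relativity)."""
    return a * t / math.sqrt(1 + (a * t / c) ** 2)

year = 3.156e7  # seconds
for years in (1, 10, 100):
    v = coord_velocity(years * year)
    print(years, v / c)   # fraction of c; always strictly below 1
```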
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/298685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Without the Michelson-Morley experiment, is there any other reason to think speed of light is the universal speed limit? If the Michelson-Morley experiment hadn't been conducted, are there any other reasons to think, from the experimental evidence available at that time, that Einstein could think of the Special Theory of Relativity?
Is there any other way to think why the speed of light is the ultimate speed limit?
| The strongest current experimental evidence is the standard model of particle physics: the beautiful symmetries of SU(3)×SU(2)×U(1), with the plethora of data that produced them, would fall on their face if c were not the limiting velocity, i.e. if special relativity did not hold.
Every single mass measurement in the particle data book comes from using energy and momentum conservation equations based on the algebra of the four-vectors of special relativity, and thousands upon thousands of measured events.
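The four-vector algebra mentioned can be sketched numerically: the invariant mass of a set of decay products is $m^2 = (\sum E)^2 - |\sum \vec p|^2$ in units with $c = 1$. The four-vectors below are made up for illustration, not real measurements:

```python
import math

def invariant_mass(particles):
    """particles: list of (E, px, py, pz) four-vectors in units where c = 1.
    Returns the invariant mass of the summed four-vector."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Two back-to-back photons (each massless) from a decay at rest:
gamma1 = (0.5, 0.5, 0.0, 0.0)
gamma2 = (0.5, -0.5, 0.0, 0.0)
print(invariant_mass([gamma1, gamma2]))  # 1.0: the parent's rest mass
```

The same reconstructed mass would come out in any boosted frame, which is the Lorentz invariance the answer is appealing to.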
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/298907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47",
"answer_count": 10,
"answer_id": 3
} |
Confusion about probability of finding a particle The wave representation of a particle is said to be $\psi(x,t)=A\exp\left[i(kx-\omega t)\right]$.
The probability of the particle being found at position x at time t is calculated to be $\left|\psi\right|^2=\psi \psi^*$, which is $A^2(\cos^2+\sin^2)$. And since $\cos^2+\sin^2=1$ regardless of position and time, does that mean the probability is always $A^2$? I think I am doing something wrong but I don't know what!
| It is not true that the probability of finding the particle at $x$ is $|\psi|^2$ (think of it as if you have a continuum of possible values, what is the probability of obtaining an specific value?). As it has been pointed out, $|\psi|^2$ can be interpreted as a probability density, so the probability of finding a particle between $x$ and $x + \mathrm dx$ is $|\psi(x)|^2\mathrm dx$. Integrating you can compute that probability for a specific interval.
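Since a plane wave is not normalizable, here is a sketch of the answer's point using a normalized Gaussian packet instead (my own example): the probability of finding the particle in an interval is the integral of the density $|\psi|^2$ over that interval, not the density itself:

```python
import math

# A plane wave is not normalizable, so illustrate with a normalized
# Gaussian packet psi(x) = (pi * s**2)**(-1/4) * exp(-x**2 / (2 s**2)).
s = 1.0

def density(x):
    """Probability density |psi(x)|**2 for the Gaussian packet above."""
    return math.exp(-x**2 / s**2) / (math.sqrt(math.pi) * s)

def probability(a, b, n=10000):
    """P(a <= x <= b) = integral of |psi|**2 dx (trapezoid rule)."""
    h = (b - a) / n
    total = 0.5 * (density(a) + density(b))
    total += sum(density(a + i * h) for i in range(1, n))
    return total * h

print(probability(-10, 10))   # ~1: total probability over (almost) all space
print(probability(-1, 1))     # ~0.843 (= erf(1)) for s = 1
```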
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/299347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Electron emission from insulators? can electron emission happen from insulators? I mean can the electrons in an insulator jump into the vacuum around them when sufficiently large electric fields are applied like in conductors?
| Yes it can. In an insulator, electrons are bound to atoms and do not form an electron gas as in metals. With electromagnetic radiation of high intensity, electrons can be emitted from the atom. A photon with frequency $\nu$ carries an energy $E=h \nu$. An electron in the outer atomic shell will be emitted if the photon energy is greater than the electrostatic binding energy $E_s = \left|- \frac{Ze^2}{4 \pi \epsilon r} + \sum_{j \in E} \frac{e^2}{4 \pi \epsilon r_j}\right|$, with the set of all other electrons $E$, the electron-nucleus distance $r$, and the electron-nucleus distances $r_j$ for all other electrons.
This formula is classical; for a quantum treatment you must know the quantum numbers of the electron to compute the energy $E'$.
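A small numeric aside (my own numbers, not from the answer): comparing the photon energy $E = h\nu = hc/\lambda$ against a hypothetical binding energy of about 5 eV shows why emission from an insulator typically needs ultraviolet light:

```python
h = 6.626e-34   # Planck constant, J s
e = 1.602e-19   # J per eV

def photon_energy_eV(wavelength_m):
    """E = h * nu = h * c / lambda, expressed in electron-volts."""
    c = 2.998e8
    return h * c / wavelength_m / e

# A hypothetical binding energy of ~5 eV (a typical insulator band-gap
# scale) requires ultraviolet light:
for lam in (600e-9, 200e-9, 100e-9):   # red, UV, deep UV
    print(lam, photon_energy_eV(lam), photon_energy_eV(lam) > 5.0)
```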
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/299571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why are hollow conductors used for signals of a certain frequency? So this question is really about skin depth. I was introduced to the skin depth through a simple model of the polarisability of a metal (a simple equation for electrons in a metal with a damping term). In this calculation, using the dilute form of the Clausius–Mossotti relation, the permittivity of the metal was found. This had a complex form, and thus so did the refractive index.
As a result the wave vector of any electromagnetic signal is complex, and the complex part attenuates the wave and represents energy loss. I understand this part. However, I was then told that this is the reason that sometimes hollow conductors are preferred (like a hollow copper tube), because beyond the skin depth the field is rapidly attenuated. However, surely the attenuation only takes place in the direction of the wave vector k? How would a hollow tube carry a signal if the k vector is along its long axis? Surely it gets attenuated by the time it reaches the end?
| For microwaves the skin depth is so small that an ordinary wire does not conduct efficiently, so you have to design the hollow conductor and use it as a waveguide. For lower frequencies the skin depth causes the conduction to be near the surface. See the answer and comments already given at Does electricity flow on the surface of a wire or in the interior?
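For orientation, the good-conductor skin depth $\delta = \sqrt{2/(\mu\sigma\omega)}$ can be evaluated for copper (a standard formula; the frequencies below are only illustrative):

```python
import math

mu0 = 4 * math.pi * 1e-7   # H/m
sigma_cu = 5.8e7           # S/m, conductivity of copper

def skin_depth(freq_hz):
    """Good-conductor skin depth: delta = sqrt(2 / (mu * sigma * omega))."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 / (mu0 * sigma_cu * omega))

for f in (50, 1e6, 10e9):  # mains, ~AM radio, microwave
    print(f, skin_depth(f))
```

At 50 Hz the depth is on the order of a centimetre, while at 10 GHz it is well under a micron, which is why the interior of a conductor is wasted at microwave frequencies.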
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/299662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Are volume of solids & liquid totally independent of pressure In my book it is written as
GASEOUS STATE: The state is characterized by sensitivity of volume change with change of pressure and temperature.
Now my doubt is: are the volumes of solids and liquids totally independent of pressure?
I searched on the internet but did not find a proper answer.
| No, the volume of liquids and solids does depend on pressure. However, the volume of gases is drastically more sensitive to pressure. (This property is known as compressibility.)
In most contexts, the dependence of volume on pressure for liquids and especially solids is considered negligible (that is, most liquids and solids are approximately incompressible). In mathematical terms, this means $\frac{dV}{dP} \approx 0$ (the rate of change of volume with respect to pressure is essentially zero).
As before stated, the volume of gases depends significantly on pressure. In fact, in the ideal gas model (a good approximation for most gases), the two "state variables" are inversely proportional: $V \propto \frac{1}{P}$ (Boyle's Law).
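To put numbers on "drastically more sensitive" (my own illustration, with a textbook compressibility value): the fractional volume change is $\mathrm{d}V/V = -\kappa\,\mathrm{d}P$, with $\kappa \approx 4.6\times10^{-10}\,\mathrm{Pa^{-1}}$ for water but $\kappa = 1/P$ for an ideal gas:

```python
# Fractional volume change dV/V = -kappa * dP, with compressibility kappa.
kappa_water = 4.6e-10   # 1/Pa, isothermal compressibility of water (~room temp)
dP = 1.0e5              # ~1 atm pressure increase, in Pa

# Liquid: tiny change, about -5e-5 (a 0.005 % shrink)
dV_over_V_water = -kappa_water * dP
print(dV_over_V_water)

# Ideal gas at ~1 atm: kappa = 1/P, so the same dP is an order-1 change
P = 1.0e5
dV_over_V_gas = -(1 / P) * dP   # linearized estimate
print(dV_over_V_gas)
```

The gas responds more than four orders of magnitude more strongly than the liquid to the same pressure step.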
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/299824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why is Sachdev-Ye-Kitaev (SYK) Model model important? In the past one or two years, there are a lot of papers about the Sachdev-Ye-Kitaev Model (SYK) model, which I think is an example of $\mathrm{AdS}_2/\mathrm{CFT}_1$ correspondence. Why is this model important?
| The SYK model provides us with the simplest example of holography, which is much easier to study than the canonical $AdS_5 \times S^5$ case due to its much lower dimensionality. This was Kitaev's initial motivation for studying the model. Here is a set of 2 lectures in which he briefly discusses it.
Because of its simplicity, it is easy to consider the thermal and chaotic behavior of this theory and its gravity dual. Look at the following papers for the details:
Maldacena, Stanford "Comments on the Sachdev-Ye-Kitaev Model". It describes the correspondence in detail.
Maldacena, Stanford, Yang "Conformal Symmetry and its Breaking in Two Dimensional Nearly Anti-de-Sitter Space". This paper describes the gravity side of the correspondence. In particular, modified gravity on the N(early)AdS space on which the bulk theory must live, because usual GR is trivial in 2D.
Shenker, Stanford "Stringy Effects in Scrambling". Here the stringy effects which must be taken into account in addition to field-theoretical gravity in the bulk are discussed.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/299959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45",
"answer_count": 3,
"answer_id": 2
} |
Why wouldn't the part of the Earth facing the Sun a half year before be facing away from it now at noon? The Earth takes 24 hours to spin around its own axis and 365 days to spin around the Sun. So in approximately half a year the Earth will have spun around its axis 182.5 times.
Now take a look at the following picture:
Assume that the Earth is in the position on the left on, say, the 1st of Jan. 2017, and in the position on the right half a year later. The Earth will be roughly on the opposite side of the Sun given that half a year has passed, is that correct? If at noon, half a year earlier, that part of the Earth was facing the Sun, then why wouldn't the opposite part of the Earth be facing the Sun now, after 182 complete rotations and the Earth being on the opposite side of the Sun? We would expect noon to occur on the dark side instead of the lit side.
Shouldn't this cause the AM/PM to switch, since the rotations made are consistent with 182 passing days? Assuming it's noon on both dates, why does the Earth face the Sun at the same time on both sides of the Sun?
| Our clocks are set so that 24 hours is the time for the Sun to appear in the same part of the sky. What this means in terms of the Earth's orbit and rotation is that the Earth does slightly more than a complete rotation in 24 hours.
Let's say that your picture is drawn from the perspective above Earth's north pole. Earth rotates and orbits counterclockwise. Draw a line on the right-hand side Earth from the point closest to the sun (where it is noon) towards the sun. After 24 hours, the Earth will have moved about 1/365 of the way around its orbit, and the line will have rotated just a bit more than 360 degrees so that it is pointing at the sun again.
The time where the line from Earth is parallel to the original line before rotation is called a sidereal day, and is 23 hours and 56 minutes long.
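The 23 h 56 min figure follows from the ratio of solar days to full rotations; a minimal sketch:

```python
# In one solar day Earth must rotate 360 degrees plus the extra ~1/365.24
# of a turn it advanced along its orbit, so in a year it completes one
# more full rotation than there are solar days. Hence:
solar_day_s = 86400
days_per_year = 365.2422                 # solar days per year
sidereal_day_s = solar_day_s * days_per_year / (days_per_year + 1)

m, s = divmod(sidereal_day_s, 60)
h, m = divmod(m, 60)
print(int(h), int(m), round(s))          # 23 56 4
```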
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/300146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40",
"answer_count": 3,
"answer_id": 1
} |
Different frictional forces- damped harmonic motion What classifies as damped harmonic motion? All of the books/Web pages I have looked at about damped harmonic motion have used a damping force that is proportional in magnitude to the velocity, even if it is not appropriate for a particular problem. For example, the equation is generally derived for a mass-on-a-spring situation with friction between the mass and the floor; however, this friction should be constant and independent of the velocity.
I tried to find a solution myself to the constant-friction problem (although I had to restrict myself to considering only half a cycle, because otherwise the force would be in the wrong direction). I am not too familiar with solving differential equations (although this is quite a simple one!), and the solution I got to
$m\ddot x +kx +F=0$
Was
$x=Acos (\omega t +\phi ) -\frac {F}{k} $
Which is clearly wrong as then the amplitude isn't decaying.
But I guess my main question is: is damped harmonic motion only for resistive forces proportional to the velocity?
| As far as I understand differential equations and simple harmonic motion, the reason your solution doesn't display decaying amplitude is simple: you assumed that the force $F$ is constant for all time $t$, and a spring force with an additional constant force simply doesn't have decaying amplitude (you can think of the constant force as shifting the point of zero net force of the spring). The amplitude doesn't change, and the value of $x(t)$ represents the distance from the point at which the spring force cancels the constant force.
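For contrast with the constant-force case discussed above, here is a numerical sketch (my own, not from the answer) of what happens when the friction force instead flips sign with the velocity, as dry friction actually does: successive same-side peaks then drop by a constant amount $4F/k$ per cycle, a linear rather than exponential decay:

```python
import math

m, k, F = 1.0, 1.0, 0.05   # mass, spring constant, friction magnitude
x, v = 1.0, 0.0            # released from rest at x = 1
dt = 1e-3

peaks = []
prev_v = 0.0
for step in range(int(40 / dt)):
    # dry friction always opposes the velocity
    if v != 0:
        a = (-k * x - F * math.copysign(1.0, v)) / m
    else:
        a = -k * x / m
    v += a * dt            # semi-implicit Euler step
    x += v * dt
    if prev_v > 0 >= v:    # turning point at a positive-x extreme
        peaks.append(x)
    prev_v = v

# Successive positive peaks drop by a constant 4F/k = 0.2 per cycle:
print([round(p, 3) for p in peaks[:4]])
```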
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/300554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Tunnel Effect and wave in classical mechanics and in quantum mechanics My question is: from the point of view of classical mechanics, when a wave encounters a barrier, is it totally transmitted through the barrier, while in quantum mechanics there is also a part of the wave that is reflected? Or is it the opposite? If I calculate the transmission and reflection coefficients in the classically accessible and inaccessible regions I conclude yes. However, I understood that the tunneling effect is the phenomenon whereby, in quantum mechanics, the wave can pass through the barrier as if there were a tunnel.
| If you have a particle wave impinging on a finite width potential barrier, you will always have a quantum mechanical reflection and transmission, for a particle energy below or above the barrier height. Classically, there is only a transmission for an energy above the barrier height and only a reflection for an energy below the barrier height.
The quantum mechanical wave transmission coefficient for an energy below the barrier height gives the probability that the particle will be transmitted. This effect is called tunneling. The reflection coefficient gives the probability that it will be reflected. Quantum mechanically, there is also a probability for reflection even when the particle energy is above the potential energy barrier height.
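The below-barrier transmission probability for a rectangular barrier is a standard textbook result, $T = \left[1 + \frac{V_0^2 \sinh^2(\kappa a)}{4E(V_0 - E)}\right]^{-1}$ with $\kappa = \sqrt{2m(V_0 - E)}/\hbar$; a sketch in natural units ($\hbar = m = 1$):

```python
import math

hbar = 1.0
m = 1.0

def transmission(E, V0, a):
    """Transmission probability through a rectangular barrier of height V0
    and width a for particle energy 0 < E < V0 (standard textbook result)."""
    kappa = math.sqrt(2 * m * (V0 - E)) / hbar
    return 1.0 / (1.0 + V0**2 * math.sinh(kappa * a)**2 / (4 * E * (V0 - E)))

# Below the barrier the particle still tunnels with nonzero probability,
# falling off rapidly with barrier width:
for a in (0.5, 1.0, 2.0):
    print(a, transmission(E=1.0, V0=2.0, a=a))
```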
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/300704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Why Weyl's (infinitesimal geometry) theory of gravity is physically unreasonable? It is usually mentioned that Weyl theory is generalization of the Einstein’s gravity that included Electromagnetism.
At the same time, it is said to be physically unreasonable and inconsistent with QM. Is it possible to explain in a simple way why it gives unreasonable results?
And since it is not an ad-hoc generalization, but a mathematically very natural one (as I understand it), are there any particular reasons why nature still "prefers" to work by the special case (GR)?
P.S
Regarding the last question: Kaluza-Klein theory does a similar thing, and it can also be quantized; however, the resulting particles simply do not represent our reality. Moreover, it does contain some ad-hoc assumptions, which I do not see in Weyl's theory.
| I hope it is clear to OP that the validity of physical models can only be judged by experimental verification.
Weyl's theory explains electromagnetism through an extended affine connection. Parallel transport with respect to this extended connection no longer preserves the interval $g_{\mu \nu}(x) v^{\mu} v^{\nu}$ of the vector (like in General Relativity).
As a result, Weyl's theory makes a falsifiable prediction: there would have to be time differences between the same process run within a strong electromagnetic field and in vacuum. In particular, the spectrum of hydrogen is predicted to shift in the presence of a strong electromagnetic field. This is contrary to observations.
It was Einstein's original argument which led him to discard Weyl's theory of gravity.
Weyl's theory is not inconsistent (at least in its classical form). It was falsified by experiment, which is the real reason for it to be considered unphysical.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/300770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Can someone give me an intuitive meaning of the term "moment of area"? Moment of inertia is the rotational analogue of mass. But I can't get the idea of the term "moment of area". What does it mean?
| In general, a moment is a quantity that describes the shape and position of something. In statistics, the mean and standard deviation are the first and second moments of a distribution; they are numbers that tell the reader approximately where the distribution is located (the mean) and approximately how wide it is (the standard deviation).
In physics, we have similar notions for the location and shape of an object, which we call the moments of area. The first moment of area tells us where an object is (i.e. the location of its center of mass). When we analyze a rigid rotating body, we usually do so in a frame that places this at the origin, which is why it is usually irrelevant in calculations. The important quantity for physicists is the second moment of area, which, much like the second statistical moment (the standard deviation), tells us how wide an object is. You might recognize this quantity (with proper unit conversion) as the typical moment of inertia.
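As a concrete sketch (my own example, not from the answer): for a rectangle of width $b$ and height $h$ about its centroidal axis, the second moment of area is $I = \int y^2\,\mathrm{d}A = bh^3/12$, which a direct midpoint-rule sum reproduces:

```python
def second_moment_rectangle(b, h, n=100000):
    """I_x = integral of y^2 dA over a b-by-h rectangle centred on the
    x-axis, evaluated with a midpoint sum; analytic answer is b*h^3/12."""
    dy = h / n
    return sum(b * (-h / 2 + (i + 0.5) * dy) ** 2 * dy for i in range(n))

b, h = 2.0, 3.0
print(second_moment_rectangle(b, h))   # ~4.5
print(b * h**3 / 12)                   # 4.5 exactly
```

The $y^2$ weighting is exactly the "how wide is it" measure described above: area far from the axis contributes much more than area near it.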
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/300884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Momentum state of a particle Why is the momentum state of a particle in quantum mechanics given by the Fourier transform of its position state? For instance, in one dimension given by
$$\varphi(p)=\frac{1}{\sqrt{2\pi\hbar}}\int \mathrm dx \, e^{-i p x/\hbar} \psi(x).$$
| In general, a Fourier transform takes functions on a group $G$, or a space $X$ on which $G$ acts, and decomposes them in terms of characters of the group, such as $\chi : G \to S^1$, and the coefficients of the decomposition are encoded in the transformed function in the Pontryagin dual $\hat G$ of $G$.
Now for Euclidean space, $\mathbb R^n$, we can identify the Pontryagin dual $\hat{ \mathbb{R}}^n$ with $\mathbb R^n$ itself. In particular, it is a locally compact group, and we identify each $\xi \in \mathbb R^n$ (interpreted as a frequency) with the character $x \mapsto e^{2\pi i \xi \cdot x}$. The Pontryagin dual in general is the group of all characters of $G$.
In general, the Fourier transform for $f \in L^1(G)$ is given by,
$$\hat f (\chi) = \int_G f(x)\bar{\chi(x)} d \mu(x)$$
where $d\mu$ is the Haar measure. Specializing now to the aforementioned case, one has,
$$\hat{f}(\xi) = \int_{\mathbb R^n} f(x)e^{-2\pi i \xi \cdot x} dx.$$
If we interpret the domain of $f$ as time, then the corresponding domain of the transform is in frequency space. For position, one has momentum space. That we can take $\psi(x)$ to $\hat \psi(p)$ is not exclusive to the wavefunction but can be performed on any suitable function.
Another check: in the exponential one has $e^{-i\omega t}$ and so one can deduce $\omega$ has dimensions of frequency for the argument to be dimensionless. Now, for position, you'd get something like $[L]^{-1}$ which is actually the wave vector $k$, but $p = \hbar k$ and so we can express the Fourier transform either in terms of the wave vector or the momentum.
We also normally work in natural units wherein $\hbar = 1$ and so we use either interchangeably.
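A numerical sketch of the transform in the question (my own discretization, with $\hbar = 1$): Fourier-transforming a normalized Gaussian position wavefunction reproduces the known Gaussian momentum density of reciprocal width:

```python
import cmath, math

hbar = 1.0
s = 1.0   # width of the position-space Gaussian

def psi(x):
    """Normalized Gaussian wavefunction in position space."""
    return (math.pi * s**2) ** -0.25 * math.exp(-x**2 / (2 * s**2))

def phi(p, L=10.0, n=2000):
    """phi(p) = (2*pi*hbar)^(-1/2) * integral of exp(-i p x / hbar) psi(x) dx,
    approximated by a Riemann sum on [-L, L]."""
    dx = 2 * L / n
    total = sum(cmath.exp(-1j * p * (-L + k * dx) / hbar) * psi(-L + k * dx)
                for k in range(n))
    return total * dx / math.sqrt(2 * math.pi * hbar)

# The transform of a Gaussian is again a Gaussian, of reciprocal width:
# for hbar = 1, |phi(p)|^2 = sqrt(s^2/pi) * exp(-s^2 * p^2).
for p in (0.0, 0.5, 1.0):
    exact = math.sqrt(s**2 / math.pi) * math.exp(-(s * p) ** 2)
    print(p, abs(phi(p)) ** 2, exact)
```

A narrow packet in $x$ (small $s$) gives a wide $|\varphi(p)|^2$, the reciprocal-width behaviour underlying the uncertainty relation.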
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/300970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
How to get the force constant? Suppose we have a spring with a difference: when it is stretched by $x$, the restoring force is not proportional to $x$; instead,
F = $x^3$ + $x^2$ + $x$
Now , for normal springs F = kx
where k : Spring constant
If we want to find the spring constant for the given spring, how should we proceed? (For this, I think we need a definition of the spring constant for such cases.)
| Yes indeed: we need the definition of spring constant for such cases. For small enough $x$ you can neglect the $x^2$ and $x^3$ so that
$$
F\approx x
$$
which means that $k=1$. For a more general force, we can always define
$$
k\equiv \lim_{x\to x_\mathrm{eq}}\frac{\mathrm dF}{\mathrm dx}
$$
where
$$
F(x_\mathrm{eq})\equiv 0
$$
defines $x_\mathrm{eq}$.
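The limit definition above can be evaluated numerically; a sketch using a central difference for $\mathrm{d}F/\mathrm{d}x$ at the equilibrium point $x_\mathrm{eq} = 0$:

```python
def F(x):
    return x**3 + x**2 + x

def spring_constant(force, x_eq=0.0, h=1e-6):
    """k = dF/dx at the equilibrium point, via a central difference."""
    return (force(x_eq + h) - force(x_eq - h)) / (2 * h)

print(spring_constant(F))   # ~1.0: the linear term dominates near x = 0

# Sanity check against small-displacement behaviour, F(x) ~ k*x:
for x in (1e-2, 1e-3):
    print(x, F(x) / x)      # approaches k = 1 as x -> 0
```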
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/301136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What is really instantaneous? How can a body travel at an instant, and what does instantaneous speed tell us?
What is really meant by the speed of an object at an instant if an object does not travel at an instant? I would like a mathematical explanation.
| Instantaneous (linear) speed isn't that helpful. In planar rigid body mechanics instantaneous rotational speed and the point about which the body is rotating is extremely helpful as it describes the velocity of the body at every other point (ie. describes the motion completely).
For example see my answer in Rotation of Slipping Ladder where the motion of the ladder is described in terms of the instant center of rotation (Point S below).
The helpful insight extends to the fact that once the linear velocities of the body are described everywhere, any reaction forces applied have to be perpendicular to the motion (reactions do no work). So the direction of the reaction forces always points towards the instant center of rotation.
Note that the instant center is not fixed in space. At every instant it can be located at a completely different point. There is no concept of speed of the instant center. It is just a point in space valid only for one instant in time.
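The instant-center description can be sketched numerically (my own example): given the instant center $C$ and angular speed $\omega$, every point's velocity is $\vec v = \omega\,\hat z \times (\vec r_P - \vec r_C)$, perpendicular to the line joining that point to $C$:

```python
# Planar rigid body: once the instant center C and angular speed omega
# are known, the velocity of ANY point P is perpendicular to (P - C)
# with magnitude omega * |P - C|.
def velocity_about_instant_center(P, C, omega):
    rx, ry = P[0] - C[0], P[1] - C[1]
    return (-omega * ry, omega * rx)   # omega z-hat cross r

C = (0.0, 0.0)
omega = 2.0
P = (3.0, 4.0)
v = velocity_about_instant_center(P, C, omega)
print(v)                               # (-8.0, 6.0)

# The velocity is perpendicular to the line joining P to the instant center:
r = (P[0] - C[0], P[1] - C[1])
print(r[0] * v[0] + r[1] * v[1])       # 0.0
```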
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/301438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Evolution of neutrinos flavor states What do we mean by saying that neutrino flavor states do not satisfy the schrodinger equation? How does the time evolution of states look like?
| Neutrino flavor states evolve with time by a process called oscillation, in which the three definite-mass eigenstates associated with each flavor exist in superposition.
A neutrino may be created in one flavor (an electron neutrino, for example) and be considered a superposition of electron, muon, and tau neutrinos with a non-zero amplitude only for the electron-neutrino component. However, as it travels (a process which takes time, letting it evolve), the amplitudes of the muon and tau components become non-zero, and the mass becomes a mixture of the definite associated masses.
This discovery is what the 2015 Nobel Prize in Physics was awarded for.
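For illustration, the standard two-flavor oscillation formula can be evaluated with roughly atmospheric-scale parameters of my choosing (made-up for the sketch, not measured values):

```python
import math

def oscillation_probability(theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor appearance probability
    P = sin^2(2*theta) * sin^2(1.267 * dm2 * L / E),
    with dm2 in eV^2, L in km, E in GeV (standard two-flavor formula)."""
    return math.sin(2 * theta) ** 2 * math.sin(1.267 * dm2_ev2 * L_km / E_GeV) ** 2

theta = math.pi / 4            # maximal mixing
dm2 = 2.5e-3                   # eV^2
for L in (0, 100, 500, 1000):  # baseline in km, at E = 1 GeV
    print(L, oscillation_probability(theta, dm2, L, 1.0))
```

The probability starts at zero, grows with distance, and then oscillates back, which is the "evolution with time" the answer describes.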
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/301746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why do we assume weight acts through the center of mass? The weight of a body acts through the center of mass of the body. If every particle of the body is attracted by the earth, then why do we assume that the weight acts through the center of mass? I know that this is true but I can't understand it. Does it mean that the earth does not attract the other particles of the body? Wouldn't it mean that girders would not need any support at the periphery if we erected a pillar at the center?
| As a point of clarification which perhaps has not been made as clearly in the other answers: No, the weight of a body does not act through the center of its mass, and no such assumption is necessary. However, one can show (see the answer by @tomph) that the sum of all gravitational forces (which indeed do act on any small part of the body) can be replaced equivalently by a single force through the object's center of mass, if that object can be thought of as rigid. The "equivalently" in this statement refers to the fact that, when we calculate e.g. the forces required to hold such an object in place (by "pillars", say), the result will be exactly the same whether we use the single weight force acting through its center of mass or the actual weight distribution of the object.
In short, the model of the single force acting through the center of mass is a very convenient simplification, but it's neither a necessary assumption nor does it reflect reality. As others have said, once we want to describe the behavior of deformable bodies or discuss interior load distributions in a body, this model is no longer adequate or useful, or even correct, as pointed out by @probably_someone.
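The "equivalently" above can be checked numerically (my own sketch): in a uniform field, the net torque about the center of mass of an arbitrary collection of point masses vanishes, which is why the single force through the center of mass reproduces the same statics:

```python
import random
random.seed(0)

g = (0.0, -9.81)   # uniform gravitational field

# An arbitrary rigid collection of point masses:
masses = [random.uniform(0.1, 2.0) for _ in range(50)]
points = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(50)]

M = sum(masses)
x_cm = sum(m * p[0] for m, p in zip(masses, points)) / M
y_cm = sum(m * p[1] for m, p in zip(masses, points)) / M

# Net torque about the center of mass (z-component of r x F in 2D):
tau = sum(m * ((p[0] - x_cm) * g[1] - (p[1] - y_cm) * g[0])
          for m, p in zip(masses, points))
print(tau)   # zero up to floating-point round-off
```

Because the torque vanishes only for a uniform field, the simplification breaks down for tidal (non-uniform) gravity, consistent with the caveats above.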
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/301854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 6,
"answer_id": 3
} |
How do Faddeev-Popov (FP) ghosts help path integrals? How does the inclusion of Faddeev-Popov ghosts in a path integral help to fix the problem of over counting due to gauge symmetries?
So, after exponentiating the determinant for the inclusion of either anti-commuting or bosonic variables and the corresponding extension to a superspace theory... why does that solve the problem exactly?
| The ghosts are not so much inserted, as they naturally arise. The path integral of a gauge theory naively defined will integrate over all fields, including those related by a gauge symmetry, which are seen by the theory as being equivalent.
The Faddeev-Popov procedure provides a means to split our integration over physically distinct configurations and those over gauge orbits. Consider the case of non-Abelian gauge theory, with,
$$\int \mathcal D[A] \exp \left[i \int d^4x \left( -\frac14 (F^a_{\mu\nu})^2\right) \right].$$
To integrate only over physically distinct configurations, we need to constrain the integral by a gauge-fixing procedure, $G(A) = 0$ in general. To fix $G(A) = 0$, we can use a delta function, but to do so, we need to take into account the appropriate Jacobian factor, so the identity is,
$$1= \int D[\alpha(x)] \delta(G(A^\alpha)) \det \frac{\delta G(A^\alpha)}{\delta \alpha}$$
where $A^\alpha$ is the field transformed, that is, $(A^\alpha)^a_\mu = A^a_\mu + g^{-1}D_\mu \alpha^a$. We then have the path integral,
$$\int \mathcal D[A] \, e^{iS[A]} = \left(\int \mathcal D[\alpha(x)] \right) \int \mathcal D[A] e^{iS[A]} \delta(G(A))\det \frac{\delta G(A^\alpha)}{\delta \alpha}.$$
We essentially factored it into integrations over the gauge orbits and physically distinct solutions. Now, for an $n \times n$ matrix, $M$, we can express the determinant as a Grassmann integral, namely,
$$\int e^{-\theta^T M \eta} d\theta d\eta = \det M$$
where we have vectors of Grassmann variables, $\theta$ and $\eta$. Going back to the path integral, the determinant is the determinant of a differential operator, and so we use an analogous formula to compute it. We then interpret the analogous $\theta$ and $\eta$ as being fields, or ghosts.
To put it yet another way, we essentially introduced dummy variables in order to express the determinant as an integral, and it turns out this integral when included has the same form as a Lagrangian for these variables, and so we can interpret them as fields.
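As a toy check of the Grassmann integral formula above, the sketch below (my own illustration, not part of the answer) builds a finite-dimensional Grassmann algebra numerically and verifies that the Gaussian Grassmann integral reproduces det M. The overall sign (-1)^{n(n+1)/2} absorbs the ordering convention of the Berezin measure; only the proportionality to det M is the point.

```python
import numpy as np
from math import factorial

def gmul_mono(a, b):
    """Multiply two sorted Grassmann monomials; return (sign, merged) or (0, None)."""
    if set(a) & set(b):
        return 0, None                      # repeated generator -> vanishes
    merged = a + b
    # Sign of the permutation that sorts the concatenation = (-1)^inversions
    inv = sum(1 for i in range(len(merged))
                for j in range(i + 1, len(merged)) if merged[i] > merged[j])
    return (-1) ** inv, tuple(sorted(merged))

def gmul(x, y):
    """Multiply two Grassmann-algebra elements (dicts monomial -> coefficient)."""
    out = {}
    for ma, ca in x.items():
        for mb, cb in y.items():
            s, m = gmul_mono(ma, mb)
            if s:
                out[m] = out.get(m, 0.0) + s * ca * cb
    return out

def grassmann_gaussian(M):
    """Evaluate the Gaussian Grassmann integral of exp(-theta^T M eta)."""
    n = M.shape[0]
    # theta_i -> generator i, eta_j -> generator n + j
    S = {(i, n + j): -M[i, j] for i in range(n) for j in range(n)}
    # exp(S) = sum_k S^k / k!  -- the series truncates by nilpotency
    expS = {(): 1.0}
    term = {(): 1.0}
    for k in range(1, n + 1):
        term = gmul(term, S)
        for m, c in term.items():
            expS[m] = expS.get(m, 0.0) + c / factorial(k)
    top = expS.get(tuple(range(2 * n)), 0.0)   # coefficient of the top monomial
    return (-1) ** (n * (n + 1) // 2) * top    # measure-ordering sign

M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
print(grassmann_gaussian(M), np.linalg.det(M))
```

The analogous bosonic Gaussian integral gives 1/det M instead, which is precisely why anti-commuting (ghost) fields are needed to put the determinant in the numerator.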
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/301966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Solar panel frequency range What is a solar panel's frequency range (i.e. from THz to THz)? Is there a way to capture energy that exceeds that frequency range, either more towards IR or UV? If so, you could produce energy from sound, considering its frequency is 20–20,000 Hz.
| Currently the best way to generate electricity from sound is a piezoelectric transducer. You can find these in some microphones. Your garden-variety piezo buzzer (such as the little black cylindrical "speakers" in desktop computers) is capable of generating current from sound, though it's generally used to produce sound from current.
I'm not certain, but I believe piezoelectric transducers are generally optimized for a particular frequency range (based on certain parameters such as the diameter and thickness of the element), so you'd want to find one that is tuned for the predominant sound frequency you expect your system to encounter. However, most of these generate microamps or maybe milliamps, with the voltage being dependent on the size of the transducer and the loudness of the sound -- most sounds will likely generate 3-5V at best (such as clapping your hands hard near it).
This is just an idea to get you pointed in the right direction, hopefully others will be able to contribute more; I welcome suggestions that could improve this answer.
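To see why such transducers deliver only microamps, it helps to estimate how little power sound actually carries. A short back-of-envelope sketch (my own illustration, using the standard reference intensity I0 = 1e-12 W/m^2):

```python
def acoustic_intensity(spl_db):
    """Acoustic intensity in W/m^2 for a given sound pressure level in dB,
    relative to the standard reference intensity I0 = 1e-12 W/m^2."""
    return 1e-12 * 10 ** (spl_db / 10)

# Even a loud 94 dB tone delivers only ~2.5 mW per square metre of collector,
# so a perfect transducer of modest size could harvest at most milliwatts.
print(acoustic_intensity(94))
```

Compare this with the ~1000 W/m² of sunlight at the Earth's surface: the available acoustic power is smaller by five to six orders of magnitude, which matches the microamp-scale currents mentioned above.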
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/302301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |