Velocity of the touching point between 2 rotating circles I'm trying to solve the following problem that I'm having a hard time with: We have circle ${\Sigma}_1$ with center $O_1$ and radius $a_1$. The center $O_1$ is also the center of the static orthonormal coordinate system $R_0 (O_1, x_0, y_0, z_0)$. ${\Sigma}_1$ rotates at the angular speed ${\omega}_1$. Let ${\Sigma}_2$ be the circle with center $O_2$ and radius $a_2<a_1$ rolling without slipping on top of ${\Sigma}_1$ at a constant angular speed ${\omega}_2$. We call $I$ the touching point between the two circles. Let ${\Sigma}_3$ be a solid keeping ${\Sigma}_1$ and ${\Sigma}_2$ in contact. The coordinate system $R_0$ defined by $(O_1, x_0, y_0, z_0)$ is fixed and does not rotate. The coordinate system $R$ defined by $(O_1, x, y, z)$ is mobile, fixed to ${\Sigma}_3$, and rotates around $z\equiv z_0$ at the angular speed ${\omega}_3$. I need to find the velocity of $I$ in the $R_0$ coordinate basis when $\omega_1=0$, and then find $\omega_3$ as a function of $\omega_1$ and $\omega_2$. I know how to express the velocity of $I$ when $\omega_2=0$, which I solved, but after trying for more than $2$ hours with different methods, like changing coordinate systems and creating a third one centered at $O_2$, I could not find a satisfactory answer. Does anyone have an answer? Thanks!
The velocity at the contact point I is: $$v_I=\omega_1\,a_1-\omega_2\,a_2$$ thus the components of the velocity in $R_0$ coordinate system are: $$\vec{v}_0=v_I\,\begin{bmatrix} \cos(\varphi)\\ \sin(\varphi) \end{bmatrix}$$ with $\varphi=\int\,\omega_3(t)\,dt=\omega_3\,t\quad $ if $\quad\omega_3=$constant
{ "language": "en", "url": "https://physics.stackexchange.com/questions/562702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Electric field energy density In vacuum, the energy density of the electric field is given by $\mathcal{E}=\epsilon_0\frac{E^2}{2}$ with $E$ the total electric field present. So, if you have a static $E_0$ and dynamic $e(t)$ field, the energy density becomes $$\mathcal{E}=\epsilon_0\frac{\left[E_0+e(t)\right]^2}{2} = \epsilon_0\frac{E_0^2 +2E_0e(t)+e(t)^2}{2}\,.$$ Is this correct? What does the term $2E_0e(t)$ physically represent? It looks like an additional energy contribution from the interaction between the two fields...
It is correct. Note however that, when dealing with periodic electric fields, one would often be interested in the energy density averaged over a period, where the cross-term disappears: $$e(t) = E_1\cos(\omega t),\\ \bar{\mathcal{E}} = \frac{\omega}{2\pi}\int_{t_0}^{t_0 + 2\pi/\omega}dt \frac{\epsilon_0|E_0 + E_1\cos (\omega t)|^2}{2} = \frac{\epsilon_0E_0^2}{2} + \frac{\epsilon_0 E_1^2}{4} $$ The general expression for the energy density can be rigorously obtained from the Maxwell equations within the context of the Poynting theorem.
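A quick symbolic check of that time average (a minimal sketch using sympy; the symbols are the ones defined above):

```python
import sympy as sp

# Static field E0, oscillation amplitude E1, angular frequency w, vacuum permittivity eps0
E0, E1, w, t, eps0 = sp.symbols('E_0 E_1 omega t epsilon_0', positive=True)

energy_density = eps0 * (E0 + E1 * sp.cos(w * t))**2 / 2

# Average over one period T = 2*pi/w: the cross term 2*E0*E1*cos(w*t) integrates to zero
T = 2 * sp.pi / w
avg = sp.simplify(sp.integrate(energy_density, (t, 0, T)) / T)

print(avg)  # epsilon_0*E_0**2/2 + epsilon_0*E_1**2/4
```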
{ "language": "en", "url": "https://physics.stackexchange.com/questions/562817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Why can supersymmetry only be verified at high energies? I'm wondering why supersymmetry can only be verified at high energies; can we check supersymmetry in low energy physics?
It can! The question isn't very specific, so I'll only answer broadly. Of course it all depends on what you mean by high and low energies, but many naive supersymmetric models you might write down will affect low-energy physics. If your model predicts that the proton will decay quickly, or that a new particle will be created if you collide two electrons together at 1 GeV, then it's easy to test. But experiments have found that the proton lifetime must be long (greater than $10^{34}$ years), and we know the particles produced by electron scattering at these energies. The issue is that the Standard Model predicts low-energy physics extremely well. So if the SUSY model modifies low-energy physics in a way we don't observe, then it's wrong. There are still many experiments not involving particle accelerators which can put bounds on and rule out supersymmetric models, e.g. measurements of the electron electric dipole moment, proton lifetime, dark matter searches, etc.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/562961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How is the frequency of a wave defined if it propagates in three different directions? Let's consider a wave which propagates in 2 or 3 directions, like for instance an electromagnetic wave inside a rectangular waveguide totally closed on two ideal conductor surfaces: The walls of the guide force the wave to assume an integer number of half-wavelengths along x, y, z: $$l_{x,y,z} = m_{x,y,z} \cdot \frac{\lambda}{2},$$ with $m$ integer. When we indicate a certain mode, such as $TM_{2,1,1}$, we mean that there are 2 half-wavelengths along x, 1 along y and 1 along z. Suppose now $$l_{x,y,z} = l$$ (i.e. all dimensions are equal: the waveguide is a cube). Obviously lambda will be different along x, y, z: $$\lambda_x = \frac{2l}{m_x}=l$$ $$\lambda_y = \frac{2l}{m_y}=2l$$ $$\lambda_z = \frac{2l}{m_z}=2l$$ So, the wavelength differs along the different directions. What does it mean? In physics I have always studied that frequency corresponds to wavelength, if the propagation medium is fixed. What is the definition of frequency in this case?
Aside from the Doppler effect or relativistic time dilation, the frequency of a wave is generally determined by its source. If an electromagnetic wave enters a wave-guide at an angle, it reflects from the walls of the tube. The reflections interact with each other to produce an interference pattern. Depending on the angle of reflection, the pattern can have various wavelengths or velocities, but the wave which leaves the other end of the tube will have the same frequency as the one which entered.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/563110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Will magnets lose strength over time when coupled to another magnet? Will magnets lose strength over time when coupled to another magnet? I'm designing a part (c. 200 g) which needs to be held in place but also be removable, so I'm looking into a pair of magnets (neodymium or other) to do this. The magnets will, for >99% of their life, be coupled to one another, but I wondered if over time the strength of the magnets would decrease/diminish.
My answer is, in general, no. Ferromagnetic materials usually, on a microscopic level, have domains in which the magnetic moments are all aligned in one direction. There is a famous picture from R. W. DeBlois that shows domains within a sample of nickel (the arrows indicate the domain orientation): Kittel (Introduction to Solid State Physics) tells us that an external magnetic field can increase the magnetic moment of a specimen such as this by: * *In weak fields, increasing the volume of domains favorably oriented with respect to the external field by shrinking those unfavorably oriented, and *In stronger fields, actually rotating the magnetization of domains. Your magnets will already have a general N-S orientation meaning there are already large domains with that orientation. I am assuming in your design, you are coupling your magnets in a N-S/N-S order - in other words, you are not orienting them with opposing fields. So you may find that your magnets get stronger over time as the smaller domains grow and re-orient. I think as long as you stay below the Curie temperature, you will not see any decrease in magnetic strength. Anyway, these are my thoughts. Edit: I did look at a couple of magnet suppliers and they indicate an increase in strength when coupling two or more together. See question #22 in the link below. But they do not discuss the strength over time. https://buymagnets.com/faq/#:~:text=Yes%2C%20stacking%20two%20or%20more,holding%20force%20is%20slightly%20reduced.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/563252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Cavitation plus sound equals light? “If an underwater bubble is collapsed by loud sound, light is produced and no one knows why” says one of those click-bait social media posts with no citation—“light produced” and “no one knows why.” Is either true? Not something I heard during my studies of underwater acoustics—but then, we weren’t very concerned with light.
There exist models describing sonoluminescence. For example: Single bubble sonoluminescence is not an exotic phenomenon but can quantitatively be accounted for by applying a few well-known, simple concepts: the Rayleigh–Plesset dynamics of the bubble’s radius, polytropic uniform heating of the gas inside the bubble during collapse, the dissociation of molecular gases, and thermal radiation of the remaining hot noble gas, where its finite opacity (transparency for its own radiation) is essential. A system of equations based on these ingredients correctly describes the widths, shapes, intensities, and spectra of the emitted light pulses, all as a function of the experimentally adjustable parameters, namely, driving pressure, driving frequency, water temperature, and the concentration and type of the dissolved gas. The theory predicts that the pulse width of strongly forced xenon bubbles should show a wavelength dependence, in contrast to argon bubbles.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/563372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to read some certain band diagrams? I have some problems understanding some semiconductor band diagrams. For instance, I understand quite well something like this: I understand it because it is quite simple: there are the energy levels (Conduction, Fermi, Intrinsic Fermi and Valence) at right and left, and the transition region between them. Quite fine. But sometimes I see something different. For instance, for a Metal Oxide Semiconductor structure, I see something like this: or this in the case of a cascade of quantum wells: So, what are all these strange shapes inside the green circles? In the first picture the energy levels are specified only at the right, while the shape in the green circle is not specified: what physical quantity is it? In the second picture there is no specification.
Both confusing diagrams show heterostructures (i.e. one device containing different materials), and the different materials can have very different properties. In the first figure: * *At the left is a metal. By definition, a metal has no band gap. So only the Fermi Level is shown. *In the middle (circled in green) is an insulator. The difference between an insulator and a semiconductor is kind of arbitrary. Both have a band gap, and the top and bottom of the circled parallelogram show the conduction band minimum ($E_c$) and valence band maximum ($E_v$), respectively, for the insulator. Note that the band gap is significantly larger than for the semiconductor at the right. There seems to be a voltage applied across the insulator (typical for how these devices are used), so $E_c$ and $E_v$ are not constant. *At the right is a semiconductor, which you understand. In the second figure, only $E_c$ is shown. The discontinuous jumps are where the material changes --- e.g., from GaAs to AlGaAs. The two materials have different band structures, so $E_c$ is discontinuous. The overall downward "slope" of $E_c$ is because some voltage is being put across the device.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/563447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Instantaneous power for a variable force I know that instantaneous power is defined as the time derivative of work done. For a constant force, it is easy to prove that this is just the dot product of force and velocity. However, is instantaneous power even equal to $\vec{F}\cdot\vec{v}$ for a variable force, and if so, how do I prove it? I have tried to find this on the internet, but to no avail. I did try differentiating the integral definition of work done by use of the chain rule and the fundamental theorem of calculus, but the answer turned out to be a "regular" product of force and velocity; calculus with vectors is clearly different and I have no experience with it whatsoever.
Given that work is the line integral of the force along the path $$W = \int_C \vec{F}\cdot{\rm d}\vec{r},$$ and power is the time derivative of work, then since each small displacement is ${\rm d}\vec{r} = \vec{v}\, {\rm d}t$, $$ P = \tfrac{\rm d}{{\rm d}t} W = \tfrac{\rm d}{{\rm d}t} \int \vec{F}\cdot\vec{v}\,{\rm d}t = \vec{F}\cdot\vec{v} $$ by the fundamental theorem of calculus.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/563569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Magnetic permeability of a mixture How do you calculate the magnetic permeability of a mixture of two substances (e.g. alumina powder and boric acid) knowing the permeability of each one of them?
This is a very difficult theoretical problem and to illustrate its inherent difficulty consider the following two idealized cases: * *take a set of infinitely long but magnetizable ($\mu_r$) cylinders of arbitrary cross sectional shape. Assume that they fill the space at a fraction of $p$ and there is a bias field parallel with the axes of the cylinders. Since the tangential component of the $\mathbf{H}$ field is continuous at the cylinders' boundary, the effective permeability must be $$\mu'_{eff} = p\mu_r +(1-p) =1+p(\mu_r-1)$$ *take a set of parallel flat magnetizable sheets and a bias field that is perpendicular to them. At a material interface the normal component of the $\mathbf{B}$ field is continuous, hence the effective permeability satisfies $$\frac{1}{\mu''_{eff}} = p\frac{1}{\mu_r}+1-p = \frac{p+\mu_r (1-p)}{\mu_r}$$ The same material, the same volume/mass fraction, two completely different results. In fact, it can be shown that under the same conditions $\mu'_{eff}$ and $\mu''_{eff}$ are upper and lower bounds for $\mu_{eff}$ of arbitrarily shaped material. Both of these examples show anisotropy (directional dependence), and you could expect similar anisotropic behavior if, say, the particles you mix are not spherical but prolate and there is some preferential orientation in their preparation and mixing.
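As a quick numerical illustration of how far apart the two bounds can be (a sketch with made-up values: a magnetizable phase with $\mu_r=5$ at volume fraction $p=0.3$ in a non-magnetic host):

```python
# Effective-permeability bounds for a two-phase mixture (non-magnetic host, mu_r = 1)
mu_r = 5.0   # assumed relative permeability of the magnetizable phase
p = 0.3      # assumed volume fraction of that phase

# Field parallel to long inclusions: volume-weighted arithmetic mean (upper bound)
mu_parallel = p * mu_r + (1 - p)

# Field perpendicular to flat sheets: volume-weighted harmonic mean (lower bound)
mu_series = 1.0 / (p / mu_r + (1 - p))

print(mu_parallel)  # 2.2
print(mu_series)    # ~1.32 -- any real mixture lies somewhere in between
```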
{ "language": "en", "url": "https://physics.stackexchange.com/questions/563892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Emission of radiation by a charged particle undergoing acceleration An electron is travelling along the x-axis. It then changes its direction by 45 degrees. Will it emit an electromagnetic wave?
How does the electron change its direction? Something must be responsible for the deviation from its geodesic orbit. The electron in a wire, as part of an electric current, collides with the subatomic particles of the metal and moves in a zigzag manner. Any acceleration of the electron, be it a stop (negative acceleration) or a gain in kinetic energy, emits photons. This is in detail the reason why the wire has an ohmic resistance. The photon emission of the accelerated electrons makes the wire warm or hot. The electron under the influence of an ion: the electron, when captured by an ion, emits photons. The electron in an external magnetic field: if the electron moves non-parallel to the magnetic field it gets deflected sideways (this is the Lorentz force) and by this emits photons. Exhausting all its kinetic energy, it comes to a standstill at the centre of its spiral or helical path.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/564094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why torque on current carrying circular loop in uniform magnetic field differs from the result of $\mu \times \vec{B}$ if we apply the calculus method We have a current carrying circular wire kept in a uniform magnetic field $\vec{B}$, as shown. I tried to derive the torque $\vec{\tau}$ acting on it. For 2 elemental parts on the wire subtending angle $d\theta$ at the center, right opposite to each other, $$d\vec{\tau} = 2idl B \sin\theta r$$ it gives $$\tau = -2i r^2 \cos \theta$$ on varying $\theta$ from $0$ to $\pi$ I get net torque $\tau = 4ir^2\times B$ but by applying $\tau = I\vec{A}\times\vec {B}$ Note- Here $r$ is the radius of the wire; there is a factor of 2 because there are 2 elemental parts situated just opposite to each other. The torque on both elemental parts would have the same magnitude and would add up. I replaced $dl$ with $rd\theta$. Angle $\theta$ is shown in the pic. I got another answer, $\tau = I\pi r^2 B$, which is correct. I don't know why there is so much difference in the answers, although both processes look correct.
There are actually two cross-products, and you've ignored one of them. I'm going to assume that the magnetic field is in the plane of the loop, pointing along $\mathbf{\hat{y}}$, while the loop is in the $xy$-plane and the current runs counterclockwise. The infinitesimal torque is given by $$\text{d}\boldsymbol{\tau} = \mathbf{r}\times\text{d}\mathbf{F}.$$ The infinitesimal force is $\text{d}\mathbf{F} = I \text{d}\mathbf{l}\times\mathbf{B}$. With $\text{d}\mathbf{l} = r\,\text{d}\theta\,(-\sin\theta\,\mathbf{\hat{x}} + \cos\theta\,\mathbf{\hat{y}})$ and $\mathbf{B} = B\,\mathbf{\hat{y}}$, this gives $$\text{d}\mathbf{F} = -I B r\,\text{d}\theta \sin{\theta}\, \mathbf{\hat{z}},$$ which, up to the overall sign fixed by the sense of the current, is the magnitude you found. So far, what you've done seems correct. (You can verify that the net force on the wire is indeed $0$ by integrating over $\theta$.) However, when you calculate the infinitesimal torque, you need to find $$\text{d}\boldsymbol{\tau} = \mathbf{r}\times\text{d}\mathbf{F} = (r \cos{\theta}\, \mathbf{\hat{x}} + r\sin{\theta}\, \mathbf{\hat{y}})\times\left(-I B r \sin{\theta}\,\text{d}\theta\, \mathbf{\hat{z}}\right),$$ where I've written $\mathbf{r} = r \cos{\theta}\, \mathbf{\hat{x}} + r\sin{\theta}\, \mathbf{\hat{y}}.$ Expanding this, we have two terms. It can be easily shown (I'll leave it as an exercise) that one of those terms integrates to $0$ as $\theta$ runs from $[0,2\pi)$, and the other term gives you $$\boldsymbol{\tau} = -\mathbf{\hat{x}}\, I B r^2 \int_0^{2\pi}\sin^2\theta\, \text{d}\theta = -I \pi r^2 B\, \mathbf{\hat{x}},$$ which is exactly what you'd get if you calculated it using $$\boldsymbol{\tau} = I \mathbf{A}\times\mathbf{B} = I \pi r^2 B\, \mathbf{\hat{z}}\times \mathbf{\hat{y}} = - I \pi r^2 B\, \mathbf{\hat{x}}.$$
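A quick numerical check of that integral (a sketch with arbitrary values $I=2\,\rm A$, $r=0.5\,\rm m$, $B=0.3\,\rm T$, counterclockwise current in the $xy$-plane, field along $\hat y$):

```python
import numpy as np

I, r, B = 2.0, 0.5, 0.3          # arbitrary current, radius and field strength
Bvec = np.array([0.0, B, 0.0])   # field along y, in the plane of the loop

theta = np.linspace(0.0, 2*np.pi, 200_000, endpoint=False)
dtheta = theta[1] - theta[0]

# Position of each current element and its (counterclockwise) line element
pos = np.stack([r*np.cos(theta), r*np.sin(theta), np.zeros_like(theta)], axis=1)
dl = np.stack([-r*np.sin(theta), r*np.cos(theta), np.zeros_like(theta)], axis=1) * dtheta

dF = I * np.cross(dl, Bvec)             # force on each element
print(dF.sum(axis=0))                   # net force ~ [0, 0, 0]

tau = np.cross(pos, dF).sum(axis=0)     # torque = sum of r x dF
print(tau)                              # ~ [-0.471, 0, 0]
print(-I * np.pi * r**2 * B)            # -0.471..., i.e. -I*pi*r^2*B along x
```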
{ "language": "en", "url": "https://physics.stackexchange.com/questions/564297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
On the validity of the formula for gravitational potential energy The formula for gravitational potential energy, $$-G\frac{m_1 m_2}{R},$$ is found by using the fact that the change in potential energy is equal to the negative of the work done (by conservative forces). One of the assumptions is that the 2nd, larger mass remains stationary, and thus only the work done on the much smaller one has to be taken into account. This is obviously true for something like a satellite and the Earth, but what about the case when the masses are similar? Is this formula still valid when the two gravitating masses are of similar mass? I tried to derive the formula without the assumption by adding a pseudo force on $m_2$ to take $m_1$ as at rest. I arrive at the formula $$U(R) = -G \frac{m_2 (m_1 + m_2)}{R},$$ which does actually reduce to the usual formula for $m_1\gg m_2$, but is clearly wrong because of the asymmetry between the 2 masses. Moreover, I can't find any source on any such formula. What is the error in my reasoning here?
Suppose the (spherically symmetric) bodies are identical and initially separated by distance $R$. We will now take both bodies to infinity synchronously (that is keeping the midpoint in one place – relative to the fixed stars!) If we measure distance $r$ from the centre of mass of the system, that is midway between the centres of the bodies, then the work done on each body taking it from $r=\frac R2$ to infinity is $$\int_{R/2}^\infty\frac{GM_1M_2}{(2r)^2}dr=\frac 14\int_{R/2}^\infty\frac{GM_1M_2}{r^2}dr=\frac 12 \frac{GM_1M_2}{R} $$ So the total work done is $$\frac{GM_1M_2}{R}$$ This is the gain in PE of the system, so its PE in the original configuration was, relative to the PE at infinite separation, $$-\frac{GM_1M_2}{R}$$ With a little more thought we can adapt this treatment to bodies of unequal, but not necessarily hugely unequal, mass.
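The integral above is easy to verify symbolically (a minimal sympy sketch):

```python
import sympy as sp

G, M1, M2, R, r = sp.symbols('G M_1 M_2 R r', positive=True)

# Work to move ONE body from r = R/2 to infinity, the separation being 2r
work_one_body = sp.integrate(G*M1*M2 / (2*r)**2, (r, R/2, sp.oo))
print(work_one_body)                 # G*M_1*M_2/(2*R)

# Both bodies are moved, so the total work (= PE gained) is
print(sp.simplify(2*work_one_body))  # G*M_1*M_2/R, hence PE(R) = -G*M_1*M_2/R
```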
{ "language": "en", "url": "https://physics.stackexchange.com/questions/564413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How does one (physically) interpret the relationship between the graviton and the vielbein? One can naturally think of the vielbein $e_\mu^a$ as a gauge field corresponding to local translation invariance. Moreover, the metric may be written $$g_{\mu\nu}=e_\mu^a e_\nu^b \eta_{ab}.$$ I have always seen the graviton $h$ given by $$g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}.$$ Obviously, the graviton is the gauge field that carries the force of gravity. So, I suppose that means I could write $$h_{\mu\nu}=e_\mu^a e_\nu^b \eta_{ab}-\eta_{\mu\nu},$$ but my question is really this: how does one (physically) interpret the relationship between the graviton and the vielbein? In particular, I'm interested in how to interpret it from the perspective of quantum fields.
The linearized metric, also known as the Pauli-Fierz field $h_{\mu\nu} := \frac{1}{\kappa}\left(g_{\mu\nu} - \eta_{\mu\nu}\right)$, can be shown (for example see here: Boulanger et al.) to self-couple perturbatively (to each order in $\kappa$, starting with the cubic one) in such a way that it reproduces the Hilbert-Einstein action expanded in terms of $h$. Using the same Lagrangian BRST method as in Boulanger et al., it is easy to show that if you perturbatively expand the Einstein-Cartan action in terms of a linearized vierbein, as in Bizdadea et al., pages 23 to 25, then you can safely identify the symmetric part of the linearized vierbein with the Pauli-Fierz field; in other words, the linearized HE action is identical to the linearized EC action, and the same holds at each order in the self-coupling constant. In terms of quantum fields, all physical information is extracted perturbatively only in tree-level diagrams of the graviton field. This is treated in Chapter VIII of Zee's QFT book, from page 419 onwards.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/564544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Why does glass, in spite of being amorphous, often break along very smooth surfaces? When a crystalline material breaks, it often does so along planes in its crystalline structure. As such this is a result of its microscopic structure. When glass breaks however, the shapes along which it breaks are typically very smooth as well, rather than being very irregular or jagged. Being amorphous, one shouldn't expect any smooth surfaces (of more than microscopic size) across which the atoms are bonding more weakly than in other directions to be present at all. One possibility that I can think of is that real glass is locally crystalline, and some surfaces of weaker bonding are actually present in the material, and an ideal glass would behave differently. Another possibility is that unlike in crystalline materials, this is not a result of its microscopic structure, but rather of its macroscopic structure, namely its shape: when the glass is hit, it vibrates in a way that is constrained by its shape. We see that harmonic vibrations in a solid typically have very smooth shapes along which the amplitude is 0 (nodal patterns), like in Chladni plates. Does anyone know what the actual reason is?
Stress gets concentrated at the tip of a crack or at an inside corner. See this video and this video. Note that in these simulations the crack does not propagate in a perfectly smooth path. In the second simulation, inhomogeneities in the medium affect the propagation direction. If you examine the exposed surface of a conchoidal fracture such as this one, you'll see ripples on a small scale. These can be the result of instabilities in the dynamics of the propagation, and/or the result of inhomogeneities in the medium. The mathematics describing crack propagation can be found here. It's not simple! Generally, all three modes (opening, in plane shear, and out of plane shear) will occur simultaneously.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/564751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38", "answer_count": 3, "answer_id": 1 }
Work Equals Torque? Horsepower, Pulleys While reading one definition of torque, I saw that its units are newton-meters, which is the same as work. But sources usually make it a point to emphasize "even though both work and torque units are the same, they should not be confused, they are very different". One is like an object being pushed with a force over a certain distance; the other is a force applied to a wrench etc. at a certain length, applied around an axis of rotation. But if we think of the pulley seen below, aren't radius and distance related here? If I rotate a wrench of length $r$ with force $F$ from the top position to the 90 degree position, isn't that the same thing as pushing an object with force $F$ over a distance of $r$?
One of the big differences between work and torque: Work - work is involved when a force is exerted through a distance and some component of that force is parallel to the displacement of the object that the force is acting on. The SI units of work are Newton-meters. Torque - torque is involved when a component of a force is exerted at right angles to a lever arm, and that lever arm is "attempting" to rotate the object it is acting on. The SI units of torque are also Newton-meters. HOWEVER, and this is one good example of how torque differs from work - if I am changing a tire and I am putting torque on one of the lug nuts with my lug wrench in order to remove that tire, I may find that the lug nut is on so tightly that despite putting a LOT of torque on it, it refuses to move. Since there is no motion through a displacement, no work is involved, but torque is STILL being applied to that lug nut.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/564843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Do Maxwell's equations contain any information on the time evolution of the current density $J$? The answers to Can the Lorentz force expression be derived from Maxwell's equations? make clear that Maxwell's equations contain only information on the evolution of the fields, and not their effects upon charges; the Lorentz force equation is an added equation. Does this imply that any arbitrary time evolution of a current density can be defined beforehand, and the corresponding fields always found that satisfy Maxwell's equations?
Does this imply that any arbitrary time evolution of current density can be defined beforehand, and the corresponding fields always found that satisfy Maxwell's equations? Yes, given a charge density $\rho(\mathbf r,t)$ and a current density $\mathbf J(\mathbf r,t)$, you can find fields $\mathbf E(\mathbf r,t)$ and $\mathbf B(\mathbf r,t)$ satisfying Maxwell’s equations. See Wikipedia for the integrals giving the scalar potential $\varphi$ and vector potential $\mathbf A$ that solve the nonhomogeneous wave equations with sources $\rho$ and $\mathbf J$. The fields derived from these potentials will satisfy Maxwell’s equations. One way to think about this is that an arbitrary charge and current density can be considered a swarm of moving point charges. The fields of an arbitrarily moving point charge are known, based on the Liénard-Wiechert potentials. The fields of the swarm are simply the superposition of the fields of all the point charges, by the linearity of Maxwell’s equations. ADDENDUM: As @knzhou points out in another answer, the $\rho$ and $\mathbf J$ can’t be completely arbitrary. They have to satisfy the physical constraint of current conservation, $\partial\rho/\partial t+\nabla\cdot\mathbf J=0$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/564983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Newtonian Limit of Schwarzschild metric The Schwarzschild metric describes the gravity of a spherically symmetric mass $M$ in spherical coordinates: $$ds^2 =-\left(1-\frac{2GM}{c^2r}\right)c^2 \, dt^2+\left(1-\frac{2GM}{c^2r}\right)^{-1}dr^2+r^2 \,d\Omega^2 \tag{1}$$ Naively, I would expect the classical Newtonian limit to be $\frac{2GM}{c^2r}\ll1$ (Wikipedia seems to agree), which yields $$ds^2 =-\left(1-\frac{2GM}{c^2r}\right)c^2 \, dt^2+\left(1+\frac{2GM}{c^2r}\right)dr^2+r^2 \,d\Omega^2 \tag{2}$$ However, the correct "Newtonian limit" as can be found for example in Carroll's Lectures, eq.(6.29), is $$ds^2 =-\left(1-\frac{2GM}{c^2r}\right)c^2 \, dt^2+\left(1+\frac{2GM}{c^2r}\right)\left(dr^2+r^2 \,d\Omega^2\right) \tag{3}$$ Question: Why is the first procedure of obtaining the Newtonian limit from the Schwarzschild solution incorrect?
If $\frac{2GM}{c^2 r}\ll 1$, both expressions are valid as approximations. But the second one keeps the expression $dr^2 + r^2 d\Omega^2$ together as a single factor, and that is the square of a generic path element in spherical polar coordinates. Being an elementary spatial path element, it can then be replaced by $dx^2 + dy^2 + dz^2$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/565070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Are there any quantum effects which we can see in every day life? I am wondering if there are any natural phenomenon in every-day life that cannot be explained by classical physics but can only be explained by quantum mechanics. By classical physics, I mean Newtonian mechanics and Maxwell's electromagnetic theory. I know that there are macro-scale quantum phenomena such as superconductivity, but that isn't something that we can see in ordinary life.
Polarized light is a decent candidate for this, because it is relatively easy to produce and stays coherent over large distances when passing through air. For a demonstration, you only need a laser or LCD monitor and three linearly polarizing filters. Let's say, the light emitted by the monitor is diagonally polarized. By filtering it horizontally, it will only be half as bright, because diagonally polarized light has a 50% horizontal component. The light passing through the filter will then be horizontally polarized. If this light is then filtered by a vertical polarizer, no light will be visible, because all of it is filtered. This is expected by both classical and quantum mechanics, so no surprise yet. But then you introduce another filter in between the horizontal and vertical one. Classically this should not make a difference, because the wave has one polarization, that will at most be filtered more, so still no light will be visible. But this is not what is observed. If the middle filter is set to a diagonal or antidiagonal position, light is transmitted through the whole setup. Quantum mechanics does explain this effect because the horizontal wave is projected onto the diagonal basis and gains a vertical component, that then is transmitted through the last filter.
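The numbers behind this are just Malus's law, i.e. successive projections of the polarization onto each filter axis. A minimal sketch (polarizers as 2×2 projection matrices acting on the field amplitude; the amplitudes and angles are illustrative, not tied to any particular monitor):

```python
import numpy as np

def polarizer(angle):
    """Projection matrix of an ideal linear polarizer at `angle` radians from horizontal."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c*c, c*s],
                     [c*s, s*s]])

E_in = np.array([1.0, 0.0])          # horizontally polarized light (after the first filter)

D = polarizer(np.pi/4)               # diagonal filter
V = polarizer(np.pi/2)               # vertical filter

# Horizontal -> vertical: nothing gets through
print(np.linalg.norm(V @ E_in)**2)          # 0.0

# Horizontal -> diagonal -> vertical: a quarter of the intensity gets through
print(np.linalg.norm(V @ (D @ E_in))**2)    # 0.25
```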
{ "language": "en", "url": "https://physics.stackexchange.com/questions/565248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 9, "answer_id": 5 }
Validity of a differing frame of reference than that used in Landau and Lifshitz's solution For the following problem, first problem in chapter 2 (page 16) of Landau and Lifshitz's Classical Mechanics text: I am trying to see whether the picture I drew when originally solving the problem before looking at the solution is valid. The answer would then be $\frac{\cos(\theta_1)}{\cos(\theta_2)}=$ ... Based on the solution, the proper drawing should be Can I get any input on whether the different cases are equivalent? Edit: to address geometric confusion. Put the solution that L & L give us out of mind and just read the problem. There is no reason NOT to draw this setup, right? If we agree on that, my question boils down to a trigonometric one. Why use $\theta_2$ and $\phi_2$ as opposed to $\theta_1$ and $\phi_1$?
Solution The momentum component that is conserved is the one along the plane ($x$), not along the normal ($y$). This is because the potential energy is independent of $x$. $$U=\begin{cases}U_1&y<0\\U_2&y>0 \end{cases}$$ So, we have the equations \begin{align*} v_1\sin\theta_1&=v_2\sin\phi_1\tag{1}\\ \frac12mv_1^2+U_1&=\frac12mv_2^2+U_2\tag{2}\\ \end{align*} Putting the value of $v_2$ from equation $(1)$ into $(2)$, one gets \begin{align*} \frac12mv_1^2+U_1&=\frac12m\left(v_1\frac{\sin\theta_1}{\sin\phi_1}\right)^2+U_2 \\ (U_1-U_2)&=\frac12mv_1^2\left[\left(\frac{\sin\theta_1}{\sin\phi_1}\right)^2-1\right] \\ \boxed{\frac{\sin\theta_1}{\sin\phi_1}=\sqrt{1+\frac{2}{mv_1^2}(U_1-U_2)}}\tag{3} \end{align*} Thus, we get the relation between the angles. Answer to your question Why use $\theta_2$ and $\phi_2$ as opposed to $\theta_1$ and $\phi_1$? The answer could look different through the use of the trigonometric identity $\cos\left(\frac{\pi}2-\alpha\right)=\sin\alpha$ in the numerator and/or denominator of equation $(3)$, but it is exactly the same physically in all the following appearances, because they are all equal to the same physical quantity. $$\frac{\cos\theta_2}{\cos\phi_2}=\frac{\cos\theta_2}{\sin\phi_1}=\frac{\sin\theta_1}{\cos\phi_2}=\frac{\sin\theta_1}{\sin\phi_1}=\sqrt{1+\frac{2}{mv_1^2}(U_1-U_2)}=\frac{\frac{v_1^{\text{along plane}}}{{v_1}}}{\frac{v_2^{\text{along plane}}}{{v_2}}}$$ Note that the solution of the book uses this answer's $\theta_1$ as its $\theta_1$ and this answer's $\phi_1$ as its $\theta_2$, with no other angles used in this answer or your figure.
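A quick numerical consistency check of the boxed relation (a sketch with arbitrary values $m=1$, $v_1=2$, $\theta_1=30^\circ$, $U_1-U_2=0.5$: compute $\phi_1$ from the formula and confirm that equations (1) and (2) both hold):

```python
import numpy as np

m, v1, theta1 = 1.0, 2.0, np.radians(30.0)   # arbitrary mass, speed and incidence angle
U1, U2 = 1.0, 0.5                            # arbitrary potential energies

ratio = np.sqrt(1 + 2*(U1 - U2)/(m*v1**2))   # = sin(theta1)/sin(phi1) from the boxed formula
phi1 = np.arcsin(np.sin(theta1)/ratio)
v2 = v1*np.sin(theta1)/np.sin(phi1)          # from equation (1)

# Equation (1): momentum component along the plane is conserved
print(np.isclose(v1*np.sin(theta1), v2*np.sin(phi1)))        # True
# Equation (2): energy is conserved
print(np.isclose(0.5*m*v1**2 + U1, 0.5*m*v2**2 + U2))        # True
```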
{ "language": "en", "url": "https://physics.stackexchange.com/questions/565341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Probability current density confusion As we all know, the probability current density in quantum mechanics is defined as: $$\textbf{J}=\dfrac{\hbar}{2mi}(\Psi^* \nabla \Psi-\Psi \nabla \Psi^*)$$ For simplicity let us work in one dimension and let us suppose a wave function $\Psi= A\ \text{cos}\ {kx}$. Applying the above definition and thus using $$J=\dfrac{\hbar}{2mi}\Big(\Psi^* \dfrac{\partial \Psi}{\partial x}-\Psi \dfrac{\partial \Psi^*}{\partial x}\Big)\quad\quad \text{we get:}\quad\quad J=0$$ Using the equation of continuity this means that: $$\dfrac{\partial \rho}{\partial t}=0,$$ which after solving gives us: $\rho=f(x)$. Thus the probability density at any point is independent of time. Now, this result will follow even if we take $\Psi= A\ \text{cos}\ {(kx-\omega t)}$. But here we can clearly see that the probability density i.e. $$|\Psi|^2=|A|^2\ \text{cos}^2\ {(kx-\omega t)}$$ is time dependent. Is it $A$ which carries the time dependence and is responsible for this apparent discrepancy?
The continuity relation holds for solutions of the Schrodinger equation. $A\cos (\omega t - k x)$ is not a solution.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/565449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Understanding the derivation of the formula for change in internal energy of a gas in an enclosed cylinder The derivation: Let there be $m$ moles of gas at pressure $p$, volume $V$, and temperature $T$, with internal energy $U$. Now, $dQ$ amount of heat is supplied to this gas so that its internal energy changes by $dU$ and the external work done by the gas is $dW$. Also, if the volume of the cylinder is increased by $dV$, the work done is $dW = pdV$. So, from the 1st law of thermodynamics we get, $$dQ = dU + dW$$ $$\implies dQ = dU + pdV$$ If volume remains constant, $dV = 0$. So, the equation becomes, $$dQ = dU\quad(i)$$ Now, we know that keeping the volume constant, if $dQ$ amount of heat is applied to $m$ moles of gas to increase its temperature by $dT$, then the molar specific heat at constant volume, $C_v$, is $$C_v = \frac{dQ}{mdT}$$ $$dQ = mC_vdT$$ Putting the value of $dQ$ from equation $(i)$, $$dU = mC_vdT$$ "As the change in internal energy of an ideal gas only depends on the change in temperature and the number of moles, we can use the above equation any time the temperature of $m$ moles of gas changes by $dT$; it's not necessary for the volume of the gas to remain constant for us to use this equation" $-$ this is what my book says and this is the part I don't get. We derived the equation considering the volume constant, i.e. $dV=0$, so we can't use this equation when the volume is changing. Am I incorrect?
We derived the equation considering the volume constant or $dV=0$, so we can't use this equation when the volume is changing. Am I incorrect? Your conclusion is not correct. The change in internal energy for an ideal gas, for ANY process, is given by $$ dU = nC_{v}dT$$ This is a consequence of the ideal gas law and the unique relationship between the specific heats for an ideal gas. For example, let's consider a constant pressure process. $$dU=dQ-pdV$$ $$dU=nC_{p}dT - pdV$$ From the ideal gas equation, for a constant pressure process $$pdV=nRdT$$ Therefore $$ dU=nC_{p}dT - nRdT$$ For an ideal gas $$R=C_p-C_v$$ $$dU=nC_pdT - n(C_p-C_v)dT$$ $$dU=nC_{v}dT$$ Let's consider an adiabatic process. From the first law ($dQ=0$) $$dU=-dW=-pdV$$ $$dU=- \frac {nRdT}{1-k}$$ For an ideal gas $$k=\frac{C_p}{C_v}$$ and again $$R=C_p-C_v$$ Therefore $$dU=- \frac{n(C_p-C_v)dT}{1-C_p/C_v}$$ $$dU= nC_vdT$$ Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/565660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is it possible to derive the Maxwell equations directly from the Electromagnetic tensor? The Lagrangian for ED without Gauge fixing term is given by $$\mathcal{L}=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu},\quad \text{where}\quad F_{\mu\nu}:=\partial_\mu A_\nu-\partial_\nu A_\mu.$$ I was wondering if this step, defining $F_{\mu\nu}$ over the $4$-potential $A_\mu$, is necessary. Can't we just formulate electrodynamics in terms of the tensor $F_{\mu\nu}$? That is, set $$F_{\mu\nu}:=\begin{bmatrix}0&E_{x}&E_{y}&E_{z}\\-E_{x}&0&-B_{z}&B_{y}\\-E_{y}&B_{z}&0&-B_{x}\\-E_{z}&-B_{y}&B_{x}&0\end{bmatrix}$$ and then derive the Maxwell equations directly from here, without going over the $4$-potential? If this doesn't work, what exactly is the problem?
The second and third Maxwell's equations can be written covariantly as $$ \varepsilon^{\mu \nu \sigma \tau} \partial_{\mu} F_{\nu \sigma} = 0. $$ In gauge geometry, this equation is known as the (Abelian) Bianchi identity (not to be confused with the Bianchi identity from Riemannian geometry, which is related, but different). In Minkowski space-time, any electromagnetic field strength tensor field that satisfies the electromagnetic Bianchi identity can always be written as $$ F_{\mu \nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu} $$ for some $A_{\mu}$. In the language of differential forms, this is a statement about the 2nd de Rham cohomology of the Minkowski space being trivial: for any $2$-form $F$ such that $dF = 0$, there exists a $1$-form $A$ such that $F = dA$. This proves that treating $F_{\mu \nu}$ as fundamental is completely equivalent to treating $A_{\mu}$ as fundamental (up to gauge transformations).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/565975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
By what mechanism is a growing tree root able to lift heavy concrete pavement? A tree root lying under several square meters of 100mm thick concrete pavement can cause the pavement to lift up as it grows. What forces are involved in creating this lift? I vaguely understand that the growth process is a matter of cell division, but my layman brain can't reconcile this with lifting so much weight. I'm not asking for a description of the biology of plant growth, but rather how that growth works at the atomic or molecular level to overcome such pressure, and what forces this process employs. Please apply whatever tags you see fit. I don't even know which -mechanics this is.
The fundamental mechanism is hydrostatic pressure, which in a plant is called turgor pressure: Cell expansion and an increase in turgor pressure is due to inward diffusion of water into the cell, and turgor pressure increases due to the increasing volume of vacuolar sap. A growing root cell's turgor pressure can be up to 0.6 MPa, which is over three times that of a car tire ... As plants can operate at such high pressures, this can explain why they can grow through asphalt and other hard surfaces.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/566087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Why do we add the individual quantities to find the total amount of a system's "quantity"? Is this by definition of "total"? Why, to find e.g. the total energy of a system of (non-interacting) particles, do we add their individual kinetic energies? Is the total kinetic energy defined to be that sum? It may seem obvious for scalar quantities like energy, but what if we consider vectors? For example, the total momentum of a system of particles is the vector sum of their individual momenta. Is this again a definition? I think it is a silly question but I can't understand why we do such "additions". To make the question more clear: I am asking if the momentum/energy/mass of a system is defined to be that sum over all particles. I mean, we could define the mass of a system to be: $$M\equiv\frac{1}{2}\sum_{i=1}^{n}m_i$$ But it is not the case. A definition is not right or wrong. It is just a definition.
It is not always true that we do such addition to find the “total”. For example, if you have a system composed of two sub-systems, $A$ and $B$, then the volumes add as you discussed: $v=v_A+v_B$. The masses also add: $m=m_A+m_B$. But the density does not add: $$\rho=\frac{m}{v}=\frac{m_A+m_B}{v_A+v_B}\ne \frac{m_A}{v_A}+\frac{m_B}{v_B}=\rho_A+\rho_B$$ Properties where you add the subsystems together to get the total system property are known as extensive properties. Not all properties behave that way. Unfortunately, I don’t know of a general procedure for identifying a priori whether a given quantity is extensive or not. Usually that information comes from experiment.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/566158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why do photons follow the geodesic curvature of the gravitational field instead of the spacetime curvature? If mass merely 'curves' spacetime, why do photons follow the geodesic path of the gravitational field (path A) instead of the spacetime curvature itself (path B)? It seems as if the gravitational field exerts a continuous pull on the space-time continuum, not a mere bend. Given we are defining a behavior from an external frame of reference, the space-time 'bend' analogy comprises a 'localized' stretch of space as well as a 'localized' stretch of time, but does not seem to capture the gravitational 'pull'. I tend to imagine a spaceship traveling through a space-time grid at a constant acceleration of 1 g; I don't see it as 'bending' time or space, but 'pulling' through time and space. In other words, I'm curious why it seems more appropriate to define the gravitational force exerted by any given mass (like a planet or a black hole) as a mere 'warp', 'bend' or 'curvature' of the space-time fabric instead of a continuous 'pull' of the space-time fabric itself. Isn't the four-dimensional nature of gravity more akin to a continuous 'pull' of the space-time fabric, rather than a three-dimensional 'bend'? My concern is the semantics getting in the way of a more comprehensive yet intuitive understanding of space-time and gravity. For the purpose of this question, the gravitational field exerted by the celestial object is stronger than Earth's.
I think the lines in the drawing describe the tidal deformation of a local cube, which is not the same as a geodesic.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/566232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
How can multiplication rule in sigfigs make sense? I've been going through significant figures video on khan academy and it says the product of two numbers cannot have more significant digits than the significant digits in any of the inputs. Example: length = $301 m$ width = $2 m$ area = $301*2 = 602 m^2$ But since the width has only $1$ sigfig, we must round area to $600 m^2$ This seems to convey that area can range between $550$ and $649$. How can the error in area be as large as $50$ while using a $1m$ precision ruler? NOTE The extreme case area: MIN: length = $300.5$ width = $1.5$ area = $450.75$ MAX: length = $301.5$ width = $2.5$ area = $753.75$ How do these min,max areas relate to $600m^2$ ?
How can the error in area be as large as $50$ while using a $1\: \rm m$ precision ruler? Well, because you are multiplying the reading of that "precise" ruler with a large number ($\approx 300$ in your case). So even an error of $0.5 \: \rm m$ could scale itself up and manifest itself as a $\approx 150 \:\rm m^2$ error in the final result. And this is the reason why the maximum and the minimum areas differ from the calculated area by approximately $150 \: \rm m^2$. If you also take into account the variation in the value of length, you could exactly predict the deviation of the minimum and the maximum areas from the calculated value. So, it's quite logical to have a large deviation in such a quantity, and thus to reduce the sigfigs to show the amount of uncertainty. In fact, strictly speaking, even rounding off to one significant digit isn't enough, since the maximum and the minimum areas differ by more than $200 \:\rm m^2$. However, rounding off to the lowest number of sigfigs in either of the initial quantities works as a nice tool/trick to give a vague, albeit not precise, idea of uncertainty.
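One concrete way to see the spread is to propagate the end-points of the measured intervals directly, using the numbers from the question (a minimal sketch):

```python
# Readings of 301 m and 2 m on a 1 m precision ruler, i.e. +/- 0.5 m each
length = (300.5, 301.5)
width  = (1.5, 2.5)

areas = [l*w for l in length for w in width]
print(min(areas), max(areas))   # 450.75 753.75

# Deviation of the extremes from the nominal 301*2 = 602 m^2
print(602 - min(areas), max(areas) - 602)   # ~151 m^2 each way
```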
{ "language": "en", "url": "https://physics.stackexchange.com/questions/566486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What does it mean for a physical phenomenon to be "fundamentally random"? I read on Wikipedia: Quantum mechanics predicts that certain physical phenomena, such as the nuclear decay of atoms, are fundamentally random and cannot, in principle, be predicted. What does that mean exactly? I thought nothing can be predicted with arbitrary precision. Yet, we still often model physical phenomena to follow some statistical distribution. Does the above perhaps imply that nuclear decay is (more) uniformly random, than other physical phenomena? Or perhaps that it is statistically more independent, in terms of its Markov blanket, than other physical phenomena? i.e. less predictable than other physical phenomena, provided other knowledge?
In your case of radioactive decay, it means that the times of decay of a sample of radioactive material occur completely randomly. The sample will have quite a lot of radioactive nuclei. When a single nucleus decays is random. Decay may occur early or late, and there is no way to predict which. After an $x$ second measurement, you'll find some decays were early and some, from the same sample, were late. The decay history of a sample will have been determined. After the fact, that is, after the random decays have been measured, we can calculate properties like the half-life and lifetime. While a 2nd measurement will have the same properties, the actual decay times cannot be predicted, because they are random.
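A small simulation makes this concrete (a sketch assuming an exponential decay law and a made-up half-life of 10 s): the individual decay times are scattered all over, yet the half-life recovered from the whole sample is sharply defined.

```python
import numpy as np

rng = np.random.default_rng(0)

half_life = 10.0                        # assumed half-life, seconds
tau = half_life / np.log(2)             # mean lifetime

# One random decay time per nucleus -- each one is unpredictable
decay_times = rng.exponential(tau, size=100_000)
print(decay_times[:5])                  # wildly different individual times

# But the ensemble property is well defined: the median is the half-life
print(np.median(decay_times))           # ~10 s
```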
{ "language": "en", "url": "https://physics.stackexchange.com/questions/566573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Will Pascal's law apply to two immiscible liquids that have the same density? In other words, does Pascal's law apply to emulsions? As far as I know, immiscible liquids form emulsions when mixed, so will Pascal's law apply to this emulsion?
To a good approximation yes. Pascal's law won't hold exactly for an emulsion because the interface between the two fluids will have a non-zero interfacial tension, and there will be small excess pressure inside the emulsion droplets given by the well known formula: $$ \Delta P = \frac{2S}{r} $$ where $S$ is the interfacial tension and $r$ is the radius of the drop. However for most emulsions this will cause only a relatively small pressure difference. For most purposes we can take the pressure as the same everywhere in the fluid.
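For a feel of the numbers (a sketch with assumed values: interfacial tension $S = 5\,\rm mN/m$, typical of a surfactant-stabilised interface, and droplet radius $r = 10\,\rm \mu m$):

```python
S = 5e-3      # assumed interfacial tension, N/m
r = 10e-6     # assumed droplet radius, m

delta_P = 2 * S / r
print(delta_P)   # 1000 Pa, roughly 1% of atmospheric pressure
```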
{ "language": "en", "url": "https://physics.stackexchange.com/questions/566740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Comparing a 100 W and a 40 W light bulb that only emit a specific frequency I'm sort of confused by this... let's just say the bulbs only emit green light, and we compare a 40 W and a 100 W bulb (identical except that one is brighter than the other). Since the frequency of the light emitted from both bulbs is the same, does it mean that there is more energy packed per photon in the light emitted from the 100 W bulb?
Every photon of a particular frequency has the same energy. This means that a 100W source of photons of a specific frequency will emit more photons per second than a 40W source of photons at the same frequency.
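A rough estimate of the photon rates (a sketch assuming green light at 530 nm and, unrealistically, that all of the electrical power comes out as light):

```python
h = 6.626e-34           # Planck constant, J*s
c = 3.0e8               # speed of light, m/s
wavelength = 530e-9     # assumed green wavelength, m

E_photon = h * c / wavelength      # ~3.8e-19 J, identical for both bulbs
print(E_photon)

for power in (40.0, 100.0):
    print(power / E_photon)        # ~1.1e20 vs ~2.7e20 photons per second
```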
{ "language": "en", "url": "https://physics.stackexchange.com/questions/566838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why is electric field inside a conductor non-zero even if there is point charge placed inside it? If I place a point charge $q$ inside a conductor, The electric field at any point inside it will be non zero ($Kq/x^{2}$). If we draw a Gaussian surface inside the conductor, the net enclosed charge will be $q$ that will provide an outgoing flux. Then why do we say that the electric field inside a conductor is always 0?
The free charges will reposition in such a way that the field vanishes, because in a conductor there is no other force acting on them. This also means that any point charge inside a closed surface will be compensated by conduction electrons. All of this only holds at scales much larger than atomic. At the atomic scale the electric field is not zero.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/566908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
What is “spectral turnover”? In a paper which deals with the spectra of radio frequency cosmic events, the word “spectral turnover” is used. What is “spectral turnover”?
It just means that, when you look at the spectrum on a log-log plot, it becomes flat or even reverses slope, in contrast to a steeper power-law spectrum at higher and/or lower frequencies. There are various physical reasons why a turnover might appear in the spectrum (e.g. a flattening in the underlying power-law distribution of electrons producing synchrotron emission or synchrotron self-absorption). I was quickly able to find the following sketch (from this website) of a "GHz-peaked radio galaxy", which illustrates and explains the situation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/567127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Work done when lifting an object at constant speed A previous post (What Is Energy? Where did it come from?) defines work qualitatively as "a process in which energy is transformed from one form to another form". And mathematically, work is defined as: $$\Delta KE=\int_{C} \vec{F}\cdot\mathrm d\vec{r}$$ But if you imagine lifting up a rock from the ground at constant speed, am I not doing work on the rock by converting the chemical energy stored in my muscles into the potential energy of the rock? I am confused because the kinetic energy of the rock does not change and yet I am still converting energy from one form to another, which is the qualitative definition of work. What's the right way to think about this and the concept of work in general?
Assume you have two opposite forces of the same magnitude acting on a particle. The total work is zero, and there is no change in kinetic energy. However one of the forces did positive work on the particle and the other negative work. Whatever did positive work lost some form of energy, and the one that did negative work gained some energy. The net effect on the particle is zero, but there was an exchange of energy between two systems, the ones which generated the forces. Alternatively, if one of the forces is dissipative, like friction, some energy will be dissipated as heat.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/567240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 2 }
Will cutting sand paper with scissors make the scissors sharper or duller? This is a little question that I have been wondering when I need to cut sand paper with scissors. Sand paper can be used to sharpen knives etc. when applied parallel with the blade surface. Also it can be used to dull sharp edges when applied nonparallel with the blade surface. My assumption is that it should dull the scissors since paper is being cut using the sharp edge and nonparallel with the abrasive material. But I still have doubts about the validity of the assumption. How is it?
The answer given by Duller is halfway correct. When the blade is very blunt, the point of contact is flattened or shapeless. When you cut sandpaper, since it is abrasive itself, it abrades the blunt edge at the cutting point, making the scissors sharper than before. The finer the sandpaper, the better the sharpening. However, this is not meant for light or sharp scissors. It is only meant for scissors which have started bending the material instead of cutting it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/567343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 2 }
Confusion over spontaneous symmetry breaking Consider a complex scalar field with Lagrangian $$\mathcal{L} = (\partial_{\mu} \bar{\phi})(\partial^{\mu} \phi) - V(\phi)$$ with potential $$V(\phi) = \frac{1}{4}\lambda(\bar{\phi}\phi - \eta^2)^2$$ The model is invariant under global $U(1)$ phase transformations. The minima of the potential lie on the circle $|\phi| = \eta$, and so the vacuum is characterized by a non-zero expectation value: $$\langle 0|\phi|0\rangle = \eta e^{i\theta}.$$ Now, here is where my confusion lies. The $U(1)$ phase transformation would change the phase of the ground state into $\theta + \alpha$ for some constant $\alpha$. If the symmetry were still manifest, then we would not have found this and instead returned to $\theta$ alone; therefore, the symmetry is broken. However, the broken symmetry vacua with different values of $\theta$ are all equivalent. So, what would it matter if considered $\theta + \alpha$ as opposed to $\theta$ as surely the two represent equivalent vacua? If this is the case, then why is the phase transformation not a symmetry of the vacuum, if it works only to move me to an equivalent configuration? What am I missing?
Even though this question has been successfully answered already, I just wanted to emphasize some points about spontaneous symmetry breaking. When a symmetry is 'spontaneously broken', it is not true that it is no longer a symmetry of the theory, as is so commonly implied in textbooks. Indeed, the broken symmetry is still represented (anti)unitarily on states. The important difference between broken and unbroken scenarios is the spectrum of states. When a symmetry is unbroken, there is a single vacuum that is invariant with one tower of states given by exciting the vacuum. When a symmetry is broken, there are many towers of states, each associated with a different vacuum that corresponds to a different 'orientation' (in your case a different $\theta$). If we find ourselves on one tower, and apply a broken symmetry transformation, we jump to a different tower. The symmetry is called broken because, as Quillo said, when the theory is realized in nature, one tower of states is chosen. We don't see the other towers and so there's no way to directly observe the symmetry (of course we can do so indirectly through goldstone bosons).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/567436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Common misunderstanding of Birkhoff Theorem I just found a paper "On a common misunderstanding of the Birkhoff theorem". This means that inside a spherically symmetric thin shell there is no gravitational force, BUT there is time dilation, and so the interior solution is NOT the Minkowski metric, right?
In GR you use coordinate charts which cover some specified open region of spacetime. You are allowed to choose the region that you want to cover, and it may be all of spacetime or it may be some smaller region. For a spherically symmetric thin shell, if the chosen region is strictly inside the shell then it is the Minkowski metric. If the region is strictly outside the shell then it is the Schwarzschild metric. If the region contains the shell then it is more complicated, and this complicated region is the focus of the paper. There is gravitational time dilation between an observer on the interior of the shell and one exterior to the shell, but this is only relevant if you are covering both the interior and the exterior. Strictly within the interior it is not relevant and the straight Minkowski metric is valid strictly in the interior.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/567892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the point of a voltage divider if you can't drive anything with it? The voltage divider formula is only valid if there is no current drawn across the output voltage, so how could they be used practically? Since using the voltage for anything would require drawing current, that would invalidate the formula. So what's the point; how can they be used?
Just came up today. Needed a comparator that would trip at 2.8V. Had a 3.3V supply. So, voltage divider of 3.32k/17.8k driving one input of comparator (current load is negligible), test voltage to other input. Just everyday EE.
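For anyone who wants to see the numbers, here is a minimal sketch (plain Python, using the 3.32 kΩ / 17.8 kΩ values quoted above) of the unloaded divider output and of how little it sags under a light load such as a comparator input; the 1 MΩ load value is just an assumed, illustrative figure.

```python
# Voltage divider: Vout = Vin * R2 / (R1 + R2), valid when (almost) no current is drawn.
R1, R2 = 3.32e3, 17.8e3          # ohms, values quoted in the answer above
Vin = 3.3                        # volts

Vout_unloaded = Vin * R2 / (R1 + R2)

# With a load resistance RL across R2, R2 is effectively R2 || RL:
RL = 1e6                         # assumed high-impedance load (e.g. a comparator input)
R2_eff = R2 * RL / (R2 + RL)
Vout_loaded = Vin * R2_eff / (R1 + R2_eff)

print(f"unloaded: {Vout_unloaded:.3f} V")   # ~2.78 V, close to the 2.8 V trip point
print(f"loaded:   {Vout_loaded:.3f} V")     # barely changes for a light load
```

This is the practical point: the "no current drawn" condition only needs to hold approximately, and high-impedance inputs satisfy it very well.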
{ "language": "en", "url": "https://physics.stackexchange.com/questions/567978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 8, "answer_id": 7 }
Is magnetic field due to current carrying circular coil, zero everywhere except at its axis? Consider a current ($I$) carrying circular coil of radius $R$ with $N$ turns. Consider a rectangular loop $ABCD$, where length $AB=CD=\infty$. Performing the integral for axial points, $$\int_ {-\infty}^{\infty}\vec{B}\cdot \vec{dx}=\int_ {-\infty}^{\infty} \frac{\mu_0INR^2dx}{2(R^2+x^2)^{3/2}}=\mu_0IN=\int_ {C}^{D}\vec{B}\cdot \vec{dl}\tag{1}$$ Now applying Ampere's law on loop ABCD, $$\int_ {A}^{B}\vec{B}\cdot \vec{dl} + \int_ {B}^{C}\vec{B}\cdot \vec{dl}+ \int_ {C}^{D}\vec{B}\cdot \vec{dl}+ \int_ {D}^{A}\vec{B}\cdot \vec{dl}=\mu_0NI\tag{2}$$ $$\Leftrightarrow \int_ {A}^{B}\vec{B}\cdot \vec{dl} + \int_ {B}^{C}\vec{B}\cdot \vec{dl}+ \int_ {D}^{A}\vec{B}\cdot \vec{dl}=0\tag{3}$$ My book writes that "Apart from the side along the axis, the integral $\int\vec{B}\cdot\vec{dl}$ along all three sides will be zero since $B=0$". I don't quite get this. The magnetic field lines due to a coil look like this: Now, the question: Is the magnetic field due to a current carrying circular wire zero everywhere except on its axis? Why exactly is $$ \int_ {A}^{B}\vec{B}\cdot \vec{dl} + \int_ {B}^{C}\vec{B}\cdot \vec{dl}+ \int_ {D}^{A}\vec{B}\cdot \vec{dl}\tag{4}$$ zero?
Not only must you assume that lengths AB and CD are infinite, but also BC and DA. So the field strength is zero all along DA, AB and BC for the rather unsubtle reason that these three sides are all an infinite distance from the current-carrying loop (whose field falls off as $r^{-3}$ and faster).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/568056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Do Maxwell's Equations depend upon an orientation of space? Even when we cast Maxwell's Equations in as coordinate-independent form as possible, $dF=0$ and $d \star F = J$, we still have to make use of the Hodge star $\star$ which is defined relative to an orientation. It doesn't look like the equations are preserved under orientation-reversal. But does space have an orientation, or is it just an artefact? It seems unphysical. I thought all classical theories were invariant under parity. Thanks.
It is only necessary that the space of solutions is preserved. (And, after all, elementary particle reactions do not preserve parity.) But the physical solutions to Maxwell's equations are unchanged if you choose a different orientation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/568170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Question on Feynman's Proof of Energy Conservation Since this is two-dimensional motion, why would $m(dv/dt)$ not have some directionality in addition to being the rate of change of the magnitude of momentum? Is Feynman assuming that the directionality doesn't change in infinitesimal time?
If $v$ is treated as the velocity instead of the speed of the object then $m(dv/dt)$ has direction and would be simply the rate of change of momentum, rather than the rate of change of the magnitude of the momentum. The reason for this imprecision probably stems from the two definitions of kinetic energy: it can be calculated as $\frac{1}{2}mv^2$, where $v$ is the speed of the object, or $\frac{1}{2}m(\textbf v\cdot\textbf v)$ where $\textbf v$ is the velocity of the object. If we use the first definition then the rate of change of kinetic energy is dependent on $m(dv/dt)$, which is directionless. If we use the second definition then it is dependent on $m(d\textbf v/dt)$ which has direction, and is therefore not the rate of change of the magnitude of the momentum, but simply the rate of change of the momentum. Feynman does not use vector analysis till later in the chapter, so he is precise in saying that $m(dv/dt)$ is the rate of change of the magnitude of the momentum. But shouldn't he then write the "magnitude of the force in the direction of motion", instead of the "force in the direction of motion"? No. Since a component of a vector in a particular direction is a scalar, so it is superfluous to say the magnitude of a scalar. Hopefully this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/568296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why do electrons have fixed energy levels? I understand that electrons do not orbit the nucleus, instead they have a higher probability to be found at some specific regions. But what makes they appear more frequently in the orbital regions? There are equations (like Schrödinger) that are able to describe this wave function, but what causes it?
Nobody knows the answer. All we know is that quantum mechanics is in perfect agreement with experiment. All you can do is to critically investigate any intuitive concepts that you may have that are incompatible with quantum mechanics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/568554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Conservation Laws and What Happens if they go Wrong? I read this excellent article on the Conservation Laws, and I was also taught in school that Conservation Laws cannot be proven, only verified. I was wondering what would actually happen if a Conservation Law turned out to be false? It would call our measurements and calculations into question, since we use these laws almost unknowingly everywhere: for example, unless stated otherwise, every mechanics problem takes mass to be conserved. So let's say the laws hold true in ordinary situations but break down in boundary cases, as most things in Physics do, say when we approach the speed of light, the edge of the universe, or some other drastic condition. Are there any good discussions on what consequences this may have?
My volume of Symon's Mechanics addresses this question: The conservation laws are in a sense not laws at all, but postulates which we insist must hold in any physical theory. If, for example, for moving charged particles, we find that the total energy, defined as (T + V) [kinetic plus potential], is not constant, we do not abandon the law, but change its meaning by redefining energy to include electromagnetic energy in such a way as to preserve the law. We prefer always to look for quantities which are conserved, and agree to apply the names 'total energy', 'total momentum', and 'total angular momentum' only to such quantities. The conservation of these quantities is then not a physical fact, but a consequence of our determination to define them in this way. It is, of course, a statement of physical fact, which may or may not be true, to assert that such definitions of energy, momentum and angular momentum can always be found. The assertion has so far been true... A further example would be the combination of the conservation of mass and conservation of energy of classical mechanics into the conservation of total relativistic energy in Special Relativity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/568908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Work done on object being carried upwards If you carry a book in your hands, and you walk up stairs with a change in height of $h$, the net work on both you and the book would be $-M_{\mathrm{total}}gh$ since $W = - \Delta U$. This would be due to gravity. However, when considering the book alone, the work done by the normal force, i.e. your hands, would be $M_{\mathrm{book}}gh$. Furthermore, the work done by gravity solely on the book would be $-M_{\mathrm{book}}gh$. This means that the net work done on the book through the process of walking up the stairs is $0$. Since work is equal to negative change in gravitational potential energy this means that the change in GPE of the book is $0$? But then doesn't the book have a change in gravitational potential energy of $M_{\mathrm{book}}gh$? Am I missing something regarding the kinetic energy of the book?
Yes, you are. The net work done on an object is equal to the change in kinetic energy, not in potential energy. Therefore $W_g+W_n=0$, since the book is at rest relative to you throughout (I am assuming you hold the book still during your climb), and so $W_g=-W_n$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/569042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What's the center of gravity/mass of this figure? I've tried figuring this out, but I can't seem to find a way to do it, since I don't have any other information to calculate with. The only thing I could find out was that it's possible to use symmetries to find the center of mass/gravity. My intuition tells me it should be C, however I am not sure. Is there any way to check it?
It can be shown by exclusion: A and D are not possible, because they are too far off. You can draw an ellipsoid around B which contains the area left of B and is symmetric. Still, you are left with the two tails on the right. Only C is left, therefore it's C. Maybe one can use the fact that this figure is "stackable" to achieve a more rigorous proof of that.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/570373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Do adjacent sarcomeres oppose each other during contraction? A sarcomere is the contractible portion of the muscle cell. And here is a figure of three sarcomeres in series before and after contraction: I was taught that the thick fiber, myosin, pulls on the thin fiber, actin. I am confused as to how contraction can happen because it seems that there is a tug of war going on between myosins on either side of the Z-line. Is there another force vector I am not accounting for? Or is there some additional biophysics going on that I am not aware of? Edit: I guess it could all contract if the outermost sarcomeres had a weaker opposing tension than the internal tension. But it would start from the outside and radiate in. In other words, there would be a gradient of contraction with the shortest (most contracted) to longest from outside to inside until all are equally contracted. I'm not sure if that's how it works in reality.
I think your two pictures show how the whole system can contract. Maybe abstract it as a bunch of springs and balls in series. s = spring M = myosin (imagine it as a ball localized at the M line) A = actin (imagine it as a ball localized at Z line) Your first pic looks to me like ...MsAsMsAsM... and the springs are all stretched past their equilibrium point (i.e. they all "want" to contract). Then it seems pretty clear that the whole system will shorten as all the springs contract. You can write it all out mathematically with F=ma for each mass and see if your intuition in your Edit is correct -- I'm guessing it is.
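As a rough illustration of the spring-and-ball picture above, here is a minimal numerical sketch (plain Python; all masses, spring constants and the damping term are arbitrary assumed values, not biophysical parameters). It integrates F = ma for a chain of balls joined by pre-stretched springs and shows that the whole chain shortens, not just the outermost segments.

```python
# Chain of balls connected by identical pre-stretched springs: ...MsAsMsAsM...
# Simple damped F = m*a integration (explicit Euler). All numbers are arbitrary.
n = 7                  # number of balls (alternating M-line and Z-line "balls")
k = 5.0                # spring constant
L0 = 0.8               # spring rest length ("contracted" spacing)
m = 1.0                # mass of each ball
c = 2.0                # damping coefficient so the chain settles
dt, steps = 0.001, 50000

x = [float(i) for i in range(n)]   # initial spacing 1.0 > L0, so every spring is stretched
v = [0.0] * n

for _ in range(steps):
    f = [0.0] * n
    for i in range(n - 1):         # each spring pulls its two neighbours together
        s = k * ((x[i + 1] - x[i]) - L0)
        f[i] += s
        f[i + 1] -= s
    for i in range(n):
        v[i] += (f[i] - c * v[i]) / m * dt
        x[i] += v[i] * dt

print("initial length:", n - 1)                   # 6.0
print("final length:  ", round(x[-1] - x[0], 3))  # close to (n-1)*L0 = 4.8
```

Every inter-ball spacing ends up near the shorter rest length, so the "tug of war" between neighbouring segments does not prevent the overall contraction.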
{ "language": "en", "url": "https://physics.stackexchange.com/questions/570469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Different variations of covariant derivative product rule This is a follow-up question to the accepted answer to this question: Leibniz Rule for Covariant derivatives The standard Leibniz rule for covariant derivatives is $$\nabla(T\otimes S)=\nabla T\otimes S+T\otimes\nabla S$$ so for $T\otimes\omega\otimes Y$ this would translate to $$\nabla(T\otimes\omega\otimes Y)=(\nabla T)\otimes(\omega\otimes Y)+T\otimes(\nabla\omega\otimes Y)+T\otimes(\omega\otimes\nabla Y).$$ My question is: given a vector field $X$, how do I get from the above that $$\nabla_X(T\otimes\omega\otimes Y)=(\nabla_X T\otimes\omega\otimes Y)+T\otimes\nabla_X\omega\otimes Y+T\otimes\omega\otimes\nabla_XY$$ as written in that answer?
In a chart $U_\alpha : M \rightarrow \mathbb{R}^n$, you have $\nabla_X = X^\mu \nabla_\mu$ so the result follows by patching on overlapping charts. The last answer in the question cited gives all the details required to be honest.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/570585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Is there an equivalent of computation of physical processes in nature? I was watching a waterfall in the Austrian Alps. There were thousands of water droplets falling down, splattering on the stones below. I thought - how does nature find out so quickly where each droplet of water should go? To find out what happens to a falling droplet of water, one can use the laws of motion and calculate the trajectory. To calculate, one needs some amount of time, some machine (the brain, computer) and some energy to feed the machine. Does it make sense to ask what is the equivalent of this computation in nature? How does nature find out so quickly how things should move? More generally, to find out how anything should happen? Where is the "calculation" in nature? There's no room or energy for a machine in the particles that make up things.
Nobody knows, and perhaps nobody will ever know. The only things that we can possibly know about the universe are the things that we can somehow observe, directly or indirectly. There's no way to gain information about how the universe operates other than observing its operation and thinking of rules that seem to explain what we have observed. As you know, these rules are called "laws of physics". Nobody has ever observed anything that seems to provide any information about the computational mechanisms of the universe. It's likely that nobody ever will make any such observations. As a result, we don't know, and may never know, what those computational mechanisms are, or even whether or not there are any computational mechanisms at all.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/570845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 12, "answer_id": 2 }
Why do complex numbers seem to be so helpful in real-world problems? Complex numbers are often used in Physics, especially in electrical circuits, to analyze them, as they are easy to move around like phasors. They make the calculations easy, but it seems kind of amusing that something which, to my knowledge, has no real-world analogue is used to solve the most practical real-world problems. What other methods were used prior to the development of complex numbers, and why were they replaced? For example, can every problem where we use complex numbers also be done using other techniques such as matrices? How did the insight come to use such an obscure entity, or did doing the operations just seem easier with it?
Physics is replete with second-order differential equations, resulting e.g. from an action of the form $\int L(t,\,q,\,\dot{q})dt$ being stationary. If they are linear, or we approximate them as such close to an equilibrium, we get something like $\ddot{q}=Aq$ for some constant $A$, which is typically nonzero. A stable equilibrium requires $A<0$, in which case the functions $\exp\pm i\sqrt{|A|}t$ span the solution set. So the real question is why equilibria are often stable. Presumably, it's because they minimize energy rather than maximizing it. For example, a pendulum has a low-energy stable equilibrium & a high-energy unstable equilibrium, the latter achieved by inverting it.
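A quick symbolic check of the claim above (a sketch using sympy; here $\omega=\sqrt{|A|}$ is just a positive symbol): substituting $q(t)=e^{\pm i\omega t}$ into $\ddot q = Aq$ with $A=-\omega^2$ gives identically zero, which is why complex exponentials span the solution set near a stable equilibrium.

```python
import sympy as sp

t = sp.symbols('t', real=True)
w = sp.symbols('omega', positive=True)   # omega = sqrt(|A|), with A < 0 for a stable equilibrium

for q in (sp.exp(sp.I * w * t), sp.exp(-sp.I * w * t)):
    residual = sp.simplify(q.diff(t, 2) + w**2 * q)   # q'' - A*q with A = -omega**2
    print(q, "->", residual)                          # both residuals are 0
```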
{ "language": "en", "url": "https://physics.stackexchange.com/questions/570908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
What do I need to build special relativity? If I postulate the principle of relativity and the constancy of the speed of light for every inertial observer can I then prove all SR? Or do I need some other postulate? For example: do I need to also postulate the structure of the Lorentz transformations or the Lorentz transformations derive completely only from this two basic postulates. (Do I have to also postulate, for example, that the transformation are linear to prove them from the two starting postulates?)
By more modern standards, if you are looking for a proof (and not just handwaving arguments), you'll need to clearly state assumptions about the "space[time]" and other mathematical structures that model the physics that you are using as your starting point, and formulate your postulates in precise terms with those structures. (Don't assume that "we all know that THIS [term] means THAT". Clearly state the assumptions... A successful proof rests on the details.) You might find enlightening this diagram describing various pathways to the Lorentz Transformations. (Sorry I don't have a nicer scan.) (from "Spacetime and electromagnetism : an essay on the philosophy of the special theory of relativity" by J R Lucas & P E Hodgson, Oxford University Press, 1990.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/570980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Optimizing engines to produce a certain torque and net force Say we have $n$ engines sitting on a rigid body. Each engine has position $R_i$ and points in a certain direction and generates a force in that direction, $F_i$. The magnitude of that force ($k_i$) follows this constraint: $0 \leq k_i \leq m_i$. In other words, the engine can only output so much force. I want to be able to find the values of $k$ that would bring the resultant force $F$ and torque $T$ closest to a desired value. Based on the resultant force equation, I know that $$F = \sum_{i=1}^{n} (k_iF_i)$$ and the torque: $$T = \sum_{i=1}^{n} (R_i) \times (k_iF_i)$$ where $R_i$ is the position of the engine relative to the point of application of the resultant force. However, I'm unsure of how to best proceed from here. edit: I solved the problem one way using linear programming and minimizing the absolute value of the difference between each component of the ideal and actual force/torque, not optimizing for fuel. This allowed me to put constraints on the engine's force magnitude.
The problem is one of least squares (until the point where the magnitude is capped). Consider the target force vector $\vec{F}$ and the target moment vector $\vec{T}$ as the right-hand side $\boldsymbol{b}$ of a linear system of equations, and the vector $\boldsymbol{x}$ of $n$ force magnitudes is the unknowns. $$ \mathbf{A}\;\boldsymbol{x} = \boldsymbol{b} $$ $$ [\mathbf{A}]_{6\times n}\; \begin{pmatrix}F_{1}\\ F_{2}\\ \vdots\\ F_{n} \end{pmatrix}_{n\times1} = \begin{pmatrix}\vec{F}\\ \vec{T} \end{pmatrix}_{6\times1} \tag{1}$$ We will get later into what the coefficient matrix $\mathbf{A}$ is. For now consider a case where $n \geq 6$, and the solution is given by $$ \boldsymbol{x} = \mathbf{A}^\top \left( \mathbf{A} \mathbf{A}^\top \right)^{-1} \boldsymbol{b} \tag{2}$$ Where $^\top$ is the matrix transpose. So what is $\mathbf{A}$? There are 6 rows and $n$ columns to this matrix, and the first 3 rows is filled with all $n$ force direction vectors $\vec{z}_i$, and the last 3 rows with all $n$ torque directions $\vec{r}_i \times \vec{z}_i$. $$ \mathbf{A} = \begin{bmatrix}\vec{z}_{1} & \vec{z}_{2} & \cdots & \vec{z}_{n}\\ \vec{r}_{1}\times\vec{z}_{1} & \vec{r}_{2}\times\vec{z}_{2} & \cdots & \vec{r}_{n}\times\vec{z}_{n} \end{bmatrix}_{6\times n} \tag{3}$$ The result isn't guaranteed to be within the force limits, but it will be the least possible force system overall. Reduced Example Consider a planar example (for simplicity with 3 DOF instead of 6) with $n=4$ forces arranged in a rectangle of size $a$, $b$, and each direction pointing to the next force location. $$\begin{aligned} \vec{r}_1 &= \pmatrix{-\tfrac{a}{2} \\ -\tfrac{b}{2} } & \vec{z}_1 &= \pmatrix{1\\0} & \vec{r}_1 \times \vec{z}_1 = \tfrac{b}{2} \\ \vec{r}_2 &= \pmatrix{ \tfrac{a}{2} \\ -\tfrac{b}{2} } & \vec{z}_2 &= \pmatrix{0\\1} & \vec{r}_2 \times \vec{z}_2 = \tfrac{a}{2}\\ \vec{r}_3 &= \pmatrix{ \tfrac{a}{2} \\ \tfrac{b}{2} } & \vec{z}_3 &= \pmatrix{-1\\0} & \vec{r}_3 \times \vec{z}_3 = \tfrac{b}{2} \\ \vec{r}_4 &= \pmatrix{-\tfrac{a}{2} \\ \tfrac{b}{2} } & \vec{z}_4 &= \pmatrix{0\\-1} & \vec{r}_4 \times \vec{z}_4 = \tfrac{a}{2} \\ \end{aligned} $$ with the target force $\vec{F}= \pmatrix{3 \\ 2} $ and moment $T=\pmatrix{1}$ $$ \boldsymbol{b} = \begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix} $$ The coefficient matrix is composed from (3) $$ \mathbf{A} = \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \\ \tfrac{b}{2} & \tfrac{a}{2} & \tfrac{b}{2} & \tfrac{a}{2}\end{bmatrix} $$ and solution from (2) $$ \pmatrix{F_1 \\ F_2 \\ F_3 \\ F_4} = \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \\ \tfrac{b}{2} & \tfrac{a}{2} & \tfrac{b}{2} & \tfrac{a}{2}\end{bmatrix}^\top \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & \tfrac{a^2+b^2}{2} \end{bmatrix} ^{-1} \begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix} = \pmatrix{\tfrac{3}{2} + \tfrac{b}{a^2+b^2} \\ 1+\tfrac{a}{a^2+b^2} \\ -\tfrac{3}{2}+\tfrac{b}{a^2+b^2} \\ -1+\tfrac{a}{a^2+b^2}} $$ Let us check the result $$ \vec{F}= F_1 \vec{z}_1 + F_2 \vec{z}_2 + F_3 \vec{z}_3 + F_4 \vec{z}_4 = \pmatrix{3\\2} \; \checkmark$$ $$ \vec{T} =F_1 (\vec{r}_1 \times \vec{z}_1) + F_2 (\vec{r}_2 \times \vec{z}_2) + F_3 (\vec{r}_3 \times \vec{z}_3) + F_4 (\vec{r}_4 \times \vec{z}_4) = \pmatrix{1} \; \checkmark$$ This method also solves the "Find the forces of the four legs of a table" problem given an arbitrary load on the table surface.
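Here is a minimal numerical sketch (numpy) of the minimum-norm solution $\boldsymbol{x}=\mathbf{A}^\top(\mathbf{A}\mathbf{A}^\top)^{-1}\boldsymbol{b}$ for the planar example above, with assumed values $a=2$, $b=1$; it just reproduces equations (1)-(3) and checks that the target force and torque are met. Clipping each magnitude to $[0, m_i]$ afterwards, as required in the question, is not shown.

```python
import numpy as np

a, b = 2.0, 1.0                      # assumed rectangle dimensions
# engine positions r_i and unit thrust directions z_i (planar case from the answer)
r = np.array([[-a/2, -b/2], [ a/2, -b/2], [ a/2,  b/2], [-a/2,  b/2]])
z = np.array([[ 1, 0], [ 0, 1], [-1, 0], [ 0, -1]], dtype=float)

# rows of A: force x, force y, torque (2D cross product r x z)
A = np.vstack([z[:, 0], z[:, 1], r[:, 0]*z[:, 1] - r[:, 1]*z[:, 0]])
target = np.array([3.0, 2.0, 1.0])   # desired (Fx, Fy, T)

x = A.T @ np.linalg.solve(A @ A.T, target)   # minimum-norm solution of A x = b

print("engine magnitudes:", np.round(x, 4))
print("achieved (Fx, Fy, T):", np.round(A @ x, 10))   # matches the target exactly
```

With these numbers the result is $(1.7,\ 1.4,\ -1.3,\ -0.6)$, which reproduces the closed-form expressions in the worked example and also illustrates the caveat above: the raw least-squares answer can contain negative magnitudes that violate the engine limits.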
{ "language": "en", "url": "https://physics.stackexchange.com/questions/571117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does one divide by a vector when calculating pressure? Sir, $P=F/A$, and since $F$ and $A$ are both vectors but $P$ is a scalar, doesn't this violate the rule that "Division is NOT defined for vectors"?
Pressure is actually $$P=\frac{F_\bot}{A}$$ where $F_\bot$ is the force component perpendicular to the surface in question, and $A$ is the area of the surface. Therefore, there is no "division by a vector" here. Certainly, the area vector is used in various areas of physics; this is not one of those areas (pun always intended). I suppose if you wanted a definition based on vectors you can exploit the use of projections: $$P=\frac{\mathbf F\cdot\mathbf A}{||\mathbf A||^2}$$ since the area vector, by definition, is perpendicular to the surface in question.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/571209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What does the statement "A concave mirror always forms a real image of a virtual object" mean? My Physics teacher made the statement in a recent class. "A concave mirror always forms a real image of a virtual object" But, what did he mean by a virtual object? What does this statement exactly mean? PS : I have just started learning geometric optics.
A real image is formed when the rays converge in one point. That means, if an object emits light, and there is an optical system that makes those rays converge, then there is a real image where those rays converge. On the other hand, a virtual image is formed if the rays do not converge, but their prolongations do. This usually happens if the rays diverge, but if you extend them backwards, their extensions converge. Real images can be projected on a screen, but they cannot be seen with the eye (you can see them on screens, not looking directly to them), think of a projector. On the contrary, virtual images are not projected, but they can be seen with the eye (think of a magnifying glass) These are important basic concepts. Once you understand them well, it follows that, if you have a system made of several instruments, the image created by the first one acts as the object of the second one. So if the instrument 1 makes a virtual image, that virtual image is the object for the second object. So you can have a virtual object for a mirror, if something's creating a virtual image before the mirror.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/571351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to determine the result of 2D elastic collision As shown by the image, a disk of radius $R_1$, mass $M_1$ and initial velocity $V_0$ collides with another, still disk of radius $R_2$ and mass $M_2$. Both disks have no rotation initially. The direction of $V_0$ is indicated by $\theta$. For three situations there are unique solutions:

* When $\theta = 0$, the problem becomes 1D, and both disks have no rotation afterward.
* When there is no friction, both disks have no rotation afterward, and the still disk gains a speed $V_2 = 2 V_0 \frac{\cos \theta}{1 + \frac{M_2}{M_1}}$ along N.
* When $\theta$ is sufficiently large so that $f = \mu_0 N$, in which $\mu_0$ is the static frictional coefficient. In this case, the momentum transfer along $f$ is $\mu_0$-fold of the momentum transfer along N. Both disks rotate, but in opposite directions, afterward. The solution for $V_2$ in the $N$ direction afterward is $$V_2 = 2 V_0 \frac{\cos \theta + \mu_0 \sin \theta}{(1 + \frac{M_2}{M_1})(1 + 3 \mu_0^2)}$$

In the case when $\theta$ is small, how does one find a unique solution? Newtonian mechanics should have a unique solution in all cases, and experimentally the outcome should not be random. So what constraint did I miss?
I will post an answer myself, and we can discuss whether it is correct. First, I argue that this problem has a unique and deterministic answer:

* If you play pool, you will know that the balls do not move randomly after a collision. Two identical hits will produce two identical results.
* This is also required by Newtonian mechanics.

The normal force $N$ causes a change of linear momentum $\Delta P_N = N \Delta t$ along $N$, and results in a speed $v_2$ along $N$. Without friction, the speed perpendicular to $N$, which is $v_0 \sin \theta$, is not affected, so the collision is effectively 1D without causing either disk to rotate, giving $v_2= \frac{2 v_0 \cos \theta}{1 + m_2/m_1}$. The touching points of the two disks have a relative velocity at angle $\theta$ to $N$, so the effect of friction can be captured by $\gamma=\min(\tan \theta,\mu_0)$, in which $\mu_0$ is the static frictional coefficient. It is assumed that the touching points of the two disks have no relative motion during the collision. For disk 2, the frictional force $f$ causes a change of linear momentum $\gamma N \Delta t = \gamma \Delta P_N$ perpendicular to $N$, and results in a speed $\gamma v_2$ perpendicular to $N$. The frictional force also imparts an angular momentum $\gamma N r_2 \Delta t = \gamma \Delta P_N r_2 = \frac{1}{2} m_2 r_2^2 \omega_2$, and results in an angular speed of $\omega_2 = 2 \gamma \frac{v_2}{r_2}$. For disk 1, the friction force introduces an angular speed of $\omega_1 = \frac{m_2}{m_1} 2 \gamma \frac{v_2}{r_1}$, and linear speeds of $v_0 \cos \theta - \frac{m_2}{m_1} v_2$ along $N$ and $v_0 \sin \theta - \gamma \frac{m_2}{m_1} v_2$ perpendicular to $N$. For an elastic collision: $v_2 = 2 v_0 \frac{\cos \theta + \gamma \sin \theta}{(1 + \frac{m_2}{m_1})(1 + 3 \gamma^2)}$, which is independent of $r_1$ and $r_2$. If the two disks have arbitrary initial angular velocities $\Omega_1$ and $\Omega_2$, the initial relative speed at the touching point is $v_0 \cos \theta$ along $N$, and $v_0 \sin \theta - \Omega_1 r_1 + \Omega_2 r_2$ perpendicular to $N$. Thus, $\gamma = \frac{\sin \theta - \frac{\Omega_1 r_1}{v_0} + \frac{\Omega_2 r_2}{v_0}}{\cos \theta}$:

* If $\gamma > 0$, it is capped at $+\mu_0$.
* If $\gamma < 0$, it is floored at $-\mu_0$.
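As a sanity check of the formulas above, here is a small numerical sketch (plain Python with arbitrary assumed values of $m_1, m_2, r_1, r_2, v_0, \theta, \mu_0$) that computes the final linear and angular velocities exactly as written above and verifies that kinetic energy (translational plus rotational, with $I=\frac{1}{2}mr^2$) is conserved.

```python
import math

# assumed, arbitrary test values
m1, m2, r1, r2 = 2.0, 1.0, 0.5, 0.3
v0, theta, mu0 = 4.0, 0.4, 0.6           # theta in radians
gamma = min(math.tan(theta), mu0)         # no-slip value, capped by the friction coefficient

mu = m2 / m1
v2 = 2 * v0 * (math.cos(theta) + gamma * math.sin(theta)) / ((1 + mu) * (1 + 3 * gamma**2))

# disk 2 after the collision
v2_N, v2_T = v2, gamma * v2
w2 = 2 * gamma * v2 / r2
# disk 1 after the collision
v1_N = v0 * math.cos(theta) - mu * v2
v1_T = v0 * math.sin(theta) - gamma * mu * v2
w1 = mu * 2 * gamma * v2 / r1

KE_i = 0.5 * m1 * v0**2
KE_f = (0.5 * m1 * (v1_N**2 + v1_T**2) + 0.5 * (0.5 * m1 * r1**2) * w1**2
        + 0.5 * m2 * (v2_N**2 + v2_T**2) + 0.5 * (0.5 * m2 * r2**2) * w2**2)

print(KE_i, KE_f)   # the two numbers agree, i.e. the collision is elastic
```

The equality holds for any value of $\gamma$, which is consistent with $v_2$ above having been fixed by energy conservation.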
{ "language": "en", "url": "https://physics.stackexchange.com/questions/571728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is $Q=p$, $P=-q$ a canonical transformation from the perspective of 2 variational principles satisfying boundary conditions? This is to ask a more general question: Landau-Lifshitz say that for the variational principles $$\delta\int_{t_1}^{t_2}p\mathrm{d}q-H\mathrm{d}t =0$$$$ \delta\int_{t_1}^{t_2}P\mathrm{d}Q-H'\mathrm{d}t =0$$ to be equivalent, the difference $(p\mathrm{d}q-H\mathrm{d}t)-(P\mathrm{d}Q-H'\mathrm{d}t)$ must equal the differential of a certain function $F$ of $q,p$, and $t$. I would agree if $F$ were a function of $q$ and $t$ alone since the variational principles above are taken among $q,p$ such that $\delta q(t_i)=0$, $i=1,2$ so adding $\mathrm{d}F$ to the integrand adds a constant to the integral and the variational principle doesn't change. But in the case where it can depend on $p$ I don't see why. For instance in the case $Q=p$, $P=-q$, why would $\int_{t_1}^{t_2} \mathrm{d}(pq)$ be a constant?
It is easy to check that OP's canonical transformation (CT) in the title has generator $F_1=Qq$. OP's main question about boundary conditions (BCs) for CTs were asked before in great generality in this & this related Phys.SE posts. The latter post proved how pertinent BCs are satisfied for CTs of type 1.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/571828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Confusion on clock synchronization I am reading through Polonyi's Classical Field Theory notes. It begins with a discussion of special relativity, and near the bottom of page 3, it introduces a procedure for synchronizing clocks. I've reproduced the paragraph below. Let us suppose that we can introduce a coordinate system by means of meter rods that characterize points in space and all are in rest. Then we place a clock at each space point which will be synchronized in the following manner. We pick the clock at one point, $x = 0$ in Fig. 1, as a reference, its finger being used to construct the flow of time at $x=0$, the time variable of its world line. Suppose that we want now to set the clock at point $y$. We first place a mirror on this clock and then emit a light signal which propagates with the speed of light according to assumption 2' from our reference point at time $t_0$ and measure the time $t_1$ when it arrives back from $y$. The clock at $y$ should show the time $(t_1 - t_0)/2$ when the light has just reached. I'm very rusty on all this, but I don't understand how such a procedure could work as stated. Both time measurements occur at $x=0$ and the information would need to be transported to $y$ which takes time.
Yes, it takes an unknown amount of time to arrive, but the clock at $y$ already runs at the correct rate; you only need to set its zero point. So what you do is record the actual reading $t_2$ on the clock at $y$ when the light reflects off the mirror, and then whenever the value $(t_1-t_0)/2$ arrives at $y$, add $(t_1-t_0)/2-t_2$ to the current reading on the clock. The notes should have made it clearer that you need to record more than just $t_0$ and $t_1$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/572090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
At higher temperatures, do metal wires have a greater chance of short circuiting? I’ve read that short circuiting means that a huge amount of current flows through the conductor in an extremely small period of time. Also, resistance increases with temperature for conductors, right? So if temperature increases, resistance increases. Which would mean the current doesn’t have an easy path to flow through. So the wire does heat up, and may eventually cause the fuse to melt or the MCB switch to drop, but there’s no short circuiting, right? So, assuming the aforementioned definition of short circuiting to be true, and the wire to be isolated (so there’s no chance of any insulation melting and contact with another such wire), am I correct?
In a tungsten filament light bulb, the temperature and resistance rise rapidly until the power being radiated away equals that being supplied. Normally this occurs at a temperature that is below the melting point of tungsten.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/572375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Why are the positively charged particles moving to the right? The following is the description for this figure provided by my textbook: The paths of different types of radiation in a magnetic field. Using the right-hand slap rule, we see that positively charged particles are forced to the right. [...] Why are the positively charged particles going to the right? I think there isn't enough information. Based on the figure, one can only deduce that the magnetic field is coming out of the screen or page. It still isn't clear to me why the positive charges move to the right. I do know that whichever direction the positive charges move in, the electrons will move in exactly the opposite direction. How can I figure out the direction of the Lorentz force? Subsequently, how can I figure out the direction of motion of an individual charged particle?
Well, the charges seem to come from the bottom of the page, going up: using the right-hand rule, you place the velocity vector on the palm of your hand and close your hand towards the direction of the field, here directed upwards. And there it is: the force $\mathbf{F}\propto q(\mathbf{v}\times\mathbf{B})$ is directed along your thumb, to the right for positive charges.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/572583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
How do we determine the color of heated glass? If a glass of certain colour is heated, then how can we determine the corresponding colour that it will glow with? For eg: I saw a question that asked "A blue glass when heated will glow with which colour?" and the answer was stated as "yellow" because "Blue glass appears blue at ordinary temperature as it absorbs all other colours. When it is heated, it emits white radiation deficient of blue colour, i.e., yellow coloured radiation." How was this obtained? How is white light deficient of blue light yellow? How can we predict the same for a different case?
The spectral emissivity, $e_{\lambda}$, of a surface is the power it emits in a narrow band of wavelengths centred on $\lambda$, expressed as a fraction of the power emitted by a black body of the same area in the same band. The spectral absorptivity, $a_{\lambda}$, of a surface is the fraction of the incident radiant power it absorbs in a narrow band of wavelengths centred on $\lambda$. Kirchhoff's law of radiation states that for any surface at any one temperature, $e_{\lambda}=a_{\lambda}$. For blue glass $a_{ \approx 470\ \text {nm}}$ is low compared to $a_{\text{other}\lambda s}$ so $e_{\text{other}\lambda s}$ is high compared with $e_{ \approx 470\ \text {nm}}$. In other words the glass emits other wavelengths much better than blue light.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/572698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Confusion in definition of emf The emf of a cell is defined as the work done per unit positive charge in taking it around the complete circuit of the cell (i.e. in the wire outside the cell and the electrolyte within the cell). But Kirchhoff's Second Rule states that the work done in moving a charge around a closed loop is zero. How then do we get a nonzero value of the emf?
The emf of the cell is the open circuit voltage at the terminals of the battery (i.e., voltage with no current delivered by the battery). But the cell has internal resistance, $r_b$. So if a wire of theoretically zero resistance were placed across the terminals of the battery, the current $I$ that flows results in all the emf dropping across the internal resistance, satisfying Kirchhoff's voltage law, i.e., $$emf-Ir_{b}=0$$ See the diagrams below. Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/573032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Would a moving elementary particle follow the Heisenberg's Uncertainty Principle with respect to itself? An observer at rest or in motion different from the particle cannot determine its momentum and position to great accuracy at the same time. But what if the observer is on the particle itself or moving with the same velocity as the particle?
An electron feels its own field, which gives an infinity that has to be accounted for by renormalization. From this it is clear that the uncertainty relation does not apply to self interaction in QED.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/573287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Anti-gravity wheel? So I was just watching some YouTube videos on a spinning wheel that seemed to "defy" gravity. The creator made two videos on it, the first showing the wheel, and the second with an attempt to explain it. There are the names of the videos: "Anti-Gravity Wheel?" by Veritasium "Anti-Gravity Wheel Explained" by Veritasium It is the second video in particular that bothers me. It seems as if he can lift the wheel up over his head while it's spinning and precessing with ease, but when it's not spinning nor precessing, then he struggles to lift the weight over his head. He explains this as somehow the wheel "lifting itself" up as he forces it to precess faster than it's natural precession. But I don't really understand this explanation. To me, it seems as if it violates Newton's second law, which states that the total external force on a system (the wheel in this case) is equal to the total mass times the acceleration of the center of mass. Now, the center of mass of the wheel clearly goes upwards, meaning that an external force has to lift it. Therefore, the wheel can't lift "itself", for that is no external force, just internal forces. I thought the phenomenon happened because of the Magnus force (an external force). That would also explain why his total weight didn't change much as the wheel got lifted up. But that is clearly not the explanation given in the video. So what is the right explanation for this phenomenon?
This doesn't violate Newton's second law. Lifting the wheel over his head by holding it on the end of a long handle is hard because of torque (imagine lifting a broom holding it in the middle of the handle compared to just at the end of the handle). If he had been holding the wheel-handle arrangement at its center of mass, he would have just as easy a time lifting it over his head as in the video! The effect in the video is just due to precession; atmospheric effects do not matter here.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/573406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Single Planck $h$ constants Planck developed his black body radiation theory assuming that atoms, treated as simple harmonic oscillators, can stay in states of well-defined energy. If the normal frequency of such an oscillator is $\nu$, then the energy levels are multiples of $h \nu$ (that is $E_n = n h \nu$, forgetting about zero-point vibrations). From my understanding, here $h$ serves as just a proportionality constant. Later, Einstein stated that light can exist in quanta (photons). For each electromagnetic wave of frequency $\nu$ the minimal energy is again $h \nu$. He then very successfully explained the photoelectric effect with this approach. Here, again, $h$ is a proportionality constant. My question is: why is $h$ (or why should it be?) the same constant in these two cases? What is the relation between these two $h$'s in the two approaches? Why did this evolve this way? I mean, from black body radiation experiments and later photoelectric effect measurements one can derive Planck constants and see that they are indeed the same (within some uncertainty). But this does not solve my problem of these $h$'s being assumed to be the same. I am clearly missing some link between these ideas. Many thanks to those who can explain this in detail or point to relevant literature on the topic.
There are three pillars of experiments that forced quantum mechanics, at first as a phenomenological theory and then as a more formal theory of physics with principles, postulates and differential equations:

* atomic spectra
* black body radiation
* the photoelectric effect

Bohr's atom tied up the observations by assuming quantized energy levels for the atoms, using $h$ explicitly in the arbitrarily imposed quantization of angular momentum that allowed for stable energy levels. (See this answer of mine.) Then Schrodinger's equation introduced the wave equations, and after that the theory of quantum mechanics took off. So even though new students are introduced to the theory, the development of the theory was laborious, and strongly dependent on fitting observations and measurements. The single constant was forced by the data.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/573524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Motion of ball (air viscosity concerned) Suppose a ball of mass $m$ is thrown vertically upwards from the ground. I understand that the speed-time graph would be somewhat like a distorted parabola. But what about the velocity- time graph (considering air drag or viscosity)? According to me it would attain a kind of terminal velocity while falling down. But I am unable to interpret it mathematically. And sometimes you really need mathematical intuition to see what is happening. So can anyone make a brief mathematical interpretation of this? Thank you.
Let the viscous drag force be $$F = kv$$ where $k$ is a constant and $v$ is the speed at any instant. While moving up, gravity and drag both act downwards, so the magnitude of the deceleration is $$a = g + \frac{kv}{m}$$ While moving down, gravity acts downwards and drag acts upwards, so $$a = g - \frac{kv}{m}$$ Consider the downward motion and take $v$ as positive downwards: $$\frac{dv}{dt} = g - \frac{k}{m}v$$ Writing $u = v - \frac{mg}{k}$, the deviation from the terminal speed $v_T = \frac{mg}{k}$, this becomes $$\frac{du}{dt} = -\frac{k}{m}u \quad\Rightarrow\quad \int \frac{du}{u} = -\frac{k}{m}\int dt \quad\Rightarrow\quad u = u_0\, e^{-kt/m}$$ where $u_0$ is the initial deviation. Hence $$v(t) = \frac{mg}{k} + u_0\, e^{-kt/m},$$ so the velocity approaches $\frac{mg}{k}$ exponentially. Thus the velocity-time graph has a horizontal asymptote, which represents the terminal velocity.
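Here is a minimal numerical sketch (plain Python; the values of $m$, $k$, $g$ and the launch speed are arbitrary assumptions) that integrates the whole flight in one equation, $m\,dv/dt = -mg - kv$ with $v$ measured positive upwards, so the linear drag $-kv$ automatically opposes the motion in both phases. The final velocity settles at the terminal value $-mg/k$, matching the asymptote described above.

```python
# dv/dt = -g - (k/m) v, with v > 0 upwards; linear drag -k*v always opposes the motion
m, k, g = 1.0, 0.5, 9.8      # assumed values
v, t, dt = 20.0, 0.0, 1e-3   # thrown upwards at 20 m/s

while t < 20.0:              # integrate long enough for the ball to settle while falling
    v += (-g - (k / m) * v) * dt
    t += dt

print("v after 20 s:", round(v, 3))
print("terminal velocity -m*g/k:", -m * g / k)   # -19.6 m/s; the two agree closely
```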
{ "language": "en", "url": "https://physics.stackexchange.com/questions/573606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is curved space able to change an object's velocity (vector)? I don't really understand what is meant by curved space. Why does mass warp space? Why does curved space alter the velocity of a massive object? Normally to change an object's direction you have to apply some force to overcome inertia. So how does curved space do it? What is space anyway? Layman's terms, please.
If you notice, GR always says that spacetime is curved; it does not say that space alone is curved. When a heavy object curves spacetime, not only the space component is affected; the time component is affected as well. We all travel through time: even an absolutely stationary object is moving in the time domain. When an object enters the curvature, its flat time domain enters a stretched time domain, and to an outside observer (outside the curvature) this change appears as the beginning of motion (like a slip on a slippery surface, a rate of change). As you move closer to the heavy object, spacetime is curved more strongly, so the rate of change you see is greater, and hence the illusion of an acceleration due to gravity appears. Let's consider this example: an object is moving at 1 meter per second outside a curvature. Now the object enters a curvature where spacetime is stretched to 2 meters at the start of the curvature, then 3 meters, then 4 meters and so on. Note that the stretched 2, 3 and 4 meters are each equivalent to 1 meter outside the curvature; along with the space, the time is also stretched, i.e. 2 seconds at 2 meters, 3 seconds at 3 meters, and so on. Hence the object moving at 1 m/s appears, after entering the curvature, to accelerate for an outside observer, and hence the illusion of falling. We actually need to apply a force to stop this falling object from moving toward the center of the curvature.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/573709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 4 }
Can a single-slit experiment demonstrate the particle nature of light? Young's two-slit experiment is generally credited for demonstrating the wave nature of light. But what about a similar experiment with just one slit? My understanding is that this will create an interference pattern. Shouldn't that be enough to demonstrate light's wave nature? Perhaps the technology available at the time wasn't good enough to create interference, or perhaps there's a plausible wave explanation?
There is no experiment that sends light through one slit or multiple slits where a particle pattern, (one line on the screen) is the result. The particle nature is only revealed when individual photons are sent through the slit or slits, but even in single photon experiments the wave nature is still present. Evidence of the particle nature is evident when individual impacts are recorded on the screen, yet they accumulate to create an interference pattern. The particle nature is never isolated in a slit experiment. The wave nature is alway present.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/573765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
If the photon is the force-carrier for the electromagnetic force, how does the electric charge of a black hole escape the event horizon? When people speak of the electric charge of a black hole, do we actually mean it affects things outside of the event-horizon or is it just a property it has?
I'll answer the question in the body of your post and not the one in the original subject line (on force carriers), which is a different subject. From our reference frame safely outside the black hole, all the objects that fall into it never make it through the event horizon: they appear to get stuck there in a vanishingly thin layer just outside the EH. This includes electrical charges, which to us appear to reside just outside the EH and radiate their field lines outward into space just as if the black hole itself (inside the EH) were a point charge in space. So any net electric field that a black hole may possess is simply the sum of all the charge that fell onto its event horizon over its lifetime, plus whatever charge it originally had before it collapsed into a black hole. Those charges radiate their field into space and would be detectable to us in the same way any other charged object would be.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/573982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Orbit around irregularly shaped asteroids I'm curious how one would calculate the shape of the orbit around irregular objects (let's call them asteroids). How do you tackle this problem? How do you write down the basic equations? In classical mechanics we only mention very simplified problems, and the force is always central. But what happens if the asteroid is not a sphere but only half of a sphere, or a quarter? My understanding is that the force is now not a central force and that you can only approximately calculate the gravitational potential.
The gravitational potential of an object is $$\varphi(\mathbf{r})=-G\int\frac{\rho(\mathbf{r}’)d^3\mathbf{r}’}{|\mathbf{r}-\mathbf{r}’|}$$ where $\rho$ is its mass density. (This is just summing $-Gdm/r$ for each bit of mass $dm$ in the object.) You may not be able to evaluate this integral analytically, but you can always evaluate it numerically to whatever precision you require. You can also use techniques like multipole expansion to express it as an infinite series in inverse powers of $r$. You keep as many terms as you need.
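As a concrete illustration of evaluating that integral numerically, here is a minimal sketch (numpy, with arbitrary assumed density and size) that chops a uniform half-sphere, the "half of the sphere" asteroid mentioned in the question, into small cells, sums $-G\,\Delta m/|\mathbf r-\mathbf r'|$ over the cells, and compares the result with the point-mass potential $-GM/r$, which it should approach far from the body.

```python
import numpy as np

G, rho, R = 6.674e-11, 3000.0, 1000.0    # SI units; assumed density and radius

# fill a half-sphere (z >= 0) with small cubic cells of side h
h = R / 20
ax = np.arange(-R + h/2, R, h)
X, Y, Z = np.meshgrid(ax, ax, np.arange(h/2, R, h), indexing="ij")
inside = X**2 + Y**2 + Z**2 <= R**2
pts = np.column_stack([X[inside], Y[inside], Z[inside]])
dm = rho * h**3                           # mass of each cell
M = dm * len(pts)

def phi(r_vec):
    d = np.linalg.norm(pts - np.asarray(r_vec), axis=1)
    return -G * dm * np.sum(1.0 / d)

for r in (2*R, 5*R, 20*R):
    print(r / R, phi([r, 0.0, 0.0]), -G * M / r)   # approaches the point-mass value at large r
```

Close to the body the two columns differ noticeably, which is exactly the non-central part of the field that makes orbits around irregular shapes deviate from Keplerian ellipses.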
{ "language": "en", "url": "https://physics.stackexchange.com/questions/574182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What kind of matrix transformations are allowed in general relativity? In special relativity, one can transform a 4-vector as follows: $$ x'=\Lambda x $$ Of course in this case, $\Lambda$ cannot be an arbitrary $4\times 4$ matrix of $\mathbb{M}(4,\mathbb{C})$. For instance, it must invertible. I believe, technically, it must be an element spawned by the basis representation of $O(3,1)$...? For general relativity, if I am to express a transformation $x'=Gx$ where $G$ is a $4\times 4$ matrix, what are the restrictions on $G$ such that one can claim the transformation is consistent with general relativity. Is it the case that since general relativity is not a group, then it follows that any $G$ (or almost any $G$) is permitted? edit: Furthermore, it seems that since GR is non-linear, and the general linear group is the most general matrix group, then it follows that $G$ must be an element of $\mathbb{M}(4,\mathbb{C})$.
GR is invariant under diffeomorphisms, i.e. (up to subtleties) any smooth change of coordinates $$x^\mu \to x'^\mu = x'^\mu(x)$$ This can be non-linear and it definitely includes the Minkowski group as a subgroup.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/574317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Direction of frictional forces on front and back wheels Why are the directions of frictional forces on the front and rear wheels of a moving car in the opposite direction, when the only the front wheels are accelerated (or only the back wheels)? When the car accelerates, the direction of the static friction exerted by the front wheels on the surface is directed backward. But what about the wheels on the back of the car?
The simplest way to figure out what friction is doing is to see what happens when you turn friction off. Assume a car on a frictionless road. With no friction at all and the car stopped, pushing down on the accelerator makes the rear wheels spin clockwise. They spin on the frictionless surface, the front wheels do nothing, and the car goes nowhere. Friction on the rear wheels opposes the spinning, so it must point in the direction the car wants to go. For the rear wheels to roll without slipping, the friction must be static. If we turn on friction at the rear wheels only, the car accelerates forward with the front wheels dragging along the road without spinning. Friction opposes this motion, so it must point opposite to the way the car is going. Again, it must be static friction, as tyres roll on roads. The friction is therefore in opposite directions on the front and rear tyres; this means the torque output from the rear wheels must be greater than a certain minimum value for the car to move.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/574447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Why does the spring constant not depend on the mass of the object attached? It is said that: $$ F = -m\omega^2 x = -kx, $$ so $k=m\omega^2$. Since $k$ is the spring constant it doesn't depend on the mass of the object attached to it, but here $m$ signifies the mass of the object. Then how is $k$ independent of the mass attached?
The unit of $k$ is $\frac{N}{m}=\frac{kg\,m}{s^2}/m=\frac{kg}{s^2}$, so $kx$ has the unit of a force, which is explicitly stated in Hooke's law, $F=ma=kx$. This means you can divide the mass out on both sides of the equation. So Hooke's law doesn't depend on the mass attached to the spring. For more information, see this Wikipedia article.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/574540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Why do electrons flow in the opposite direction to current? I'm 15 and just had a question about physics and electric fields. I've read that electrons flow in the opposite direction to current. Isn't current the flow of negative charge and therefore the flow of electrons? Or are they referring to conventional current?
Looking at it from a layman's perspective: a flow/current is created only when there is a potential difference. By definition/convention, a flow or current always goes from high potential to low potential. E.g. water flows downwards, air flows from a high-pressure area to a low-pressure area, and so on. In the electrical world, this translates to positive (high) and negative (low) voltage. Like the other examples mentioned above, electrical current will flow only when there is a potential difference. Coming to the flow of electrons: by their very nature, electrons tend to move towards the +ve side because they have -ve charge, and hence they flow opposite to the conventional direction of current flow (from +ve to -ve).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/574688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Is it possible to cool down air with warm water? Is there some condition in which the evaporation of water can cool down the air even if the water is hotter than the air?
Certainly; an evaporative cooler (https://en.wikipedia.org/wiki/Evaporative_cooler), also called a water cooler or swamp cooler, cools air by the process of evaporating water. As liquid water evaporates it absorbs latent heat, so the remaining water and the air in contact with it become cooler. So even if the liquid water is warmer than the air, the vapor could still be cooler than the air.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/574798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What will happen if we try to take a voltage reading by keeping it in current mode in a multimeter? There are different modes present in a multimeter. One is the current mode and one the voltage mode, for their respective measurements. What will happen if one tries to take a voltage reading while keeping the meter in current mode?
An ammeter can be thought of as a galvanometer with a tiny shunt resistance in parallel (so that the combination has a very low overall resistance and does not disturb the total current drawn from the source), while a voltmeter can be thought of as a galvanometer with a very high resistance attached in series. Ammeters should thus be connected into a circuit in series in order to measure the current. When you set the multimeter to the "current" mode, it is essentially working as an ammeter, meaning that it behaves as a galvanometer with a very low resistance (say, $r$). If you were to use it to measure voltage, you would connect it (in parallel) across some resistance $R$. The load across the source will now become approximately $r$ (specifically, it will be the equivalent resistance of $r$ and $R$ in parallel, $\frac{r R}{R+r}$, which is very close to $r$), meaning that the circuit will be equivalent to a power source connected to a very low resistance. As a result the circuit will draw a large amount of current from the source (in practical terms, the maximum current the source can provide), and this will be the current measured by the multimeter. Of course, the value of $r$ chosen by the multimeter depends on the actual "current" setting. In some multimeters, I've found that this usually blows the fuse if you're using the "sensitive" current setting. Of course, if you're using a source that can actually provide a current much larger than the multimeter can handle, Bad Things Will Happen. (But hopefully -- if you have to ask this question -- you won't be using such a source!)
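To get a feel for the numbers, here is a small sketch (Python; the source voltage, load resistance, and meter resistance are made-up illustrative values, not properties of any particular multimeter):

V = 9.0      # source voltage (volts), illustrative
R = 1000.0   # load resistance (ohms), illustrative
r = 0.5      # resistance of the meter in "current" mode (ohms), illustrative

i_normal = V / R                  # current with just the load in the circuit
R_parallel = r * R / (r + R)      # meter (current mode) wrongly placed across the load
i_with_meter = V / R_parallel     # current the source must now supply

print(f"without the meter: {i_normal * 1e3:.1f} mA")
print(f"with the meter across the load: {i_with_meter:.1f} A")

A current of this size through a meter fused for a few hundred milliamps is exactly the fuse-blowing scenario described above.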
{ "language": "en", "url": "https://physics.stackexchange.com/questions/574888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is this a solution of Einstein's equations? Take infinite space, $\mathbb{R}^3$. Then cut a sphere (a 3-ball) out of it and discard it. You now have $\mathbb{R}^3\backslash B_3$. Now take each point on the surface of the hole and identify it with its antipodal point on $S_2$. So it is like a self-wormhole. You now have a space with a topological defect in it which seems like it would persist eternally. I wonder if this topology is consistent with Einstein's equations of General Relativity? I suppose the question is, can there be such a solution that is Ricci-flat? Or can it exist in a universe with non-zero cosmological constant?
I think this space is a conical defect of order 2 at the center of $\mathbb{R}^3-\{0\}$. This means that any path traversing an angle $2\pi$ at some fixed radius has length $4\pi r$. This is akin to 2d polar coordinates $ds^2 = 4r^2 d\theta^2 + dr^2$. Indeed the analogous construction is $\mathbb{R}^2$ with the disk $B_2$ removed and the unit circle antipodally identified. The antipodal map is just $\theta \to \theta+\pi$ on the unit circle, and the result is the 2-sheeted radial coordinates whose metric I gave (with $r=1$ the origin of the geometry, and $r<1$ not part of the geometry). Note that the quotienting procedure does not affect the "bulk" of $\mathbb{R}^3$. Since the Einstein equation is local, a flat metric and vacuum can be chosen there. And at the origin we have $\delta$ function curvature and $\delta$ function matter sourcing this curvature (c.f. conical spacetimes / cosmic strings).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/575131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Interpreting the Negative Sign in Simple Harmonic Motion What I Know: $$ \vec F = -k \vec x $$ where the negative sign indicates the Force acts in the opposite direction to the displacement. If we were to take the integral so... $$\int_{x_i}^{x_f} Fdx = -\Delta U$$ What would the negative sign in this instance represent? From my understanding, we cannot produce negative energy...or can we? I have attached the image below for the context of my confusion. Thank you.
The $-kx$ is the force exerted by the spring as it is stretched (or compressed) away from the equilibrium position. Your integral is the work done by the spring (which is negative when it is being stretched). To get the increase in the energy stored in the spring, you need the work done by an external force.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/575237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why does the Standard Model predict zero mass for all vector bosons? This video from 37:33 argues that the Standard Model predicts zero mass for all vector bosons as follows: * *Gauge bosons must have gauge invariance. *For a vector field $A$ define a transformation $\alpha(t,x,y,z)$ which acts on $A$ such that $A\rightarrow A + \partial\alpha$ *The effect on the mass term of the Lagrangian is *$m^2A^2 \rightarrow m^2(A+\partial\alpha)^2 = m^2A^2 + 2m^2A\partial\alpha + m^2(\partial\alpha)^2$ *Ignore $m^2(\partial\alpha)^2$ which is a kinetic energy term not contributing to mass. *For gauge invariance the observables (mass) must be unchanged, hence $A=0$ (no particles), $\partial\alpha=0$ (contradicting the hypothesis), or $m=0$ *Hence, all vector bosons are massless. The issues I have with this argument are: * *No reason given why vector bosons must have gauge invariance in the first place. *The transformation $A \rightarrow A + \partial\alpha$ constrains mass to be zero but a different transformation on $A$ might not constrain mass. Please help me tighten this argument up. Why does the Standard Model predict zero mass for vector bosons?
The massive intermediate vector boson theory is not renormalizable. Also it is not gauge invariant. The Standard Model therefore starts out with massless gauge bosons and later adds the mass using the Higgs mechanism. Gerard 't Hooft then succeeded in renormalizing the theory, for which he obtained the Nobel prize.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/575637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Closed form solution of the normal density of a superfluid for the Bogoliubov spectrum I've been trying to solve the following definite integral $$ \int_0^\infty dx\, x^4\, \frac{e^{\sqrt{x^4+2 x^2}/Tp}}{\left(e^{\sqrt{x^4+2 x^2}/Tp}-1\right)^2}\, ,\quad Tp = \frac{T}{Un} $$ This is the density of the normal part of a superfluid. However, so far I could not find any solution. I'd prefer an exact one but a good approximation would be also nice. (I do have the results for the limiting cases where one sets $\sqrt{x^4+2 x^2}\approx \sqrt{2}x$ and the case $\sqrt{x^4+2 x^2}\approx x^2 + 1$, so I am interested in the exact result or at least an approximation which is higher order than the limiting cases.) (@Alex Trounev :) This is a follow up question of Closed form solution to normal fluid density integral in the two fluid model)
Since the integral depends on only one parameter $Tp$, a good approach is to evaluate it numerically, since all you need is the dependence on this parameter. One could also try to evaluate the limiting cases analytically: $Tp \ll 1$ and $Tp \gg 1$. It is possible that by a clever substitution this integral can be reduced to one of the integrals in Gradshtein&Ryzhik, but there is a high chance that the solution is given in terms of special functions, which is often not better than just looking at the limiting cases. There is a fine difference between an exact result and a result that one can understand/interpret. Update I suggest using the substitution $$y = \sqrt{x^4+2x^2}/(Tp) \Leftrightarrow x^2 = \sqrt{1+y^2(Tp)^2}-1$$ which reduces the integral to $$\int_0^{+\infty}dx\, x^4 \frac{e^{\sqrt{x^4+2x^2}/Tp}}{(e^{\sqrt{x^4+2x^2}/Tp}-1)^2} = \frac{(Tp)^2}{2}\int_0^{+\infty}dy\, \frac{y\left(\sqrt{1+y^2(Tp)^2}-1\right)^{3/2}}{\sqrt{1+y^2(Tp)^2}}\frac{e^y}{(e^y-1)^2} $$ (using $x^4\,dx = (x^2)^{3/2}\,x\,dx$). One can now obtain the required limiting cases by expanding $\sqrt{1+y^2(Tp)^2}$ in powers of $(Tp)^2$ (for small $Tp$) and by expanding $Tp\sqrt{\frac{1}{(Tp)^2}+y^2}$ in powers of $1/(Tp)^2$ (for $Tp \gg 1$). It also makes it easier to see whether the integral is reducible to any of those in the integral tables.
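To illustrate the numerical route, here is a minimal sketch (Python with NumPy/SciPy assumed; the sample values of $Tp$ are arbitrary) that evaluates the original $x$-integral directly:

import numpy as np
from scipy.integrate import quad

def integrand(x, Tp):
    e = np.sqrt(x**4 + 2.0 * x**2) / Tp   # Bogoliubov energy in units of the temperature
    # e^e / (e^e - 1)^2 rewritten as e^{-e} / (1 - e^{-e})^2 to avoid overflow at large x
    return x**4 * np.exp(-e) / (1.0 - np.exp(-e))**2

def normal_density_integral(Tp):
    value, error = quad(integrand, 0.0, np.inf, args=(Tp,))
    return value

for Tp in (0.1, 0.5, 1.0, 5.0):
    print(Tp, normal_density_integral(Tp))

The integrand vanishes at both ends (like $x^2$ for small $x$ and exponentially for large $x$), so quad handles the semi-infinite range without trouble.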
{ "language": "en", "url": "https://physics.stackexchange.com/questions/575873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can a photon increase its frequency by bouncing against a mirror moving toward it? As discussed in preceding questions, photons can lower their frequency after a reflection happens and radiation pressure sets the mirror in motion. I wonder if the opposite can happen, i.e. increasing a photon frequency by transfering momentum to it. And if so, does it have to do with the blue shift of Doppler effect?
Concerning the first question, yes. The reasoning is the same as in that post. Write the conservation of energy and momentum: $$ \begin{align} k + m v &= k^{'} + mv^{'} \\ k + \frac{1}{2} m v^2 &= k^{'} + \frac{1}{2} m v^{'2} \end{align} $$ where unprimed quantities denote the initial momentum of the photon and velocity of the mirror, and primed quantities the values after the reflection. Now part of the kinetic energy of the mirror is transferred to the photon. Concerning the relationship with the Doppler effect, yes, you may also use that interpretation. In the frame of reference moving with the mirror, you can think of the photon as being emitted from a source moving towards the mirror with relative velocity $v$. In this frame, the momentum after reflection has the same magnitude and the opposite direction. Now return to the laboratory frame, and the mirror is a moving emitter. So the Doppler effect occurs twice from this point of view.
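To make the sign conventions explicit (a sketch, not taken from the linked post): let the photon carry momentum $p=\hbar\omega/c$ in the $+x$ direction, and let the mirror of mass $m$ move toward it with speed $u$ (velocity $-u$). After reflection the photon moves in the $-x$ direction with momentum $p'=\hbar\omega'/c$. Conservation of momentum and (nonrelativistic) energy read $$ p - mu = -p' + mv', \qquad pc + \tfrac{1}{2}mu^2 = p'c + \tfrac{1}{2}mv'^2 .$$ Eliminating $v'$ and dropping the recoil term $(p+p')/m$ in the heavy-mirror limit gives $$ p' \simeq p\,\frac{c+u}{c-u}, \qquad \text{i.e.}\quad \omega' \simeq \omega\left(1+\frac{2u}{c}\right)\ \text{for}\ u\ll c ,$$ which is exactly the double (emitter plus re-emitter) Doppler blueshift described above: the photon gains energy and the mirror slows down slightly.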
{ "language": "en", "url": "https://physics.stackexchange.com/questions/576017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Use of a geometric series sum in derivation of Bose-Einstein distribution In the following Wiki derivation of the Bose-Einstein distribution, a geometric sum is used to make the following step $$ \sum_{n=0}^\infty\left (\exp \left (\frac{\mu -\epsilon}{k_B T}\right)\right)^n = \frac{1}{1-\exp\left(\frac{\mu -\epsilon}{k_B T}\right)} $$ but using a geometric series requires the absolute value of the argument to be less than 1. Can I have some assistance in understanding why $$ \exp \left(\frac{\mu -\epsilon}{k_B T}\right ) $$ is necessarily less than 1?
The chemical potential of a Boson gas is always negative or (at worst) zero. (See here for why.) Since the system is considered to be non-interacting (ideal), the Hamiltonian is just the kinetic energy. The lowest energy is thus the $\epsilon_0=0$ state. This means that in general $\epsilon\geq0$, and so $$\exp{\left(\frac{\mu-\epsilon}{k_B T}\right)}\leq\exp{\left(-\frac{\epsilon}{k_B T}\right)}\leq1.$$ (The only case when the sum diverges is when this term is equal to 1, and when this happens the occupation of the lowest energy level diverges, which is in fact what happens in Bose-Einstein condensation.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/576151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why should the particles meet at a common point? I saw a question in my physics book asking for the time when all the three particles (each at the corner of an equilateral triangle and each having constant velocity v along the sides of the triangle) meet at a common point. I can't find the reason why these particles should meet at a common point. What I think is that since they all have the same velocity and each travels the same distance so after some time ($ t = \frac{a}{v}$) ($a$ is the side of the triangle) their corners should be interchanged and this should continue all the time and they should never be at the same point. But it is not the answer and the solution shows that they met at the centroid of the triangle. Why should they follow a curved path? Shouldn't they just go on along the sides of the triangle?
The question says that the particles always point towards each other. This is a very famous question in India in JEE preparation for kinematics. Here is the question: Three particles A, B, and C are situated at the vertices of an equilateral triangle ABC with side $d$ at $t = 0$. Each of the particles moves with a constant speed; A always has its velocity along AB, B along BC, and C along CA. At what time will the particles meet each other? With the associated diagram, which is exactly what you have given. Here the triangle always refers to the triangle made by the particles as its vertices. So in this case obviously the particles can't move along straight paths. Another challenge that my teacher proposed to me was to find the equation of the trajectory of the particles. Good luck!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/576424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Can rockets fly without burning any fuel with the help of gases under extreme pressure only? Why is it necessary to burn the hydrogen fuel coming out of the engine for the lift of rockets? If it is done to create a greater reaction force on the rocket, then why can't we get the same lift by just adjusting the speed of the hydrogen gas going out of the engine? For example, we could release it at a great pressure (and also adjust the size of the nozzle opening) and thus at a greater speed. Is it possible for rockets to fly without burning the fuel and just releasing it with a great force? (I know the rockets are too massive.) How does the ISP of ordinary rocket engines compare with the one in my question? Most of the answers have done the comparison (and a great thanks for that), but help me with the numerical difference in the ISPs. (Compare it using any desired values of the amount of fuel and other required things for taking off.)
"Cold gas" thrusters (i.e. pressurized gas released through a nozzle without combustion) are used for attitude control on some rockets (notably on the Falcon 9 first-stage, for attitude control in the recovery phase), but they have a much lower specific impulse than hydrogen-oxygen combustion. Their advantage is their extreme simplicity in small systems. Cold hydrogen specific impulse: ~270 sec; hydrogen-oxygen combustion: ~440 sec. Nitrogen is more commonly used in cold-gas setups (easier to produce and store, more thrust per volume of tankage) but yields only about 70 sec. Increasing the pressure to get better performance requires more weight in tankage to contain the pressure, so you get a net performance loss.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/576645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 12, "answer_id": 5 }
Particle Hole Symmetry of BdG Hamiltonians It is straightforward to verify that any Hermitian BdG Hamiltonian of the form $$ \mathcal{H} = (c_1^\dagger, c_1, c_2^\dagger, c_2,...) \begin{pmatrix} H_{11} & H_{12} & \cdots \\ H_{21} & H_{22} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} \begin{pmatrix} c_1 \\ c_1^\dagger\\ c_2 \\ c_2^\dagger \\ \vdots \end{pmatrix} $$ with $2\times2$ blocks $H_{ij}$ satisfies the particle-hole symmetry $\sigma^x H_{ij}^* \sigma^x = -H_{ij}$. This is for example also confirmed in this question or this answer. Because of the fermionic relations $\{c_i, c_j\} = \{c_i^\dagger, c_j^\dagger\}=0$ and $\{c_i, c_j^\dagger\}=\delta_{ij}$ the entries of the $2 \times 2$ blocks are not uniquely determined. Consider an $i \neq j$ term of the form $A c_i^\dagger c_j + B c_i c_j + h.c.$ with complex coefficients $A$ and $B$. Then we have $$ 2A c_i^\dagger c_j + h.c. = 2A c_i^\dagger c_j + 2A^\star c_j^\dagger c_i = A c_i^\dagger c_j + A^\star c_j^\dagger c_i - A c_j c_i^\dagger - A^\star c_i c_j^\dagger $$ and $$ 2B c_i c_j + h.c. = 2B c_i c_j + 2B^\star c_j^\dagger c_i^\dagger = B c_i c_j + B^\star c_j^\dagger c_i^\dagger - B c_j c_i - B^\star c_i^\dagger c_j^\dagger $$ and hence get $$ H_{ij} = \begin{bmatrix} A & -B^\star \\ B &-A^\star \end{bmatrix} $$ and $$ H_{ji} = \begin{bmatrix} A^\star & -B \\ B^\star & -A \end{bmatrix}. $$ The same is true for $H_{ii}$, where the relations $c_i^2 = c_i^{\dagger\,2} = 0$ imply that the off-diagonal entries are 0. Now one easily sees that we have the anti-commuting, anti-unitary symmetry $$ \sigma^x H_{ij}^\star \sigma^x = - H_{ij} $$ since conjugation with $\sigma^x$ is simply point-mirroring the matrix around the center. This implies that all superconductors have this PHS, since they are written with such Hamiltonians. Now my question is: what stops me from taking any single-particle Hamiltonian like $$\mathcal{H} = (c_1^\dagger, c_2^\dagger, ...) \begin{pmatrix} H_{11} & H_{12} & \cdots \\ H_{21} & H_{22} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ \vdots \end{pmatrix} $$ with single matrix elements $H_{ij}$, writing it in the first form as a BdG Hamiltonian (without any $c_i c_j$ or $c_i^\dagger c_j^\dagger$ terms) and saying it also has the above PHS? Wouldn't this definition of PHS imply that all Hamiltonians of non-interacting fermions are particle-hole symmetric? edit: Added an explanation of why all Hermitian BdG Hamiltonians are particle-hole symmetric.
Your first statement is a little ambiguous, let me rephrase it. Any Hermitian Hamiltonian of the form $$ \mathcal{H} = (c_1^\dagger, c_1, c_2^\dagger, c_2,...) \begin{pmatrix} H_{11} & H_{12} & \cdots \\ H_{21} & H_{22} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} \begin{pmatrix} c_1 \\ c_1^\dagger\\ c_2 \\ c_2^\dagger \\ \vdots \end{pmatrix} $$ is particle-hole symmetric, and therefore represents a BdG Hamiltonian, if and only if $\sigma^x H_{ij}^* \sigma^x = -H_{ij}$. Analogously, your second Hamiltonian $$\mathcal{H} = (c_1^\dagger, c_2^\dagger, ...) \begin{pmatrix} H_{11} & H_{12} & \cdots \\ H_{21} & H_{22} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ \vdots \end{pmatrix} $$ has particle-hole symmetry, and therefore can be thought of as a BdG Hamiltonian, if and only if $\sigma^x H_{ij}^* \sigma^x = -H_{ij}$. Not all Hermitian Hamiltonians satisfy this condition. For example, if you take one of the blocks to be $$ H_{11}=\begin{pmatrix} E & W \\ W^*& E' \end{pmatrix} $$ you do not have particle-hole symmetry in the general case $E\neq-E'$, but only if $E=-E'$. In summary, not all Hermitian Hamiltonians of non-interacting fermions are particle-hole symmetric. A simple counterexample is $$\mathcal{H} = (c_1^\dagger, c_2^\dagger, ...) \begin{pmatrix} H_{11} & H_{12} \\ H_{12}^\dagger & H_{22} \\ \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ \vdots \end{pmatrix} $$ where $$ H_{ii}=\begin{pmatrix} E_i & W_i \\ W_i^*& E_i' \end{pmatrix} $$ with $E'_i\neq-E_i$. The same argument applies to bosonic Hamiltonians, and to interacting Hamiltonians (in those cases the Hamiltonian takes a somewhat more complicated form).
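As a quick numerical sanity check of the block condition (a minimal Python sketch; the values of $A$, $B$, $E$, $E'$, $W$ are arbitrary):

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)

def is_ph_symmetric(block):
    # particle-hole condition for a 2x2 block: sigma^x H* sigma^x = -H
    return np.allclose(sx @ block.conj() @ sx, -block)

A, B = 0.7 + 0.2j, -1.1 + 0.5j

# off-diagonal BdG block of the form derived in the question: [[A, -B*], [B, -A*]]
H_bdg = np.array([[A, -np.conj(B)], [B, -np.conj(A)]])
print(is_ph_symmetric(H_bdg))      # True

# diagonal block of the counterexample, with E' != -E
E, Ep, W = 1.0, 0.3, 0.4 + 0.1j
H_counter = np.array([[E, W], [np.conj(W), Ep]])
print(is_ph_symmetric(H_counter))  # False

The first block satisfies the condition by construction; the second fails it, matching the statement that not every Hermitian single-particle block is particle-hole symmetric.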
{ "language": "en", "url": "https://physics.stackexchange.com/questions/576749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Acceleration transformation in special relativity I am having a hard time understanding the transformation of acceleration when it is not parallel to the instantaneous displacement of the particle, in particular its direction. Suppose a particle is in projectile motion. Acceleration is downward because of gravity, but I understand "uniform acceleration" depends on the frame, so we just note that it points downward. Let's transform the acceleration in the stationary frame to the instantaneous rest frame of the particle. I would expect the transformed acceleration to also point downward, but according to the transformation given in the wiki, the direction of the resulting acceleration vector is a combination of the acceleration vector of the stationary frame and the instantaneous velocity vector, which does not necessarily mean it accelerates downward. Why does this happen, and if the equations are correct, where is the source of the acceleration in the horizontal direction?
Vertical motion of the projectile means that energy is lost or gained by the projectile. Horizontal motion of the projectile means that the the projectile moves relative to the aforementioned energy. The projectile receives a horizontal impulse when it absorbs some energy that has some horizontal momentum. This is a much better scenario, because gravity is not involved: A vertical rocket is in horizontal motion, rocket motors are off. When rocket motors are turned on, the horizontal motion of the rocket is unchanged, but the horizontal motion of an astronaut that roller skates horizontally inside the rocket changes.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/576816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Plugging a global phase into an operator Cheers to everyone. I've got a serious doubt about the following: consider the annihilation operator $\hat a$. For practical reasons, I sometimes find it useful to redefine it in the following way: $\hat a' =\hat a e^{i \phi}$, with $\phi \in \mathbb R$. If I add a new global phase to each eigenstate of $\hat a^\dagger \hat a$, $| 1 \rangle \rightarrow | 1 \rangle e^{i \phi}, \quad | 2 \rangle \rightarrow | 2 \rangle e^{2 i \phi} \,\dots$, I have a new annihilation operator $\hat a'$ and a new equivalent Hilbert space. Is this $\hat a'$ physically reliable? Consider the time evolution of a state with Hamiltonian $\mathcal H = \alpha \hat a + \alpha^* \hat a^\dagger$, with $\alpha \in \mathbb C$. With the transformation described above, $\alpha$ can be considered to be real without loss of generality. Is this correct?
$$\mathcal H = \alpha \hat a + \alpha^* \hat a^\dagger\\ =\alpha e^{-i\phi}~\hat a' + (\alpha e^{-i\phi})^* \hat {a'}~^\dagger\\ \equiv \alpha' \hat a' + (\alpha ')^* \hat {a'}~^\dagger.$$ Names, by themselves, cannot affect physical relevance. Arbitrary complex number coefficients present differently in the unprimed and primed representations, which amounts to a complex rotation. For given coefficients, fixed, there is a complex rotation to make them real. What's your point?
{ "language": "en", "url": "https://physics.stackexchange.com/questions/577054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is volume an extensive property but molar volume an intensive one? If we take the volume of a system, then it's often defined as the volume of the container and is an extensive property, but it's often said that molar volume is an intensive property. How exactly does dividing the volume by the number of moles turn an extensive property into an intensive one? Reference: 21:36 of this video
The number of moles is proportional to the number of atoms/molecules in the system. Suppose you bring another identical copy of the system, and consider the two copies as a whole. The number of atoms/molecules will double, and so will the number of moles. For usual solids and liquids, the volume will double (assuming external conditions like pressure and temperature remain the same). For gases, the volume does not depend on the mass (the gas will occupy the whole volume of the container). However, if another identical container (having the same mass of gas, and the same $P$, $T$) is brought, the volume of the whole system will double. So, both of these are extensive properties. However, the ratio $\frac{\text{volume}}{\text{no of moles}}$ will remain the same. So it is an intensive property.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/577302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
On the meaning of $dU = \delta w$ for adiabatic processes For an adiabatic transformation between states A and B, $\delta q = 0$ and consequently, from the first law of thermodynamics, $dU = \delta w$. Since $U$ is a state function, its variation should be the same whether the process is reversible or irreversible. The possibility of going via either a reversible or an irreversible path between the two states seems feasible from what I'm reading in "Chemical Thermodynamics: Classical, Statistical and Irreversible" by J. Rajaram, page 66: For the adiabatic expansion of an ideal gas the work done by the gas is equal to the decrease in internal energy, $-\Delta U = -w$. However, if an ideal gas is taken from state A to state B by a reversible path as well as an irreversible path, while the change in internal energy is the same because the initial and final states are the same, the work done against external pressure will not be the same. The work done in the irreversible process must be less than that done in the reversible process. The decrease in internal energy in either case will be $\Delta U = n C_v(T_2 - T_1)$. $dU = \delta w$ seems to indicate that the work should also be the same for the reversible and the irreversible path. But how can the work for the irreversible and reversible processes be the same? We all know that the maximum work can be extracted along the reversible path. Hence even if $\Delta U$ is equal for the two processes, $w$ should not be. So does the equals sign in $dU = \delta w$ hold only for reversible processes? If yes, why? If no, how should I read $dU = \delta w$?
You are assuming that you can take either a reversible or an irreversible adiabatic path and end up in the same final state. On the irreversible path you will generate entropy, and since the path is adiabatic you cannot pass that entropy to the surroundings as heat, so the final state of the irreversible path will have a higher entropy but the same energy. "We all know that the maximum work can be extracted by the reversible path" This is true of cycles rather than paths. Making an analogous statement about paths is difficult for exactly the same reason as here; it is difficult to define an "irreversible version" of a path in a general way.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/577374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
How to derive Euler-Lagrange equation for isochronous curve of Leibniz in terms of $t$, $x(t)$, $\dot{x}(t)$? According to this source, "An isochronous curve of Leibniz is a curve such that if a particle comes down along it by the pull of gravity, the vertical component of the speed is constant, when the gravitational field is supposed to be uniform." Suppose the curve is given by $(x(t),y(t))$. I am attempting to solve for the function $x(t)$ using Lagrangian methods as follows. The vertical component of speed is constant, $\dot{y}(t)=v_y$. Assuming mass is $m=1$, our kinetic energy is $K=\frac{1}{2}(\dot{x}^2+v_y^2)$ and our potential energy is $U=gv_y t$. Then we can formulate a Lagrangian, $$L(t,x(t),\dot{x}(t))=\frac{1}{2}(\dot{x}^2+v_y^2)-gv_y t$$ and we can calculate that $\frac{\partial L}{\partial x}=0$ and $\frac{\partial L}{\partial \dot{x}} = \dot{x}$ so the Euler-Lagrange equation implies $$\frac{d}{dt}\dot{x}(t)=0.$$ But this is obviously incorrect. The correct answer is actually $$x(t) = \frac{2}{3}\sqrt{gv_yt^3}.$$ I also know (from using the Newtonian method) that the solution arises from the fact that $\dot{x}=\frac{gv_y}{\ddot{x}}$, and I suspect the Lagrangian should result in something similar. Where have I erred in this methodology? Is it salvageable?
According to your source: $$\dot{y}^2=2\,g\,x$$ thus: $$T=\frac 12 m\,(\dot{x}^2+2\,g\,x)$$ $$U=m\,g\,x$$ With the Euler-Lagrange equation you obtain $$\ddot{x}=0$$ $\Rightarrow$ $$x(t)=v_0\,t$$ $$\dot{y}=\sqrt{2\,g\,v_0\,t}~,\quad y(t)=\frac 23\,{t}^{\frac 32}\sqrt {2}\sqrt {g}\sqrt {{v_0}}$$ so you get the same result as given in your source: $$y^2={\frac {8}{9}}\,{\frac {g{x}^{3}}{{{v_0}}^{2}}}$$
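If it helps, the final relations can be checked symbolically (a small Python/SymPy sketch; it only verifies the formulas quoted above):

import sympy as sp

t, g, v0 = sp.symbols('t g v0', positive=True)

x = v0 * t
y = sp.Rational(2, 3) * sp.sqrt(2 * g * v0) * t**sp.Rational(3, 2)

# the defining property used above: ydot^2 = 2 g x
print(sp.simplify(sp.diff(y, t)**2 - 2 * g * x))                      # 0

# eliminate t to recover y^2 = 8 g x^3 / (9 v0^2)
X = sp.Symbol('X', positive=True)
y_of_X = y.subs(t, X / v0)
print(sp.simplify(y_of_X**2 - sp.Rational(8, 9) * g * X**3 / v0**2))  # 0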
{ "language": "en", "url": "https://physics.stackexchange.com/questions/577458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why does gravity act at the centre? Why does gravity act at the centre of earth and how does that happen?
Every single particle in the Earth exerts a gravitational attraction on an object, and all these contributions add up or cancel, with the net force pointing towards the center. Actually, at the center, all these effects cancel out, and you feel weightless. Correct. If you split the earth up into spherical shells, then the gravity from the shells "above" you cancels out, and you only feel the shells "below" you. When you are in the middle there is nothing "below" you. Would you be weightless at the center of the Earth? This can be shown at the level of the Christoffel symbols and the radial four-acceleration, where the vector always points towards the center of the Earth, and its magnitude is zero only exactly at the center. When $r = 0$ the Christoffel symbol $\Gamma_{tt}^r$ is zero and that means the radial four-acceleration is zero and that means you're weightless. What is the general relativity explanation for why objects at the center of the Earth are weightless? So the answer to your question is that for Earth, the constituents exert gravitational pull from all directions, and the net of these points towards the center.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/577688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Peaks in Co-60 gamma spectrum The following plot shows data collected from a Co-60 coincidence experiment. The detectors used were NaI(T) scintillation detectors. One detector was gated around the 1.33 MeV peak and the second detector collected the data shown below. I have been trying to figure out what the two peaks are around 200keV.
You have already identified the peak at about 195 keV as backscatter peak. Apparently, you have used a NaI(Tl) detector. Photoelectric absorption by iodine of NaI results in a characteristic x-ray with 28 keV. If this x-ray exits the detector crystal, it results in a secondary peak 28 keV below the corresponding photopeak. Since 195 keV minus 28 keV is 167 keV, we may conclude that your peak at 167 keV is the iodine x-ray escape peak corresponding to the backscatter peak. You cannot see the iodine x-ray escape peak corresponding to the photopeak at 1.17323 MeV because it is hidden in the spread of the photopeak.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/577859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the centripetal force when instead of a mass point we have a physical rotating body? I was wondering what is the centripetal force of a body rotating in a circular motion. I know that the centripetal force of a point mass is $mv^2/r$. I only have done an introductory physics class so I can not find the answer.
I was wondering what is the centripetal force of a body rotating in a circular motion. It does not only apply to point masses. You can apply it to the center of mass of a rotating body. Refer to the figure below of a figure skating pair. The woman skater is moving in a circular path around the male skater. The center of rotation ($P$) of the male skater is shown. The man in this case acts like the centripetal force: he exerts an inward force towards himself which keeps the woman moving in a circle about him. In the non-inertial reference frame of the rotating man, the woman acts like the centrifugal force, exerting a force on the man that attempts to pull him away from his placement (the center of rotation) towards her. The centrifugal force is a pseudo force required only in the non-inertial reference frame, and the force she exerts on the man is due to her inertia (she would just go straight if there weren't a centripetal force acting on her, per Newton's first law). For the purpose of applying the centripetal force equation $F=mv^{2}/R$, we can take the rotating body's mass to be concentrated at its center of mass $M$, with the radius of rotation $R$ (the distance from $P$ to $M$) as shown in the figure. The centripetal acceleration is then $v^{2}/R$. Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/577965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Do electrons experience wind resistance? Electrons have a mass, as a particle with mass, they experience most effects of objects with a mass. So do they experience any sort of wind resistance? Or is that simply explained by their cross section interaction probability with a given particle?
If one imagines a macroscopic object moving through air, one can very well approximate air resistance as the action of a continuous fluid. At low speeds, this situation can be described by viscous friction, which is a force proportional to the speed of the object, with direction opposite to its direction of motion (I neglect here the effect of turbulence). If you have electrons moving in a medium, for example cosmic rays moving through air, or electrons moving in a solid or in a liquid under the effect of an electric potential, the above approximations cannot be valid. One cannot approximate the action of the medium on the electron as the action of a continuous entity. From the point of view of an electron, air, fluids, or a solid are made of individual atoms. Therefore, one has to consider the statistical average of all possible collisions (scattering) of the electron with atoms. The situation is better described in terms of individual collisions, Brownian motion in fluids, or the Drude theory of solids, depending on the context. All these approaches consider the statistically averaged effect of the medium on the electron trajectory. In practice, the electron will move freely for an average distance $l$, which is called the mean free path. Surprisingly, or maybe not, in the Drude theory of electrons in solids, the average effect of collisions is proportional to the electron velocity, analogously to the case of viscous friction.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/578078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 0 }
Why do electric field lines curve at the edges of a uniform electric field? I see a lot of images, including one in my textbook, like this one, where at the ends of a uniform field, field lines curve. However, I know that field lines are perpendicular to the surface. The only case I see them curving is when drawing field lines to connect two points which aren't collinear (like with charged sphere or opposite charges) and each point of the rod is collinear to its opposite pair, so why are they curved here?
This is one of those questions where you just have to see it. Here is a field-line drawing of two charges: red is a positive charge and blue is negative. Now for 6 charges, and finally for 40 charges. Here is the Mathematica code for anyone interested:

range = 1.4;
nCharges = 20;
xSeparation = .5;
e[r_, r0_] := (r - r0)/Norm[r - r0]^3
chargeY[n_] := If[nCharges == 1, 0, (n - 1)/(nCharges - 1) - .5];
Show[
 StreamPlot[
  Sum[e[{x, y}, {-xSeparation, chargeY[n]}], {n, 1, nCharges}] -
   Sum[e[{x, y}, {xSeparation, chargeY[n]}], {n, 1, nCharges}],
  {x, -range, range}, {y, -range, range}],
 ListPlot[Table[{-xSeparation, chargeY[n]}, {n, 1, nCharges}],
  PlotStyle -> {Red, PointSize[.03]}],
 ListPlot[Table[{xSeparation, chargeY[n]}, {n, 1, nCharges}],
  PlotStyle -> {Blue, PointSize[.03]}]
 ]
{ "language": "en", "url": "https://physics.stackexchange.com/questions/578146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 4, "answer_id": 3 }
Can we Predict the Trajectory of a hypersonic missile? I read in a newspaper that we can't predict the trajectory of a hypersonic missile and that this property renders the missile undetectable. However, what I could not understand is why can't we predict it's trajectory? What factors do we have to look at for predicting the trajectory of such high speed missiles? Is this feature associated with its speed? I know that there would be forces like the thrust from propulsion, gravity, and the drag force. Is there anything else affecting the trajectory?
Of course the missile's trajectory is predictable if you know everything about its thrust, position, and momentum at all times since its launch. The problem is that someone who launches a hypersonic missile generally does not inform the opposing missile defense systems, "Hey, I launched a missile at this time and place, and here is its full thrust profile and flight plan!" Generally, that is information that someone launching a missile does not want their target to know. So these parameters must be measured by the missile defense system. The problem with hypersonic missiles is that these measurements are hard to make, for three main reasons: * *Unlike an ICBM, which travels a roughly parabolic trajectory, a hypersonic missile can change course mid-flight. This means that information that was valid a few seconds ago may no longer be valid now. *Unlike an ICBM, which goes briefly into space, a hypersonic missile spends its time flying at low altitude, below the radar horizon, so accurate measurements aren't possible until it's much closer. *Unlike an ICBM, which generally spends some time in a near vacuum where it's the only radar-reflecting object around, a hypersonic missile heats the surrounding atmosphere into plasma. Plasma absorbs radio rather than reflecting it, which makes it much harder to bounce a radar pulse off of it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/578283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does doubling density (keeping average gas molecule speed the same), increase temperature recorded on a thermometer? At the end of the day what the thermometer is measuring as temperature is energy of the air molecules (which could come in the form of kinetic energy). Now, imagine the following scenario : * *Take a box with just one gas molecule (at speed x). It goes and hits the mercury of the thermometer. Thermometer will probably not record its proper temperature. *Now fill the box with a million molecules (at the same speed x), and the thermometer records right some temperature. *Now double the density (but keep the speed of individual air molecules the same). Fill the box with 2 million molecules, but keep the average speed of the molecules the same. Will it record the same temperature? My personal intuition is that the temperature recorded should increase, since more molecules are giving energy to that thermometer in the same amount of time.
Assuming an ideal gas, if you are keeping the average speed of the molecules the same then you are holding the temperature of the gas constant. Assuming the container keeps the same volume, by the ideal gas law it must be that the pressure of the gas increases as you add more molecules. Yes, there are more molecules hitting the thermometer, but there are also more molecules available to carry energy away from it again. What I mean by this is that energy can be transferred to the thermometer at a higher rate due to collisions, but collisions at the same increased rate will then transfer that energy from the thermometer back to the gas (and the same vice versa). This is how thermal equilibrium works. So no, just because you have more gas at the same temperature doesn't mean you will record a higher temperature. More gas will just mean smaller fluctuations about the same temperature. In addressing points made in the comments, technically the temperature of the thermometer will be somewhere between the starting temperature of the thermometer and the starting temperature of the gas, and the final temperature of the thermometer will approach the starting temperature of the gas as more gas is let in. However, there will come a point where additional gas will make the final temperature indistinguishable from the initial gas temperature, and ideally the thermometer shouldn't influence the temperature of the gas.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/578584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 1 }
Can we represent 4D graphically? Actually I know that axes are always perpendicular, but after three axes we cannot draw any other axis that is perpendicular to all the other three. Can anyone say how we can draw another axis which is mutually perpendicular to the other three?
The truth is, if you want 4 dimensions that are orthogonal, they do not even need to be spatial. For example you can use color to add an extra dimension. Or, as another example, you can use time. There are many dimensions that we can see. You can even make extra spatial dimensions using local-dimension techniques. For example, if you draw a grid of clocks you can have 2 usual spatial coordinates (column and row number), plus 2 or even 3 angular coordinates (each clock's hour, minute, and second hands). This kind of creativity is used in visualization techniques.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/578728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Is there anything such as gravitational field lines in GR similar to the electric/magnetic field lines in electromagnetism? I sometimes mistake space-time curvature for gravitational field lines. Do geodesics in some way represent $g$-field lines? Why is it not traditional to show $g$-field lines around a massive object in general relativity the same way we show $E$ or $B$ field lines around an electrical charge or a magnet in electromagnetics?
Probably not just as you ask, but there are interesting ideas for visulazations in Visualizing spacetime curvature via frame-drag vortexes and tidal tendexes. II. Stationary black holes by David A. Nichols, Robert Owen, Fan Zhang, Aaron Zimmerman, Jeandrew Brink, Yanbei Chen, Jeffrey D. Kaplan, Geoffrey Lovelace, Keith D. Matthews, Mark A. Scheel, Kip S. Thorne.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/578862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Will a plastic feel less heavy when I put it in a bucket of water and carry it? Suppose I'm carrying a bucket of water in one hand and a piece of plastic in the other, and then I decide to put the plastic in the bucket of water (it floats). Will it feel less heavy in the second case? I think it will feel the same because its mass adds to the bucket's mass and will be pulled by gravity to the same extent. But somehow I can't get my mind off the fact that its weight is already balanced by the up-thrust. Is there a simple way to explain how this works? It would be clearer if you helped me with some free body diagrams or an analogy or something simple.
As Bob D says, it will weigh the same unless some of the water spills out. But it's likely to feel somewhat heavier because you do better with weights balanced in both hands than all the weight in one hand. And if you carry one bucket with both hands that will be awkward, unless it's an unusually small bucket.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/578995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 7, "answer_id": 4 }
Deriving ideal gas law from Boyle and Charles My textbook states Notice that since $PV = \text{constant}$ and $\frac{V}{T} = \text{constant}$ for a given quantity of gas, then $\frac{PV}{T}$ should also be a constant. I tried to prove this, but no success: $$PV = a$$ $$\frac{V}{T} = b$$   $$\frac{PV^2}{T} = ab$$ $$PT = \frac{a}{b}$$ But I am not able to cook up $\frac{PV}{T}$... Any help?
You can't derive it like that because the proportionality relations hold only when the third parameter is kept constant. However, you can derive the ideal gas law by noting that in the limit of low pressure we get: $$ \lim_{ p \to 0 } p \overline{V} = f(T)$$ So, the limit of the product as the pressure drops to zero is a unique function $ f(T)$ for all gases, independent of the substance used. We can use this to define the linear kelvin scale. Using the triple point of water and absolute zero as our reference, $$ f(T) = \frac{f(T_{trip-point})}{273.16\,\text{K}} T$$ where $f(T_{trip-point})$ is the value of the limit at the triple point. Using this and our first equation, we can write $$ \lim_{ p \to 0} p \overline{V} = \frac{f(T_{trip-point})}{273.16\,\text{K}} T$$ and now the universal gas constant is defined as follows: $$ R = \frac{f(T_{trip-point})}{273.16\,\text{K}}$$ which leads us to: $$ \lim_{ p \to 0} p \overline{V} = RT$$ Now, we call an ideal gas one which obeys the above relation even away from this limit: $$ p \overline{V} = RT$$ Reference: from 10:46 of this video
{ "language": "en", "url": "https://physics.stackexchange.com/questions/579140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Are there multiple Rydberg constants? I'm sorry if this is a trivial question, I'm trying to understand the Rydberg formula and unsure if there are different values for the Rydberg constant? According to Wikipedia's articles about Hydrogen spectral series, Rydberg formula and Rydberg constant, there are two different Rydberg constants: * *$R_{\infty} = 1.09737 \times 10^7 m^{-1}$ , for heavy metals *$R_{H} = 1.09678 \times 10^7 m^{-1}$ , for hydrogen Unfortunately, many other sites like Brilliant and CODATA treated Rydberg constant as a single value: * *$R = R_{\infty} = 1.09737 \times 10^7 m^{-1}$ Confusingly, my textbook also treated Rydberg constant as a single value, but says: * *$R_{H} = 1.09737 \times 10^7 m^{-1}$ So are there different Rydberg constants for heavy metals and hydrogen, or is it an incorrect/outdated definition? What is the correct way to understand the Rydberg constant?
http://hyperphysics.phy-astr.gsu.edu/hbase/hyde.html (a quote from here, one of the sources of the wiki article): "The reason for the variation of R is that for hydrogen the mass of the orbiting electron is not negligible compared to the proton at the high accuracy at which spectral measurement is done. So the reduced mass of the electron is needed. But for heavier elements the movement of the nucleus can be neglected." Because the nucleus of a hydrogen atom is so light, you have to take into account the way it moves, but when you have heavier nuclei the nuclear motion becomes negligible. That's the difference.
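Quantitatively, the two values quoted in the question differ by just this reduced-mass factor, $R_H = R_\infty/(1 + m_e/m_p)$; a quick check (Python, using the standard electron-to-proton mass ratio):

R_inf = 1.09737e7            # m^-1, Rydberg constant for an infinitely heavy nucleus
me_over_mp = 1 / 1836.15     # electron-to-proton mass ratio

R_H = R_inf / (1 + me_over_mp)   # reduced-mass correction for hydrogen
print(f"{R_H:.6e} m^-1")         # ~1.09677e7, matching the hydrogen value quoted above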
{ "language": "en", "url": "https://physics.stackexchange.com/questions/579262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Radiative transfer equation for a three-level system I am trying to derive the radiative transfer equation for a three-level system, which is supposed to be given by: $\frac{dI(\omega,x)}{dx}+N [\alpha\rho_{11}-\beta( \rho_{22}+\rho_{33})]I(\omega,x)=\beta(\rho_{22}+\rho_{33})$, where $I(\omega,x)$ is the specific intensity of the radiation emitted in a $|3\rangle \to |1\rangle$ transition, $\rho$ is the density matrix of the system, $N$ is the atom density, $\alpha$ is the absorption cross-section and $\beta$ the emission cross-section. From Monaco 1998 (https://doi.org/10.1080/00411459808205646) I found the corresponding equation for a two-level system. Can anyone help with generalising this to the three-level case to arrive at the above equation?
The derivation is easy but a bit long, so I'm going to link a clear resource and just state the steps. Step 1: Write the total Hamiltonian of the system. Step 2: Perform the rotating wave approximation and go to the co-rotating frame to remove the time dependence of the problem; your Hamiltonian is now time-independent (see the sketch below for one concrete example of such a three-level Hamiltonian). Step 3: Calculate using the master equation, whose coherent part is the von Neumann equation $\dot{\rho} = -\frac{i}{\hbar}[H, \rho]$ (decay terms are added if spontaneous emission matters). This will give you the differential equations you're looking for.
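As a concrete illustration of step 3, here is a minimal numerical sketch (plain Python/NumPy; the level scheme, Rabi frequencies and detunings are invented for illustration and are not taken from the paper). It builds a generic three-level Hamiltonian in the rotating frame and integrates the coherent part of the master equation, $\dot\rho = -i[H,\rho]$ with $\hbar = 1$:

import numpy as np

# illustrative parameters (hbar = 1), not taken from any specific system
Omega1, Omega2 = 1.0, 0.6      # Rabi frequencies of the two couplings
Delta1, Delta2 = 0.2, -0.1     # detunings

# generic three-level Hamiltonian in the co-rotating frame (Hermitian by construction)
H = np.array([[0.0,         Omega1 / 2, 0.0       ],
              [Omega1 / 2, -Delta1,     Omega2 / 2],
              [0.0,         Omega2 / 2, -Delta2   ]], dtype=complex)

rho = np.zeros((3, 3), dtype=complex)
rho[0, 0] = 1.0                # all population starts in |1>

def drho_dt(r):
    return -1j * (H @ r - r @ H)   # von Neumann equation, coherent evolution only

dt, steps = 1e-3, 20000
for _ in range(steps):         # simple fixed-step 4th-order Runge-Kutta
    k1 = drho_dt(rho)
    k2 = drho_dt(rho + 0.5 * dt * k1)
    k3 = drho_dt(rho + 0.5 * dt * k2)
    k4 = drho_dt(rho + dt * k3)
    rho = rho + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

print(np.real(np.diag(rho)))   # populations rho_11, rho_22, rho_33 at t = 20

The populations $\rho_{22}$ and $\rho_{33}$ computed this way are the quantities that appear in the transfer equation quoted in the question.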
{ "language": "en", "url": "https://physics.stackexchange.com/questions/579342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How much energy is transferred to a human hit by lightning? Wikipedia tells me that a bolt of lightning releases roughly 1 GJ of energy, but I'm guessing that's along the entire length of the bolt and that most of it is dissipated as heat and light to the surrounding atmosphere. Don't know much about the physics behind this, but assuming the bolt is 20km long that's about 50 KJ per meter, or 90 KJ for an average human. Or am I WAY off on my assumptions here?
Energy transferred by the current is given by: $$ E = I Q R $$ where $I$ is the current in amperes, $Q$ is the transferred charge in coulombs, and $R$ is the conductor resistance in ohms. A typical lightning bolt carries a current of about $30~000~\text{A}$ and transfers a charge of $15~\text{C}$. If the lightning passes through internal body structures, then one needs to account for the internal electrical resistance of the body, which is about $1000~\Omega$. Putting these into the equation gives about $\bf{450~\text{MJ}}$ of transferred electrical energy. EDIT The above figures fit the situation when a cloud discharges electrons to the ground, i.e. a so-called negative lightning strike. But 5% of lightning strikes are positive ones, where a cloud discharges positive charge to the ground, i.e. electrons move upwards from ground to cloud. Large bolts of positive lightning can carry up to $120~000~\text{A}$ of current and $350~\text{C}$ of charge. In this case one gets a positive-strike energy of about $\bf{42~\text{GJ}}$. Thus positive strikes are a lot more dangerous than negative ones.
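For reference, the two estimates above in a couple of lines (Python, using exactly the values quoted in this answer):

def bolt_energy(current_A, charge_C, resistance_ohm):
    # E = I * Q * R: current times transferred charge times body resistance
    return current_A * charge_C * resistance_ohm

print(bolt_energy(30_000, 15, 1000) / 1e6, "MJ")    # typical negative strike: 450.0 MJ
print(bolt_energy(120_000, 350, 1000) / 1e9, "GJ")  # large positive strike: 42.0 GJ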
{ "language": "en", "url": "https://physics.stackexchange.com/questions/579852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }