proba | text
---|---
0.998871 |
Build and Release Management task to trigger an Azure DevOps release or build pipeline.
Trigger Azure DevOps Pipeline is an extension for triggering an Azure DevOps build or release pipeline.
Depending on your choice in the task, it will trigger either a build or a release pipeline.
Personal Access Token: The personal access token.
Azure DevOps service connection: The service connection that you have configured.
Project: The project where the pipeline resides.
Pipeline type: The type of pipeline (build / release).
Release Definition: The definition to trigger.
Release description: Description for the release. Can also be empty.
Build Number: A specific build number to use; it sets the build number for the primary artifact. When left empty, the latest build is used.
Build Definition: The build definition to trigger.
Branch: The name of the branch to trigger. When left empty, the default configured branch is used.
If you like the extension, please leave a review. File an issue when you have suggestions or if you experience problems with the extension.
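Under the hood, a task like this ends up calling the Azure DevOps REST API. The sketch below (Python, not part of the extension itself) shows how the "Builds - Queue" request could be assembled from the same inputs; the organization, project, and definition values are placeholders.

```python
import base64

# Sketch of the REST request such a task ultimately sends ("Builds - Queue"
# endpoint of the Azure DevOps Build API). The organization, project, and
# definition id below are placeholders, not values from the extension.
API_VERSION = "6.0"

def auth_header(personal_access_token):
    """PAT auth is HTTP Basic with an empty username."""
    token = base64.b64encode(f":{personal_access_token}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def build_queue_request(organization, project, definition_id, branch=""):
    """Return (url, payload) for queuing a build.

    Leaving `branch` empty omits sourceBranch, so the definition's default
    branch is used, mirroring the task's 'Branch' input.
    """
    url = (f"https://dev.azure.com/{organization}/{project}"
           f"/_apis/build/builds?api-version={API_VERSION}")
    payload = {"definition": {"id": definition_id}}
    if branch:
        payload["sourceBranch"] = f"refs/heads/{branch}"
    return url, payload

url, payload = build_queue_request("myorg", "myproject", 42, branch="main")
# A real trigger would POST `payload` to `url` with auth_header(pat).
```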
|
0.935387 |
As with any addictive drug, a host of unpleasant symptoms come along with withdrawal from cocaine. Cocaine withdrawal does differ from withdrawal from substances like alcohol or heroin in that there are few physical symptoms. The symptoms primarily affect a person's psychological well-being, placing the individual at increased risk of depression or suicide during the withdrawal process. For this reason, it's essential to go through withdrawal under the guidance of a professional and successful addiction recovery program.
The symptoms of cocaine withdrawal manifest primarily in the mind, often in a severe reversal from the high initially sought through drug use. An individual may feel agitated, depressed, exhausted, generally uncomfortable, or some combination thereof. The addict's appetite will return, and they may have unpleasant dreams. If the individual uses cocaine again, the high itself may bring on the withdrawal symptoms of cocaine, producing feelings of fear and paranoia rather than the emotional high previously experienced. It should be noted that there are a few cocaine withdrawal symptoms that are physical in nature. Tremors, chills, and muscle pain can all appear and can aggravate any existing psychological symptoms. Individuals may find it difficult or impossible to feel arousal or pleasure and may experience unusual fatigue after any kind of physical activity.
What Is the Withdrawal Process?
The good news is that cocaine withdrawal symptoms typically abate within two weeks of sobriety. Users will have already experienced a milder version of cocaine withdrawal at the end of their first high. At the beginning of withdrawal from cocaine, there is the initial "crash," when the artificial high fades and is replaced by feelings of extreme depression, irritability, and lethargy. The second step in the process is to manage cravings over the following three months until they diminish in strength or disappear. Even as the withdrawal symptoms of cocaine fade, the craving for the drug may resurface. It is during this phase that individuals are at the highest risk of relapse, a risk best managed under the care and supervision of a professional counselor or addiction recovery program.
Cocaine withdrawal symptoms affect the mind and mood of the individual, creating an environment that can be extremely difficult to navigate alone. Previous users may be tempted to return to the drug to alleviate the symptoms of cocaine withdrawal. This can result in accidental overdoses, which can be fatal. Individuals may also seek to treat the withdrawal symptoms of cocaine with other drugs, such as alcohol or sedatives. The risk with this option is that the individual simply transfers dependence on cocaine to the new drug and a new cycle of addiction begins.
Cocaine withdrawal can be dangerous if you are attempting to deal with these symptoms without support. At BetterAddictionCare, we will work to help find the best program for you through our nationwide recovery network. Withdrawal may not be comfortable, but it can be safe and individualized to meet specific needs. Fill out our contact form and we'll help you find and speak with a counselor near you.
|
0.994119 |
Danny Masterson is an actor best known for his role on That '70s Show.
When did Danny Masterson join Scientology?
Danny is a second-generation Scientologist; his parents, Peter and Carol Masterson, are also Scientologists, as is Danny's brother Chris.
What level has Danny Masterson reached in Scientology?
According to Scientology service completions lists, he has mostly done introductory level services. He has attested to Grade 3 Expanded, done the Scientology Drug Rundown, and completed the Happiness Rundown.
Celebrity 270, 1993, mentions Danny and his brother Chris working on the film Beethoven II and goes on to say, "Both Chris and Danny have completed the Key to Life and Life Orientation Courses at CC Int, and have been getting steady work since then!"
Celebrity 288, 1995, also mentions that both of them have done the Key to Life and Life Orientation Course.
Has Danny Masterson received any special recognition in Scientology?
Does Danny Masterson promote Scientology?
As noted in an insert to Celebrity 317, 1999, he spoke at the Career Success Weekend at Scientology's Celebrity Centre, along with Marissa Ribisi and Nancy Cartwright. (According to Celebrity 338, the career seminars include booklets on Scientology teachings.) He has also appeared at benefit events at Scientology's Celebrity Centre.
Is Danny Masterson involved with any Scientology front groups?
Yes, he is a supporter of CCHR. According to Scientology press releases, he attended the 36th Anniversary Celebration of CCHR and the opening of the controversial "Psychiatry: Industry of Death" Museum. In addition, according to Celebrity 350, he "protested the psychiatric drugging of children at 14th Annual MuchMusic Video Awards".
|
0.944932 |
Visiting two World Cultural Heritage sites: Itsukushima Shrine and the A-bomb Dome in Hiroshima city.
Hiroshima is a central city in western Japan. It has been an important hub for both land and sea transportation since days of old. There are many sightseeing spots, including shrines, temples, castle ruins, and historical buildings. Among the older remnants are ancient burial mounds, along with traces of the ruling Heike clan of the Heian era and, later, the Mori clan. Hiroshima was the setting where many important historical events played out. Modern Hiroshima has been an important strategic point for naval operations and has become a base for heavy industry in western Japan.
|
0.922579 |
Physicians explain the benefits they find in including a patient’s family and friends in conversations.
An essential feature of a systematic psychiatric evaluation is historical information obtained from multiple sources – individuals who know the patient or some aspect of the patient well.
Strong social support is imperative to the patient's outcome.
In sleep clinic, the person in the room is typically the "witness" and is an important historian, so hearing their input is very important.
I think it’s important to finish the encounter with an open discussion eliciting input from the patient and their loved one.
I have found that including family members in the conversation leads to more shared decision making.
An essential feature of a systematic psychiatric evaluation is historical information obtained from multiple sources – individuals (e.g., family, friends, other clinicians) who know the patient or some aspect of the patient well.
The primary purpose of talking to other people who know the patient is to expand on and enhance the history that the patient provides, as the patient’s point of view can be limited and/or skewed for various reasons, including psychiatric illness. For example, a depressed mood could affect the patient’s recounting of their life story and make aspects of the history inaccurate; it could lead the patient to distort memories of past feelings and events.
A clinician needs to obtain information from a variety of sources in order to understand how others see the patient and the patient’s history. This enables the clinician to form a more complete understanding of the patient and their condition.
From my decades of experience evaluating patients previously assessed by other clinicians, I have found that most other mental health clinicians do not gather information from additional sources. Yet, they should.
I encourage patients to find a clinician who includes a friend, family member, and/or another clinician in their psychiatric evaluation, even if the patient is an adult seeking individual outpatient psychotherapy.
In fact, being in individual psychotherapy with a clinician who does not understand the patient in the context of their history hampers the psychotherapy from the outset and represents a risk to the patient and their mental well-being.
In my experience, most patients and their family members are receptive to and appreciative of these discussions. Early on in my career, I did not always make these contacts as part of the initial evaluation, but, over time, I have come to appreciate their value in understanding the nature and origin of my patients’ troubles.
I always confirm that the patient is happy with their family and friends staying in the room.
Next, I always love receiving collateral information on a disease or condition, so, with the patient’s blessing, I ask the family members for additional insight.
Finally, I make sure everyone’s questions and comments are answered and addressed.
Most important for me, I make sure the family and friends recognize they are as important to the patient’s disease management as any of my treatments. Strong social support is imperative to the patient’s outcome.
In sleep clinic, the person in the room is typically the “witness” and is an important historian, so hearing their input is very important.
Whenever a patient comes with family members or a friend, I always try to take advantage of the situation. After all, it’s an opportunity to gather information about the patient’s habits that I wouldn’t otherwise be able to get.
At the beginning, I always try to keep my attention on the patient while acknowledging the importance of the companion. I also try to use verbal and non-verbal communication to set some boundaries during the clinical encounter.
After I’ve taken enough time to assess the nature of their relationship, I try to include the other person and ask their opinion, if it’s appropriate. I think it’s important to finish the encounter with an open discussion eliciting input from the patient and their loved one.
My patients frequently come to their visits with family and/or friends. I like to direct specific questions to the family member or friend to include them in the conversation. Frequently, the family member knows more about the patient’s health than the patient! I have found that including family members in the conversation leads to more shared decision making.
|
0.991561 |
Ping Yang, Jan David Brehm, Juha Leppäkangas, Lingzhen Guo, Michael Marthaler, Isabella Boventer, Alexander Stehli, Tim Wolz, Alexey V. Ustinov, Martin Weides, et al.
We demonstrate the local control of up to eight two-level systems interacting strongly with a microwave cavity. Following calibration, the frequency of each individual two-level system (qubit) is tunable without influencing the others. Bringing the qubits one by one on resonance with the cavity, we observe the collective coupling strength of the qubit ensemble. The splitting scales with the square root of the number of qubits, the hallmark of the Tavis-Cummings model. The local control circuitry causes a bypass shunting the resonator, and a Fano interference in the microwave readout, whose contribution can be calibrated away to recover the pure cavity spectrum. The simulator's attainable size of dressed states is limited by reduced signal visibility and, if uncalibrated, by off-resonance shifts of sub-components. Our work demonstrates control and readout of a quantum-coherent mesoscopic multi-qubit system of intermediate scale under conditions of noise.
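The square-root scaling of the collective coupling can be reproduced with a toy numerical check: on resonance, the single-excitation block of the Tavis-Cummings Hamiltonian is a small matrix whose extremal eigenvalues are ±g√N, so the polariton splitting is 2g√N. A minimal sketch, with an illustrative coupling g rather than the experimental value:

```python
import numpy as np

# Single-excitation Tavis-Cummings block on resonance (all bare energies set
# to zero): basis = {one cavity photon} ∪ {qubit i excited, i = 1..N}.
# Each qubit couples to the cavity with strength g (illustrative value).
def polariton_splitting(n_qubits, g=1.0):
    dim = n_qubits + 1
    h = np.zeros((dim, dim))
    h[0, 1:] = g          # cavity photon <-> each excited qubit
    h[1:, 0] = g
    evals = np.linalg.eigvalsh(h)   # sorted ascending
    return evals[-1] - evals[0]     # distance between the two polaritons

for n in (1, 4, 8):
    # hallmark sqrt(N) scaling: splitting equals 2 * g * sqrt(N)
    assert np.isclose(polariton_splitting(n, g=1.0), 2 * np.sqrt(n))
```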
Artificial neural networks are revolutionizing science. While the most prevalent technique involves supervised training on queries with a known correct answer, more advanced challenges often require discovering answers autonomously. In reinforcement learning, control strategies are improved according to a reward function. The power of this approach has been highlighted by spectacular recent successes, such as playing Go. So far, it has remained an open question whether neural-network-based reinforcement learning can be successfully applied in physics. Here, we show how to use this method for finding quantum feedback schemes, where a network-based "agent" interacts with and occasionally decides to measure a quantum system. We illustrate the utility by finding gate sequences that preserve the quantum information stored in a small collection of qubits against noise. This specific application will help to find hardware-adapted feedback schemes for small quantum modules while demonstrating more generally the promise of neural-network based reinforcement learning in physics.
We analyze the optical resonances of a dielectric sphere whose surface has been slightly deformed in an arbitrary way. Setting up a perturbation series up to second order, we derive both the frequency shifts and modified linewidths. Our theory is applicable, for example, to freely levitated liquid drops or solid spheres, which are deformed by thermal surface vibrations, centrifugal forces or arbitrary surface waves. A dielectric sphere is effectively an open system whose description requires the introduction of non-Hermitian operators characterized by complex eigenvalues and not normalizable eigenfunctions. We avoid these difficulties using the Kapur-Peierls formalism which enables us to extend the popular Rayleigh-Schrödinger perturbation theory to the case of electromagnetic Debye's potentials describing the light fields inside and outside the near-spherical dielectric object. We find analytical formulas, valid within certain limits, for the deformation-induced first- and second-order corrections to the central frequency and bandwidth of a resonance. As an application of our method, we compare our results with preexisting ones, finding full agreement.
The fields of optomechanics and electromechanics have facilitated numerous advances in the areas of precision measurement and sensing, ultimately driving the studies of mechanical systems into the quantum regime. To date, however, the quantization of the mechanical motion and the associated quantum jumps between phonon states remains elusive. For optomechanical systems, the coupling to the environment was shown to make the detection of the mechanical mode occupation difficult, typically requiring the single-photon strong-coupling regime. Here, we propose and analyse an electromechanical setup, which allows us to overcome this limitation and resolve the energy levels of a mechanical oscillator. We found that the heating of the membrane, caused by the interaction with the environment and unwanted couplings, can be suppressed for carefully designed electromechanical systems. The results suggest that phonon number measurement is within reach for modern electromechanical setups.
We demonstrate how heating of an environment can invert the line shape of a driven cavity. We consider a superconducting coplanar cavity coupled to multiple artificial atoms. The measured cavity transmission is characterized by Fano-type resonances with a shape that is continuously tunable by bias current through nearby (magnetic flux) control lines. In particular, the same dispersive shift of the microwave cavity can be observed as a peak or a dip. We find that this Fano-peak inversion is possible due to a tunable interference between a microwave transmission through a background, with reactive and dissipative properties, and through the cavity, affected by bias-current induced heating. The background transmission occurs due to crosstalk with the multiple control lines. We show how such background can be accounted for by a Jaynes- or Tavis-Cummings model with modified boundary conditions between the cavity and transmission-line microwave fields. A dip emerges when cavity transmission is comparable with background transmission and dissipation. We find generally that resonance positions determine system energy levels, whereas resonance shapes give information on system fluctuations and dissipation.
Static synthetic magnetic fields give rise to phenomena including the Lorentz force and the quantum Hall effect even for neutral particles, and they have by now been implemented in a variety of physical systems. Moving towards fully dynamical synthetic gauge fields allows, in addition, for backaction of the particles' motion onto the field. If this results in a time-dependent vector potential, conventional electromagnetism predicts the generation of an electric field. Here, we show how synthetic electric fields for photons arise self-consistently due to the nonlinear dynamics in a driven system. Our analysis is based on optomechanical arrays, where dynamical gauge fields arise naturally from phonon-assisted photon tunneling. We study open, one-dimensional arrays, where synthetic magnetic fields are absent. However, we show that synthetic electric fields can be generated dynamically, which, importantly, suppress photon transport in the array. The generation of these fields depends on the direction of photon propagation, leading to a novel mechanism for a photon diode, inducing nonlinear nonreciprocal transport via dynamical synthetic gauge fields.
According to the world view of macrorealism, the properties of a given system exist prior to and independent of measurement, which is incompatible with quantum mechanics. Leggett and Garg put forward a practical criterion capable of identifying violations of macrorealism, and so far experiments performed on microscopic and mesoscopic systems have always agreed with quantum mechanics. However, a macrorealist can always assign the cause of such violations to the perturbation that measurements effect on such small systems, and hence a definitive test would require using noninvasive measurements, preferably on macroscopic objects, where such measurements seem more plausible. However, the generation of truly macroscopic quantum superposition states capable of violating macrorealism remains a big challenge. In this work we propose a setup that makes use of measurements on the polarization of light, a property that has been extensively manipulated both in classical and quantum contexts, hence establishing the perfect link between the microscopic and macroscopic worlds. In particular, we use Leggett-Garg inequalities and the criterion of no signaling in time to study the macrorealistic character of light polarization for different kinds of measurements, in particular with different degrees of coarse graining. Our proposal is noninvasive for coherent input states by construction. We show for states with well-defined photon number in two orthogonal polarization modes, that there always exists a way of making the measurement sufficiently coarse grained so that a violation of macrorealism becomes arbitrarily small, while sufficiently sharp measurements can always lead to a significant violation.
We present the basic ingredients of continuum optomechanics, i.e. the suitable extension of cavity-optomechanical concepts to the interaction of photons and phonons in an extended waveguide. We introduce a real-space picture and argue which coupling terms may arise in leading order in the spatial derivatives. This picture allows us to discuss quantum noise, dissipation, and the correct boundary conditions at the waveguide entrance. The connections both to optomechanical arrays as well as to the theory of Brillouin scattering in waveguides are highlighted. Among other examples, we analyze the 'strong coupling regime' of continuum optomechanics that may be accessible in future experiments.
Type II optical parametric oscillators are amongst the highest-quality sources of quantum-correlated light. In particular, when pumped above threshold, such devices generate a pair of bright orthogonally-polarized beams with strong continuous-variable entanglement. However, these sources are of limited practical use, because the entangled beams emerge with different frequencies and a diffusing phase difference. It has been proven that the use of an internal wave-plate coupling the modes with orthogonal polarization is capable of locking the frequencies of the emerging beams to half the pump frequency, as well as reducing the phase-difference diffusion, at the expense of reducing the entanglement levels. In this work we characterize theoretically an alternative locking mechanism: the injection of a laser at half the pump frequency. Apart from being less invasive, this method should allow for an easier real-time experimental control. We show that such an injection is capable of generating the desired phase locking between the emerging beams, while still allowing for large levels of entanglement. Moreover, we find an additional region of the parameter space (at relatively large injections) where a mode with well defined polarization is in a highly amplitude-squeezed state.
We show how the snowflake phononic crystal structure, which recently has been realized experimentally, can be turned into a topological insulator for mechanical waves. This idea, based purely on simple geometrical modifications, could be readily implemented on the nanoscale.
We propose a scalable ion trap architecture for universal quantum computation, which is composed of an array of ion traps with one ion confined in each trap. The neighboring traps are designed to be capable of merging into one single trap. The universal two-qubit √SWAP gate is realized by direct collision of two neighboring ions in the merged trap, which induces an effective spin-spin interaction between the two ions. We find that the collision-induced spin-spin interaction decreases with the third power of the two ions' trapping distance. Even with a 200 μm trapping distance between atomic ions in Paul traps, it is still possible to realize a two-qubit gate operation with speed in the 0.1 kHz regime. The speed can be further increased into the 0.1 MHz regime using electrons with a 10 mm trapping distance in Penning traps.
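For reference, the square-root-of-SWAP gate has a simple 4×4 matrix representation; the short check below (illustrative, not taken from the paper) verifies that it is unitary and squares to a full SWAP:

```python
import numpy as np

# Two-qubit SWAP gate in the computational basis |00>, |01>, |10>, |11>.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

# sqrt(SWAP): applying it twice yields a full SWAP; together with
# single-qubit rotations it forms a universal gate set.
SQRT_SWAP = np.array([[1, 0, 0, 0],
                      [0, (1 + 1j) / 2, (1 - 1j) / 2, 0],
                      [0, (1 - 1j) / 2, (1 + 1j) / 2, 0],
                      [0, 0, 0, 1]], dtype=complex)

assert np.allclose(SQRT_SWAP @ SQRT_SWAP, SWAP)                 # squares to SWAP
assert np.allclose(SQRT_SWAP @ SQRT_SWAP.conj().T, np.eye(4))   # unitary
```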
We describe a proposal for a type of optomechanical system based on a drop of liquid helium that is magnetically levitated in vacuum. In the proposed device, the drop would serve three roles: its optical whispering-gallery modes would provide the optical cavity, its surface vibrations would constitute the mechanical element, and evaporation of He atoms from its surface would provide continuous refrigeration. We analyze the feasibility of such a system in light of previous experimental demonstrations of its essential components: magnetic levitation of mm-scale and cm-scale drops of liquid He, evaporative cooling of He droplets in vacuum, and coupling to high-quality optical whispering-gallery modes in a wide range of liquids. We find that the combination of these features could result in a device that approaches the single-photon strong-coupling regime, due to the high optical quality factors attainable at low temperatures. Moreover, the system offers a unique opportunity to use optical techniques to study the motion of a superfluid that is freely levitating in vacuum (in the case of He-4). Alternatively, for a normal fluid drop of He-3, we propose to exploit the coupling between the drop's rotations and vibrations to perform quantum nondemolition measurements of angular momentum.
Topology has appeared in different physical contexts. The most prominent application is topologically protected edge transport in condensed matter physics. The Chern number, the topological invariant of gapped Bloch Hamiltonians, is an important quantity in this field. Another example of topology, in polarization physics, is given by polarization singularities, called L lines and C points. By establishing a connection between these two theories, we develop a novel technique to visualize and potentially measure the Chern number: it can be expressed either as the winding of the polarization azimuth along L lines in reciprocal space, or in terms of the handedness and the index of the C points. For mechanical systems, this is directly connected to the visible motion patterns.
Phase oscillator lattices subject to noise are one of the most fundamental systems in nonequilibrium physics. We have discovered a dynamical transition which has a significant impact on the synchronization dynamics in such lattices, as it leads to an explosive increase of the phase diffusion rate by orders of magnitude. Our analysis is based on the widely applicable Kuramoto-Sakaguchi model, with local couplings between oscillators. For one-dimensional lattices, we observe the universal evolution of the phase spread that is suggested by a connection to the theory of surface growth, as described by the Kardar-Parisi-Zhang (KPZ) model. Moreover, we are able to explain the dynamical transition both in one and two dimensions by connecting it to an apparent finite-time singularity in a related KPZ lattice model. Our findings have direct consequences for the frequency stability of coupled oscillator lattices.
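As a rough illustration of the model in question, a minimal Euler-Maruyama simulation of the noisy Kuramoto-Sakaguchi lattice with nearest-neighbour coupling can track the phase spread over time; all parameter values here are illustrative, not those of the paper:

```python
import numpy as np

# Euler-Maruyama sketch of the noisy Kuramoto-Sakaguchi model on a 1D ring:
# d(theta_i) = K * sum_{j in nn(i)} sin(theta_j - theta_i + alpha) dt + sigma dW.
# Identical natural frequencies drop out in a co-rotating frame.
def simulate(n=64, k=1.0, alpha=0.3, sigma=0.1, dt=0.01, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)
    spread = []
    for _ in range(steps):
        left, right = np.roll(theta, 1), np.roll(theta, -1)
        drift = k * (np.sin(left - theta + alpha) + np.sin(right - theta + alpha))
        theta = theta + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
        spread.append(np.var(theta))  # phase spread, cf. surface growth picture
    return np.array(spread)

spread = simulate()
```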
Diego Guzman-Silva, Robert Bruening, Felix Zimmermann, Christian Vetter, Markus Graefe, Matthias Heinrich, Stefan Nolte, Michael Duparre, Andrea Aiello, Marco Ornigotti, et al.
We report on an optical implementation of the teleportation protocol in the classical realm, solely based on entanglement between spatial and modal degrees of freedom of a purely classical light field.
There is a growing effort in creating chiral transport of sound waves. However, most approaches so far have been confined to the macroscopic scale. Here, we propose an approach suitable to the nanoscale that is based on pseudomagnetic fields. These pseudomagnetic fields for sound waves are the analogue of what electrons experience in strained graphene. In our proposal, they are created by simple geometrical modifications of an existing and experimentally proven phononic crystal design, the snowflake crystal. This platform is robust, scalable, and well-suited for a variety of excitation and readout mechanisms, among them optomechanical approaches.
Recently, several studies have investigated synchronization in quantum-mechanical limit-cycle oscillators. However, the quantum nature of these systems remained partially hidden, since the dynamics of the oscillator's phase was overdamped and therefore incoherent. We show that there exist regimes of underdamped and even quantum-coherent phase motion, opening up new possibilities to study quantum synchronization dynamics. To this end, we investigate the Van der Pol oscillator (a paradigm for a self-oscillating system) synchronized to an external drive. We derive an effective quantum model which fully describes the regime of underdamped phase motion and additionally allows us to identify the quality of quantum coherence. Finally, we identify quantum limit cycles of the phase itself.
Synchronously pumped optical parametric oscillators (SPOPOs) are optical cavities driven by mode-locked lasers, and containing a nonlinear crystal capable of down-converting a frequency comb to lower frequencies. SPOPOs have received a lot of attention lately because their intrinsic multimode nature makes them compact sources of quantum correlated light with promising applications in modern quantum information technologies. In this work we show that SPOPOs are also capable of accessing the challenging and interesting regime where spontaneous symmetry breaking confers strong nonclassical properties to the emitted light, which has eluded experimental observation so far. Apart from opening the possibility of studying experimentally this elusive regime of dissipative phase transitions, our predictions will have a practical impact, since we show that spontaneous symmetry breaking provides a specific spatiotemporal mode with large quadrature squeezing for any value of the system parameters, turning SPOPOs into robust sources of highly nonclassical light above threshold.
After a quench in a quantum many-body system, expectation values tend to relax towards long-time averages. However, temporal fluctuations remain in the long-time limit, and it is crucial to study the suppression of these fluctuations with increasing system size. The particularly important case of nonintegrable models has been addressed so far only by numerics and conjectures based on analytical bounds. In this work, we are able to derive analytical predictions for the temporal fluctuations in a nonintegrable model (the transverse Ising chain with extra terms). Our results are based on identifying a dynamical regime of "many-particle dephasing," where quasiparticles do not yet relax but fluctuations are nonetheless suppressed exponentially by weak integrability breaking.
Optomechanical (OMA) arrays are a promising future platform for studies of transport, many-body dynamics, quantum control and topological effects in systems of coupled photon and phonon modes. We introduce disordered OMA arrays, focusing on features of Anderson localization of hybrid photon-phonon excitations. It turns out that these represent a unique disordered system, where basic parameters can be easily controlled by varying the frequency and the amplitude of an external laser field. We show that the two-species setting leads to a non-trivial frequency dependence of the localization length for intermediate laser intensities. This could serve as a convincing evidence of localization in a non-equilibrium dissipative situation.
The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.
Optomechanical systems driven by an effective blue-detuned laser can exhibit self-sustained oscillations of the mechanical oscillator. These self-oscillations are a prerequisite for the observation of synchronization. Here, we study the synchronization of the mechanical oscillations to an external reference drive. We study two cases of reference drives: (1) an additional laser applied to the optical cavity; (2) a mechanical drive applied directly to the mechanical oscillator. Starting from a master equation description, we derive a microscopic Adler equation for both cases, valid in the classical regime in which the quantum shot noise of the mechanical self-oscillator does not play a role. Furthermore, we numerically show that, in both cases, synchronization arises also in the quantum regime. The optomechanical system is therefore a good candidate for the study of quantum synchronization.
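The classical content of an Adler equation can be verified in a few lines: for dphi/dt = delta - eps*sin(phi), with delta the detuning from the reference drive and eps the effective coupling (generic symbols here, not the paper's microscopically derived coefficients), the phase difference locks to the fixed point phi* = asin(delta/eps) whenever |delta| < eps.

```python
import math

# Minimal sketch of Adler-equation phase locking,
#   dphi/dt = delta - eps*sin(phi),
# integrated with forward Euler. Parameters are illustrative.

def integrate_adler(delta, eps, phi0=3.0, dt=1e-3, n_steps=50000):
    phi = phi0
    for _ in range(n_steps):
        phi += dt * (delta - eps * math.sin(phi))
    return phi

delta, eps = 0.2, 1.0              # inside the locking range |delta| < eps
phi_final = integrate_adler(delta, eps)
phi_lock = math.asin(delta / eps)  # stable fixed point of the Adler equation
print(round(phi_final % (2 * math.pi), 3), round(phi_lock, 3))
```

Outside the locking range |delta| > eps the same equation yields phase slips at a finite average rate instead of a fixed point, which is the classical signature of loss of synchronization.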
Synthetic magnetism has been used to control charge neutral excitations for applications ranging from classical beam steering to quantum simulation. In optomechanics, radiation-pressure-induced parametric coupling between optical (photon) and mechanical (phonon) excitations may be used to break time-reversal symmetry, providing the prerequisite for synthetic magnetism. Here we design and fabricate a silicon optomechanical circuit with both optical and mechanical connectivity between two optomechanical cavities. Driving the two cavities with phase-correlated laser light results in a synthetic magnetic flux, which, in combination with dissipative coupling to the mechanical bath, leads to non-reciprocal transport of photons with 35 dB of isolation. Additionally, optical pumping with blue-detuned light manifests as a particle non-conserving interaction between photons and phonons, resulting in directional optical amplification of 12 dB in the isolator through-direction. These results suggest the possibility of using optomechanical circuits to create a more general class of non-reciprocal optical devices, and further, to enable new topological phases for both light and sound on a microchip.
We study how quantum and thermal noise affects synchronization of two optomechanical limit-cycle oscillators. Classically, in the absence of noise, optomechanical systems tend to synchronize either in-phase or anti-phase. Taking into account the fundamental quantum noise, we find a regime where fluctuations drive transitions between these classical synchronization states. We investigate how this 'mixed' synchronization regime emerges from the noiseless system by studying the classical-to-quantum crossover and we show how the time scales of the transitions vary with the effective noise strength. In addition, we compare the effects of thermal noise to the effects of quantum noise.
There is enormous interest in engineering topological photonic systems. Despite intense activity, most works on topological photonic states (and more generally bosonic states) amount in the end to replicating a well-known fermionic single-particle Hamiltonian. Here we show how the squeezing of light can lead to the formation of qualitatively new kinds of topological states. Such states are characterized by non-trivial Chern numbers, and exhibit protected edge modes, which give rise to chiral elastic and inelastic photon transport. These topological bosonic states are not equivalent to their fermionic (topological superconductor) counterparts and, in addition, cannot be mapped by a local transformation onto topological states found in particle-conserving models. They thus represent a new type of topological system. We study this physics in detail in the case of a kagome lattice model, and discuss possible realizations using nonlinear photonic crystals or superconducting circuits.
Artificial gauge fields for neutral particles such as photons have recently attracted a lot of attention in various fields ranging from photonic crystals to ultracold atoms in optical lattices to optomechanical arrays. Here we point out that, among all implementations of gauge fields, the optomechanical setting allows for the most natural extension where the gauge field becomes dynamical. The mechanical oscillation phases determine the effective artificial magnetic field for the photons, and once these phases are allowed to evolve, they respond to the flow of photons in the structure. We discuss a simple three-site model where we identify four different regimes of the gauge-field dynamics. Furthermore, we extend the discussion to a two-dimensional lattice. Our proposed scheme could for instance be implemented using optomechanical crystals.
We use a reservoir engineering technique based on two-tone driving to generate and stabilize a quantum squeezed state of a micron-scale mechanical oscillator in a microwave optomechanical system. Using an independent backaction-evading measurement to directly quantify the squeezing, we observe 4.7±0.9 dB of squeezing below the zero-point level, surpassing the 3 dB limit of standard parametric squeezing techniques. Our measurements also reveal evidence for an additional mechanical parametric effect. The interplay between this effect and the optomechanical interaction enhances the amount of squeezing obtained in the experiment.
We derive a general expression that quantifies the total entanglement production rate in continuous variable systems, where a source emits two entangled Gaussian beams with arbitrary correlators. This expression is especially useful for situations where the source emits an arbitrary frequency spectrum, e.g. when cavities are involved. To exemplify its meaning and potential, we apply it to a four-mode optomechanical setup that enables the simultaneous up- and down-conversion of photons from a drive laser into entangled photon pairs. This setup is efficient in that both the drive and the optomechanical up- and down-conversion can be fully resonant.
It is now well established that photonic systems can exhibit topological energy bands. Similar to their electronic counterparts, this leads to the formation of chiral edge modes which can be used to transmit light in a manner that is protected against backscattering. While it is understood how classical signals can propagate under these conditions, it is an important outstanding question how the quantum vacuum fluctuations of the electromagnetic field get modified in the presence of a topological band structure. We address this challenge by exploring a setting where a nonzero topological invariant guarantees the presence of a parametrically unstable chiral edge mode in a system with boundaries, even though there are no bulk-mode instabilities. We show that one can exploit this to realize a topologically protected, quantum-limited traveling wave parametric amplifier. The device is naturally protected against both internal losses and backscattering; the latter feature is in stark contrast to standard traveling wave amplifiers. This adds a new example to the list of potential quantum devices that profit from topological transport.
Topological states of matter are particularly robust, since they exploit global features of a material's band structure. Topological states have already been observed for electrons, atoms, and photons. It is an outstanding challenge to create a Chern insulator of sound waves in the solid state. In this work, we propose an implementation based on cavity optomechanics in a photonic crystal. The topological properties of the sound waves can be wholly tuned in situ by adjusting the amplitude and frequency of a driving laser that controls the optomechanical interaction between light and sound. The resulting chiral, topologically protected phonon transport can be probed completely optically. Moreover, we identify a regime of strong mixing between photon and phonon excitations, which gives rise to a large set of different topological phases and offers an example of a Chern insulator produced from the interaction between two physically distinct particle species, photons and phonons.
Recent progress in optomechanical systems may soon allow the realization of optomechanical arrays, i.e. periodic arrangements of interacting optical and vibrational modes. We show that photons and phonons on a honeycomb lattice will produce an optically tunable Dirac-type band structure. Transport in such a system can exhibit transmission through an optically created barrier, similar to Klein tunneling, but with interconversion between light and sound. In addition, edge states at the sample boundaries are dispersive and enable controlled propagation of photon-phonon polaritons.
Arrays of coupled limit-cycle oscillators represent a paradigmatic example for studying synchronization and pattern formation. We find that the full dynamical equations for the phase dynamics of a limit-cycle oscillator array go beyond previously studied Kuramoto-type equations. We analyze the evolution of the phase field in a two-dimensional array and obtain a "phase diagram" for the resulting stationary and nonstationary patterns. Our results are of direct relevance in the context of currently emerging experiments on nano-and optomechanical oscillator arrays, as well as for any array of coupled limit-cycle oscillators that have undergone a Hopf bifurcation. The possible observation in optomechanical arrays is discussed briefly.
Extensive efforts have been expended in developing hybrid quantum systems to overcome the short coherence time of superconducting circuits by introducing the naturally long-lived spin degree of freedom. Among all the possible materials, single-crystal yttrium iron garnet has shown up recently as a promising candidate for hybrid systems, and various highly coherent interactions, including strong and even ultrastrong coupling, have been demonstrated. One distinct advantage in these systems is that spins form well-defined magnon modes, which allows flexible and precise tuning. Here we demonstrate that by dissipation engineering, a non-Markovian interaction dynamics between the magnon and the microwave cavity photon can be achieved. Such a process enables us to build a magnon gradient memory to store information in the magnon dark modes, which decouple from the microwave cavity and thus preserve a long lifetime. Our findings provide a promising approach for developing long-lifetime, multimode quantum memories.
We present the design, fabrication, and characterization of a planar silicon photonic crystal cavity in which large position-squared optomechanical coupling is realized. The device consists of a double-slotted photonic crystal structure in which motion of a central beam mode couples to two high-Q optical modes localized around each slot. Electrostatic tuning of the structure is used to controllably hybridize the optical modes into supermodes that couple in a quadratic fashion to the motion of the beam. From independent measurements of the anticrossing of the optical modes and of the dynamic optical spring effect, a position-squared vacuum coupling rate as large as g̃′/2π = 245 Hz is inferred between the optical supermodes and the fundamental in-plane mechanical resonance of the structure at ω_m/2π = 8.7 MHz, which in displacement units corresponds to a coupling coefficient of g′/2π = 1 THz/nm². For larger supermode splittings, selective excitation of the individual optical supermodes is used to demonstrate optical trapping of the mechanical resonator with measured g̃′/2π = 46 Hz.
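As a quick consistency check on the two quoted coupling rates, one can assume the standard relation g̃′ = g′ · x_zpf² between the vacuum coupling rate and the displacement-space coupling coefficient (our assumption, not a statement from the abstract); the numbers then imply a zero-point motion of order 10 fm, typical for a MHz-scale nanobeam.

```python
import math

# Back-of-the-envelope check (assumption: g~' = g' * x_zpf^2):
# the two quoted couplings fix the zero-point motion x_zpf.
g_tilde_prime = 245.0      # Hz       (g~'/2pi, vacuum coupling rate)
g_prime = 1.0e12           # Hz/nm^2  (g'/2pi = 1 THz/nm^2)

x_zpf_nm = math.sqrt(g_tilde_prime / g_prime)   # in nm
x_zpf_fm = x_zpf_nm * 1e6                       # 1 nm = 1e6 fm
print(round(x_zpf_fm, 1))  # ~15.7 fm
```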
Utilizing a silicon nanobeam optomechanical crystal, we investigate the attractor diagram arising from the radiation pressure interaction between a localized optical cavity at λ_c = 1542 nm and a mechanical resonance at ω_m/2π = 3.72 GHz. At a temperature of T_b ≈ 10 K, highly nonlinear driving of mechanical motion is observed via continuous wave optical pumping. Introduction of a time-dependent (modulated) optical pump is used to steer the system towards an otherwise inaccessible dynamically stable attractor in which mechanical self-oscillation occurs for an optical pump red detuned from the cavity resonance. An analytical model incorporating thermo-optic effects due to optical absorption heating is developed and found to accurately predict the measured device behavior.
We discuss how large amounts of steady-state quantum squeezing (beyond 3 dB) of a mechanical resonator can be obtained by driving an optomechanical cavity with two control lasers with differing amplitudes. The scheme does not rely on any explicit measurement or feedback, nor does it simply involve a modulation of an optical spring constant. Instead, it uses a dissipative mechanism with the driven cavity acting as an engineered reservoir. It can equivalently be viewed as a coherent feedback process, obtained by minimally perturbing the quantum nondemolition measurement of a single mechanical quadrature. This shows that in general the concepts of coherent feedback schemes and reservoir engineering are closely related. We analyze how to optimize the scheme, how the squeezing scales with system parameters, and how it may be directly detected from the cavity output. Our scheme is extremely general, and could also be implemented with, e.g., superconducting circuits.
We study several dynamical properties of a recently proposed implementation of the quantum transverse-field Ising chain in the framework of circuit quantum electrodynamics (QED). Particular emphasis is placed on the effects of disorder on the nonequilibrium behavior of the system. We show that small amounts of fabrication-induced disorder in the system parameters do not jeopardize the observation of previously predicted phenomena. Based on a numerical extraction of the mean free path of a wave packet in the system, we also provide a simple quantitative estimate for certain disorder effects on the nonequilibrium dynamics of the circuit QED quantum simulator. We discuss the transition from weak to strong disorder, characterized by the onset of Anderson localization of the system's wave functions, and the qualitatively different dynamics it leads to.
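The onset of Anderson localization invoked above can be illustrated with the simplest textbook setting, a 1D tight-binding chain with on-site disorder (a generic sketch, not the circuit-QED model of the paper). The transfer-matrix recursion yields a Lyapunov exponent gamma = 1/xi that grows with the disorder strength W, i.e. the localization length shrinks; all parameters below are illustrative.

```python
import math
import random

# Minimal sketch: transfer-matrix estimate of the Anderson localization
# length in a 1D tight-binding chain with on-site disorder of strength W,
#   psi_{n+1} = (E - eps_n) psi_n - psi_{n-1},  eps_n uniform in [-W/2, W/2].
# The Lyapunov exponent gamma = 1/xi grows with W (stronger localization).

def lyapunov(w, energy=0.0, n_sites=100000, seed=2):
    rng = random.Random(seed)
    psi_prev, psi = 1.0, 1.0
    log_norm = 0.0
    for _ in range(n_sites):
        eps = rng.uniform(-w / 2, w / 2)
        psi_prev, psi = psi, (energy - eps) * psi - psi_prev
        norm = abs(psi) + abs(psi_prev)
        if norm > 1e8:                 # rescale to avoid float overflow
            psi /= norm
            psi_prev /= norm
            log_norm += math.log(norm)
    return (log_norm + math.log(abs(psi) + abs(psi_prev))) / n_sites

gamma_weak = lyapunov(w=1.0)
gamma_strong = lyapunov(w=4.0)
print(gamma_weak < gamma_strong)  # True: localization length shrinks with W
```

For weak disorder the band-center localization length follows the familiar xi ~ 100/W² scaling (in units of the lattice constant), which is the kind of quantity that controls the mean free path estimates discussed in the abstract.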
Optomechanical systems couple light to the motion of nanomechanical objects. Intriguing new effects are observed in recent experiments that involve the dynamics of more than one optical mode. There, mechanical motion can stimulate strongly driven multi-mode photon dynamics that acts back on the mechanics via radiation forces. We show that even for two optical modes Landau-Zener-Stueckelberg oscillations of the light field drastically change the nonlinear attractor diagram of the resulting phonon lasing oscillations. Our findings illustrate the generic effects of Landau-Zener physics on back-action induced self-oscillations.
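The underlying Landau-Zener physics can be checked in its textbook form: for a two-level sweep H(t) = alpha*t*sigma_z + g*sigma_x (generic symbols, hbar = 1; not the paper's two-mode optomechanical Hamiltonian), the probability of remaining in the initial diabatic state is exp(-pi g²/alpha). A direct RK4 integration of the Schrödinger equation reproduces this:

```python
import math

# Minimal sketch: Landau-Zener sweep H(t) = alpha*t*sigma_z + g*sigma_x
# (hbar = 1). Integrating i dpsi/dt = H psi from large negative to large
# positive t, the survival probability in the initial diabatic state
# approaches the Landau-Zener formula exp(-pi g^2 / alpha).

def deriv(t, psi, alpha, g):
    a, b = psi
    return (-1j * (alpha * t * a + g * b),
            -1j * (g * a - alpha * t * b))

def lz_survival(alpha=1.0, g=0.5, t0=-20.0, t1=20.0, dt=1e-3):
    psi = (1.0 + 0j, 0.0 + 0j)   # start in the upper diabatic state
    t = t0
    for _ in range(int((t1 - t0) / dt)):
        k1 = deriv(t, psi, alpha, g)
        k2 = deriv(t + dt / 2, (psi[0] + dt / 2 * k1[0], psi[1] + dt / 2 * k1[1]), alpha, g)
        k3 = deriv(t + dt / 2, (psi[0] + dt / 2 * k2[0], psi[1] + dt / 2 * k2[1]), alpha, g)
        k4 = deriv(t + dt, (psi[0] + dt * k3[0], psi[1] + dt * k3[1]), alpha, g)
        psi = (psi[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
               psi[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        t += dt
    return abs(psi[0]) ** 2

p_num = lz_survival()
p_lz = math.exp(-math.pi * 0.5 ** 2 / 1.0)
print(round(p_num, 2), round(p_lz, 2))
```

In the optomechanical setting the sweep is provided by the mechanical motion itself, which is what folds this two-level physics back into the attractor diagram of the self-oscillations.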
We study the nonlinear driven dissipative quantum dynamics of an array of optomechanical systems. At each site of such an array, a localized mechanical mode interacts with a laser-driven cavity mode via radiation pressure, and both photons and phonons can hop between neighboring sites. The competition between coherent interaction and dissipation gives rise to a rich phase diagram characterizing the optical and mechanical many-body states. For weak intercellular coupling, the mechanical motion at different sites is incoherent due to the influence of quantum noise. When increasing the coupling strength, however, we observe a transition towards a regime of phase-coherent mechanical oscillations. We employ a Gutzwiller ansatz as well as semiclassical Langevin equations on finite lattices, and we propose a realistic experimental implementation in optomechanical crystals.
We consider the nonequilibrium dynamics of an interacting spin-1/2 fermion gas in a one-dimensional optical lattice after switching off the confining potential. In particular, we study the creation and the time evolution of spatially separated, spin-entangled fermionic pairs. The time-dependent density-matrix renormalization group is used to simulate the time evolution and evaluate the two-site spin correlation functions, from which the concurrence is calculated. We find that the typical distance between entangled fermions depends crucially on the onsite interaction strength, and that a time-dependent modulation of the tunnelling amplitude can enhance the production of spin entanglement. Moreover, we discuss the prospects of experimentally observing these phenomena using spin-dependent single-site detection.
Synchronization in oscillatory systems is a frequent natural phenomenon and is becoming an important concept in modern physics. Nanomechanical resonators are ideal systems for studying synchronization due to their controllable oscillation properties and engineerable nonlinearities. Here we demonstrate synchronization of two nanomechanical oscillators via a photonic resonator, enabling optomechanical synchronization between mechanically isolated nanomechanical resonators. Optical backaction gives rise to both reactive and dissipative coupling of the mechanical resonators, leading to coherent oscillation and mutual locking of resonators with dynamics beyond the widely accepted phase oscillator (Kuramoto) model. In addition to the phase difference between the oscillators, also their amplitudes are coupled, resulting in the emergence of sidebands around the synchronized carrier signal.
We study the optical cooling of the cavity mirror in an active laser cavity. We find that the optical damping rate is vanishingly small for an incoherently pumped laser above threshold. In the presence of an additional external coherent drive however, the optical damping rate can be enhanced substantially with respect to that of a passive cavity. We show that the strength of the incoherent pump provides the means to tune the optical damping rate and the steady state phonon number. The system is found to undergo a transition from the weak optomechanical coupling regime to the strong optomechanical coupling regime as the strength of the incoherent pump is varied.
The use of levitated nanospheres represents a new paradigm for the optomechanical cooling of a small mechanical oscillator, with the prospect of realizing quantum oscillators with unprecedentedly high quality factors. We investigate the dynamics of this system, especially in the so-called self-trapping regime, where one or more optical fields simultaneously trap and cool the mechanical oscillator. The determining characteristic of this regime is that both the mechanical frequency ω_M and single-photon optomechanical coupling strength parameters g are a function of the optical field intensities, in contrast to usual set-ups where ω_M and g are constant for the given system. We also measure the characteristic transverse and axial trapping frequencies of different sized silica nanospheres in a simple optical standing wave potential, for spheres of radii r = 20-500 nm, illustrating a protocol for loading single nanospheres into a standing wave optical trap that would be formed by an optical cavity. We use these data to confirm the dependence of the effective optomechanical coupling strength on sphere radius for levitated nanospheres in an optical cavity and discuss the prospects for reaching regimes of strong light-matter coupling. Theoretical semiclassical and quantum displacement noise spectra show that for larger nanospheres with r ≳ 100 nm a range of interesting and novel dynamical regimes can be accessed. These include simultaneous hybridization of the two optical modes with the mechanical modes and parameter regimes where the system is bistable. We show that here, in contrast to typical single-optical mode optomechanical systems, bistabilities are independent of intracavity intensity and can occur for very weak laser driving amplitudes.
Optomechanical systems have been shown both theoretically and experimentally to exhibit an analogon to atomic electromagnetically induced transparency, with sharp transmission features that are controlled by a second laser beam. Here we investigate these effects in the regime where the fundamental nonlinear nature of the optomechanical interaction becomes important. We demonstrate that pulsed transistorlike switching of transmission still works even in this regime. We also show that optomechanically induced transparency at the second mechanical sideband could be a sensitive tool to see first indications of the nonlinear quantum nature of the optomechanical interaction even for single-photon coupling strengths significantly smaller than the cavity linewidth.
In the past few years, coupling strengths between light and mechanical motion in optomechanical setups have improved by orders of magnitude. Here we show that, in the standard setup under continuous laser illumination, the steady state of the mechanical oscillator can develop a nonclassical, strongly negative Wigner density if the optomechanical coupling is comparable to or larger than the optical decay rate and the mechanical frequency. Because of its robustness, such a Wigner density can be mapped using optical homodyne tomography. This feature is observed near the onset of the instability towards self-induced oscillations. We show that there are also distinct signatures in the photon-photon correlation function g^(2)(t) in that regime, including oscillations decaying on a time scale not only much longer than the optical cavity decay time but even longer than the mechanical decay time.
We investigate the onset of "eigenstate thermalization" and the crossover to ergodicity in a system of one-dimensional fermions with increasing interaction. We analyze the fluctuations in the expectation values of most relevant few-body operators with respect to eigenstates. It turns out that these are intimately related to the inverse participation ratio of eigenstates displayed in the operator eigenbasis. Based on this observation, we find good evidence that eigenstate thermalization should set in even for vanishingly small perturbations in the thermodynamic limit.
Recent experiments have demonstrated single-site resolved observation of cold atoms in optical lattices. Thus, in the future it may be possible to take repeated snapshots of an interacting quantum many-body system during the course of its evolution. Here we address the impact of the resulting quantum (anti-)Zeno physics on the many-body dynamics. We use the time-dependent density-matrix renormalization group to obtain the time evolution of the full wave function, which is then periodically projected in order to simulate realizations of stroboscopic measurements. For the example of a one-dimensional lattice of spinless fermions with nearest-neighbor interactions, we find regimes for which many-particle configurations are stabilized or destabilized, depending on the interaction strength and the time between observations.
In cavity optomechanics, nanomechanical motion couples to a localized optical mode. The regime of single-photon strong coupling is reached when the optical shift induced by a single phonon becomes comparable to the cavity linewidth. We consider a setup in this regime comprising two optical modes and one mechanical mode. For mechanical frequencies nearly resonant to the optical level splitting, we find the photon-phonon and the photon-photon interactions to be significantly enhanced. In addition to dispersive phonon detection in a novel regime, this offers the prospect of optomechanical photon measurement. We study these quantum nondemolition detection processes using both analytical and numerical approaches.
We propose and analyze a nanomechanical architecture where light is used to perform linear quantum operations on a set of many vibrational modes. Suitable amplitude modulation of a single laser beam is shown to generate squeezing, entanglement and state transfer between modes that are selected according to their mechanical oscillation frequency. Current optomechanical devices based on photonic crystals, as well as other systems with sufficient control over multiple mechanical modes, may provide a platform for realizing this scheme.
We experimentally demonstrate the energy-reversed counterpart to Brillouin lasers, resulting in the cooling of Brownian surface-acoustic-wave whispering-gallery resonances by light in a silica microsphere resonator.
Optomechanical cooling of levitated dielectric particles represents a promising new approach in the quest to cool small mechanical resonators toward their quantum ground state. We investigate two-mode cooling of levitated nanospheres in a self-trapping regime. We identify a structure of overlapping, multiple cooling resonances and strong cooling even when one mode is blue-detuned. We show that the best regimes occur when both optical fields cooperatively cool and trap the nanosphere, where cooling rates are over an order of magnitude faster compared to corresponding single-resonance cooling rates.
Although bolometric- and ponderomotive-induced deflection of device boundaries are widely used for laser cooling, the electrostrictive Brillouin scattering of light from sound was considered an acousto-optical amplification-only process(1-7). It was suggested that cooling could be possible in multi-resonance Brillouin systems(5-8) when phonons experience lower damping than light(8). However, this regime was not accessible in electrostrictive Brillouin systems(1-3,5,6) as backscattering enforces high acoustical frequencies associated with high mechanical damping(1). Recently, forward Brillouin scattering(3) in microcavities(7) has allowed access to low-frequency acoustical modes where mechanical dissipation is lower than optical dissipation, in accordance with the requirements for cooling(8). Here we experimentally demonstrate cooling via such a forward Brillouin process in a microresonator. We show two regimes of operation for the electrostrictive Brillouin process: acoustical amplification as is traditional and an electrostrictive Brillouin cooling regime. Cooling is mediated by resonant light in one pumped optical mode, and spontaneously scattered resonant light in one anti-Stokes optical mode, that beat and electrostrictively attenuate the Brownian motion of the mechanical mode.
We investigate the relative phase between two weakly interacting 1D condensates of bosonic atoms after suddenly switching on the tunnel coupling. The following phase dynamics is governed by the quantum sine-Gordon equation. In the semiclassical limit of weak interactions, we observe the parametric amplification of quantum fluctuations leading to the formation of breathers with a finite lifetime. The typical lifetime and density of these "quasibreathers" are derived employing exact solutions of the classical sine-Gordon equation. Both depend on the initial relative phase between the condensates, which is considered as a tunable parameter.
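For reference, the exact classical breather solutions underlying the "quasibreather" analysis have the standard closed form below (generic dimensionless units and the textbook normalization of the sine-Gordon equation, not necessarily the paper's parametrization; the frequency parameter satisfies 0 < ω < 1):

```latex
% Exact breather solution of the classical sine-Gordon equation
% \partial_t^2\phi - \partial_x^2\phi + \sin\phi = 0 (dimensionless units).
\begin{equation}
  \phi_{\mathrm{br}}(x,t)
  = 4 \arctan\!\left[
      \frac{\sqrt{1-\omega^{2}}}{\omega}\,
      \frac{\sin(\omega t)}{\cosh\!\left(\sqrt{1-\omega^{2}}\,x\right)}
    \right],
  \qquad 0 < \omega < 1 .
\end{equation}
```

The breather is localized over a length 1/sqrt(1-ω²) and oscillates at frequency ω; in the dissipationless classical theory it lives forever, which is why the finite lifetime of the quasibreathers formed from amplified quantum fluctuations is the key observable.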
We analyze how to exploit Brillouin scattering of light from sound for the purpose of cooling optomechanical devices and present a quantum-mechanical theory for Brillouin cooling. Our analysis shows that significant cooling ratios can be obtained with standard experimental parameters. A further improvement of cooling efficiency is possible by increasing the dissipation of the optical anti-Stokes resonance.
We investigate the equilibrium behavior of a superconducting circuit QED system containing a large number of artificial atoms. It is shown that the currently accepted standard description of circuit QED via an effective model fails in an important aspect: it predicts the possibility of a superradiant phase transition, even though a full microscopic treatment reveals that a no-go theorem for such phase transitions known from cavity QED applies to circuit QED systems as well. We generalize the no-go theorem to the case of (artificial) atoms with many energy levels and thus make it more applicable for realistic cavity or circuit QED systems.
Recent experimental developments have brought into focus optomechanical systems containing multiple optical and mechanical modes interacting with each other. Examples include a setup with a movable membrane between two end-mirrors and "optomechanical crystal" devices that support localized optical and mechanical modes in a photonic crystal type structure. We discuss how mechanical driving of such structures results in coherent photon transfer between optical modes, and how the physics of Landau-Zener-Stueckelberg oscillations arises in this context. Another area where multiple modes are involved are hybrid systems. There, we review the recent proposal of a single atom whose mechanical motion is coupled to a membrane via the light field. This is a special case of the general principle of cavity-mediated mechanical coupling. Such a setup would allow the well-developed tools of atomic physics to be employed to access the quantum state of the 'macroscopic' mechanical mode of the membrane.
Optomechanical systems couple light stored inside an optical cavity to the motion of a mechanical mode. Recent experiments have demonstrated setups, such as photonic crystal structures, that in principle allow one to confine several optical and vibrational modes on a single chip. Here we start to investigate the collective nonlinear dynamics in arrays of coupled optomechanical cells. We show that such "optomechanical arrays" can display synchronization, and that they can be described by an effective Kuramoto-type model.
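The effective Kuramoto-type description can be illustrated directly: N phase oscillators with spread natural frequencies synchronize, in the sense that the order parameter r approaches 1, once the coupling K exceeds a critical value. The sketch below uses the mean-field form of the Kuramoto model with illustrative parameters (not coefficients derived from the optomechanical equations of motion).

```python
import cmath
import math
import random

# Minimal Kuramoto-model sketch: N phase oscillators,
#   dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i),
# Euler-integrated via the mean field r*exp(i*psi).
# Strong coupling drives the order parameter r -> 1.

def order_parameter(thetas):
    return abs(sum(cmath.exp(1j * th) for th in thetas)) / len(thetas)

def simulate(k, n=50, dt=0.01, n_steps=5000, seed=1):
    rng = random.Random(seed)
    omegas = [rng.gauss(0.0, 0.5) for _ in range(n)]
    thetas = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(n_steps):
        mean_field = sum(cmath.exp(1j * th) for th in thetas) / n
        r, psi = abs(mean_field), cmath.phase(mean_field)
        thetas = [th + dt * (w + k * r * math.sin(psi - th))
                  for th, w in zip(thetas, omegas)]
    return order_parameter(thetas)

r_weak = simulate(k=0.1)    # below threshold: incoherent
r_strong = simulate(k=4.0)  # well above threshold: synchronized
print(round(r_weak, 2), round(r_strong, 2))
```

The transition of r from small finite-size fluctuations to near unity is the all-to-all analogue of the synchronization transition in the nearest-neighbor optomechanical array.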
Laser light has been used to cool a nanomechanical resonator to its lowest energy state. The result opens the door to testing the principles of quantum mechanics and to applications in quantum information processing.
We consider dephasing by interactions in a one-dimensional chiral fermion system (e.g., a quantum Hall edge state). For finite-range interactions, we calculate the spatial decay of the Green's function at fixed energy, which sets the contrast in a Mach-Zehnder interferometer. Using a physically transparent semiclassical ansatz, we find a power-law decay of the coherence at high energies and zero temperature (T=0), with a universal asymptotic exponent of 1, independent of the interaction strength. We obtain the dephasing rate at T > 0 and the fluctuation spectrum acting on an electron.
We study dephasing by electron interactions in a small disordered quasi-one-dimensional (1D) ring weakly coupled to leads. We use an influence functional for quantum Nyquist noise to describe the crossover for the dephasing time τ_φ(T) from diffusive or ergodic 1D (τ_φ⁻¹ ∝ T^(2/3), T) to zero-dimensional (0D) behavior (τ_φ⁻¹ ∝ T²) as T drops below the Thouless energy. The crossover to 0D, predicted earlier for two-dimensional and three-dimensional systems, has so far eluded experimental observation. The ring geometry holds promise of meeting this long-standing challenge, since the crossover manifests itself not only in the smooth part of the magnetoconductivity but also in the amplitude of Altshuler-Aronov-Spivak oscillations. This allows signatures of dephasing in the ring to be cleanly extracted by filtering out those of the leads.
Entangled multiqubit states may be generated through a dispersive collective quantum nondemolition measurement of superconducting qubits coupled to a microwave transmission line resonator. Using the quantum trajectory approach, we analyze the stochastic measurement traces that would be observed in experiments. We illustrate the synthesis of three-qubit W and Greenberger-Horne-Zeilinger states, and we analyze how the fidelity and the entanglement evolve in time during the measurement. We discuss the influence of decoherence and relaxation, as well as of imperfect control over experimental parameters. We show that the desired states can be generated on time scales much faster than the qubit decoherence rates.
We analyze the detection of itinerant photons using a quantum nondemolition measurement. An important example is the dispersive detection of microwave photons in circuit quantum electrodynamics, which can be realized via the nonlinear interaction between photons inside a superconducting transmission line resonator. We show that the back action due to the continuous measurement imposes a limit on the detector efficiency in such a scheme. We illustrate this using a setup where signal photons have to enter a cavity in order to be detected dispersively. In this approach, the measurement signal is the phase shift imparted to an intense beam passing through a second cavity mode. The restrictions on the fidelity are a consequence of the quantum Zeno effect, and we discuss both analytical results and quantum trajectory simulations of the measurement process.
We review recent progress in the field of optomechanics, where one studies the effects of radiation on mechanical motion. The paradigmatic example is in optical cavity with a movable mirror. where the radiation pressure can induce cooling. amplification and nonlinear dynamics of the mirror.
We suggest a straightforward approach to the calculation of the dephasing rate in a fermionic system, which correctly keeps track of the crucial physics of Pauli blocking. Starting from Fermi's golden rule, the dephasing rate can be written as an integral over the frequency transferred between system and environment, weighted by their respective spectral densities. We show that treating the full many-fermion system instead of a single particle automatically enforces the Pauli principle. Furthermore, we explain the relation to diagrammatics. Finally, we show how to treat the more involved strong-coupling case when interactions appreciably modify the spectra. This is relevant for the situation in disordered metals, where screening is important.
We study the quantum measurement of a cantilever using a parametrically coupled electromagnetic cavity which is driven at the two sidebands corresponding to the mechanical motion. This scheme, originally due to Braginsky et al (Braginsky V, Vorontsov Y I and Thorne K P 1980 Science 209 547), allows a back-action free measurement of one quadrature of the cantilever's motion, and hence the possibility of generating a squeezed state. We present a complete quantum theory of this system, and derive simple conditions on when the quantum limit on the added noise can be surpassed. We also study the conditional dynamics of the measurement, and discuss how such a scheme (when coupled with feedback) can be used to generate and detect squeezed states of the oscillator. Our results are relevant to experiments in optomechanics, and to experiments in quantum electromechanics employing stripline resonators coupled to mechanical resonators.
We investigate the time evolution of a charge qubit subject to quantum telegraph noise produced by a single electronic defect level. We obtain results for the time evolution of the coherence that are strikingly different from the usual case of a harmonic-oscillator bath (Gaussian noise). When the coupling strength crosses a certain temperature-dependent threshold, we observe coherence oscillations in the strong-coupling regime. Moreover, we present the time evolution of the echo signal in a spin-echo experiment. Our analysis relies on a numerical evaluation of the exact solution for the density matrix of the qubit.
We present the results of theoretical and experimental studies of dispersively coupled (or 'membrane in the middle') optomechanical systems. We calculate the linear optical properties of a high finesse cavity containing a thin dielectric membrane. We focus on the cavity's transmission, reflection and finesse as a function of the membrane's position along the cavity axis and as a function of its optical loss. We compare these calculations with measurements and find excellent agreement in cavities with empty-cavity finesses in the range 10(4)-10(5). The imaginary part of the membrane's index of refraction is found to be similar to 10(-4). We calculate the laser cooling performance of this system, with a particular focus on the less-intuitive regime in which photons 'tunnel' through the membrane on a timescale comparable to the membrane's period of oscillation. Lastly, we present calculations of quantum non-demolition measurements of the membrane's phonon number in the low signal-to-noise regime where the phonon lifetime is comparable to the QND readout time.
Optomechanical set-ups use radiation pressure to manipulate macroscopic mechanical objects. Two experiments transfer this concept to the fields of superconducting microwave circuits and cold-atom physics.
The destruction of quantum-mechanical phase coherence by a fluctuating quantum bath has been investigated mostly for a single particle. However, for electronic transport through disordered samples and mesoscopic interference setups, we have to treat a many-fermion system subject to a quantum bath. Here, we review a novel technique for treating this situation in the case of ballistic interferometers, and discuss its application to the electronic Mach-Zehnder setup. We use the results to bring out the main features of decoherence in a many-fermion system and briefly discuss the same ideas in the context of weak localization.
Macroscopic mechanical objects and electromagnetic degrees of freedom can couple to each other through radiation pressure. Optomechanical systems in which this coupling is sufficiently strong are predicted to show quantum effects and are a topic of considerable interest. Devices in this regime would offer new types of control over the quantum state of both light and matter(1-4), and would provide a new arena in which to explore the boundary between quantum and classical physics(5-7). Experiments so far have achieved sufficient optomechanical coupling to laser- cool mechanical devices(8-12), but have not yet reached the quantum regime. The outstanding technical challenge in this field is integrating sensitive micromechanical elements ( which must be small, light and flexible) into high- finesse cavities ( which are typically rigid and massive) without compromising the mechanical or optical properties of either. A second, and more fundamental, challenge is to read out the mechanical element's energy eigenstate. Displacement measurements ( no matter how sensitive) cannot determine an oscillator's energy eigenstate(13), and measurements coupling to quantities other than displacement(14-16) have been difficult to realize in practice. Here we present an optomechanical system that has the potential to resolve both of these challenges. We demonstrate a cavity which is detuned by the motion of a 50-nm- thick dielectric membrane placed between two macroscopic, rigid, high- finesse mirrors. This approach segregates optical and mechanical functionality to physically distinct structures and avoids compromising either. It also allows for direct measurement of the square of the membrane's displacement, and thus in principle the membrane's energy eigenstate. We estimate that it should be practical to use this scheme to observe quantum jumps of a mechanical system, an important goal in the field of quantum measurement.
We have explored the nonlinear dynamics of an optomechanical system consisting of an illuminated Fabry-Perot cavity, one of whose end mirrors is attached to a vibrating cantilever. The backaction induced by the bolometric light force produces negative damping such that the system enters a regime of nonlinear oscillations. We study the ensuing attractor diagram describing the nonlinear dynamics. A theory is presented that yields quantitative agreement with experimental results. This includes the observation of a regime where two mechanical modes of the cantilever are excited simultaneously.
We propose a measure for the "size" of a quantum superposition of two many-body states with (supposedly) macroscopically distinct properties by counting how many single-particle operations are needed to map one state onto the other. This definition gives sensible results for simple, analytically tractable cases and is consistent with a previous definition restricted to Greenberger-Horne-Zeilinger-like states. We apply our measure to the experimentally relevant, nontrivial example of a superconducting three-junction flux qubit put into a superposition of left- and right-circulating supercurrent states, and we find the size of this superposition to be surprisingly small.
We review the quantum theory of cooling of a mechanical oscillator subject to the radiation pressure force due to light circulating inside a driven optical cavity. Such optomechanical setups have been used recently in a series of experiments by various groups to cool mechanical oscillators (such as cantilevers) by factors reaching 10(5), and they may soon go to the ground state of mechanical motion. We emphasize the importance of the sideband-resolved regime for ground state cooling, where the cavity ring-down rate is smaller than the mechanical frequency. Moreover, we illustrate the strong coupling regime, where the cooling rate exceeds the cavity ring-down rate and where the driven cavity resonance and the mechanical oscillation hybridize.
We consider a generic optomechanical system, consisting of a driven optical cavity and a movable mirror attached to a cantilever. Systems of this kind (and analogues) have been realized in many recent experiments. It is well known that these systems can exhibit an instability towards a regime where the cantilever settles into self-sustained oscillations. In this paper, we briefly review the classical theory of the optomechanical instability, and then discuss the features arising in the quantum regime. We solve numerically a full quantum master equation for the coupled system, and use it to analyze the photon number, the cantilever's mechanical energy, the phonon probability distribution and the mechanical Wigner density, as a function of experimentally accessible control parameters. When a suitable dimensionless 'quantum parameter' is sent to zero, the results of the quantum mechanical model converge towards the classical predictions. We discuss this quantum-to-classical transition in some detail.
We consider a ballistic Mach-Zehnder interferometer for electrons propagating chirally in one dimension (such as in an integer quantum Hall effect edge channel). In such a system, dephasing occurs when the finite range of the interaction potential is taken into account. Using the tools of bosonization, we discuss the decay of coherence as a function of propagation distance and energy. We supplement the exact solution by a semiclassical approach that is physically transparent and is exact at high energies. In particular, we study in more detail the recently predicted universal power-law decay of the coherence at high energies, where the exponent does not depend on the interaction strength. In addition, we compare against Keldysh perturbation theory, which works well for small interaction strength at short propagation distances.
We describe a scheme for the efficient generation of microwave photon pairs by parametric down-conversion in a superconducting transmission line resonator coupled to a Cooper-pair box serving as an artificial atom. By properly tuning the first three levels with respect to the cavity modes, the down-conversion probability may reach the percentage level at good fidelity. We show this by numerically simulating the dissipative quantum dynamics of the coupled cavity-box system and discussing the effects of dephasing and relaxation in the solid state environment. The setup analyzed here might form the basis for a future on-chip source of entangled microwave photons, e.g., using Franson's idea of energy-time entanglement.
In this work we implement the self-consistent Thomas-Fermi-Poisson approach to a homogeneous two-dimensional electron system. We compute the electrostatic potential produced inside a semiconductor structure by a quantum point contact (QPC) placed at the surface of the semiconductor and biased with appropriate voltages. The model is based on a semianalytical solution of the Laplace equation. Starting from the calculated confining potential, the self-consistent (screened) potential and the electron densities are calculated for finite temperature and magnetic field. We observe that there are mainly three characteristic rearrangements of the incompressible edge states which will determine the current distribution near a QPC.
We present a quantum-mechanical theory of the cooling of a cantilever coupled via radiation pressure to an illuminated optical cavity. Applying the quantum noise approach to the fluctuations of the radiation pressure force, we derive the optomechanical cooling rate and the minimum achievable phonon number. We find that reaching the quantum limit of arbitrarily small phonon numbers requires going into the good-cavity (resolved phonon sideband) regime where the cavity linewidth is much smaller than the mechanical frequency and the corresponding cavity detuning. This is in contrast to the common assumption that the mechanical frequency and the cavity detuning should be comparable to the cavity damping.
This is the first in a series of two papers, in which we revisit the problem of decoherence in weak localization. The basic challenge addressed in our work is to calculate the decoherence of electrons interacting with a quantum-mechanical environment while taking proper account of the Pauli principle. First, we review the usual influence functional approach valid for decoherence of electrons due to classical noise, showing along the way how the quantitative accuracy can be improved by properly averaging over closed (rather than unrestricted) random walks. We then use a heuristic approach to show how the Pauli principle may be incorporated into a path-integral description of decoherence in weak localization. This is accomplished by introducing an effective modification of the quantum noise spectrum, after which the calculation proceeds analogous to the case of classical noise. Using this simple but efficient method, which is consistent with much more laborious diagrammatic calculations, we demonstrate how the Pauli principle serves to suppress the decohering effects of quantum fluctuations of the environment, and essentially confirm the classic result of Altshuler, Aronov, and Khmelnitskii [J. Phys. C 15, 7367 (1982)] for the energy-averaged decoherence rate, which vanishes at zero temperature. Going beyond that, we employ our method to calculate explicitly the leading quantum corrections to the classical decoherence rates and to provide a detailed analysis of the energy dependence of the decoherence rate. The basic idea of our approach is general enough to be applicable to the decoherence of degenerate Fermi systems in contexts other than weak localization as well. Paper II will provide a more rigorous diagrammatic basis for our results by rederiving them from a Bethe-Salpeter equation for the Cooperon.
In a 'controlled dephasing' experiment, an interferometer loses its coherence owing to entanglement of the interfering electron with a controlled quantum system, which effectively is equivalent to path detection. In previous experiments, only partial dephasing was achieved owing to weak interactions between many detector electrons and the interfering electron, leading to a gaussian-phase randomizing process. Here, we report the opposite extreme, where interference is completely destroyed by a few (that is, one to three) detector electrons, each of which has a strong randomizing effect on the phase. We observe quenching of the interference pattern in a periodic, lobe-type fashion as the detector current is varied, and with a peculiar V-shaped dependence on the detector's partitioning. We ascribe these features to the non-gaussian nature of the noise, which is also important for qubit decoherence. In other words, the interference seems to be highly sensitive to the full counting statistics of the detector's shot noise.
A non-perturbative treatment is developed for the dephasing produced by the shot noise of a one-dimensional electron channel. It is applied to two systems: a charge qubit and the electronic Mach-Zehnder interferometer (MZI), both of them interacting with an adjacent partitioned electronic channel acting as a detector. We find that the visibility (interference contrast) can display oscillations as a function of detector voltage and interaction time. This is a unique consequence of the non-Gaussian properties of the shot noise, and only occurs in the strong coupling regime, when the phase contributed by a single electron exceeds p. The resulting formula reproduces the recent surprising experimental observations reported in (I Neder et al 2006 Preprint cond-mat/0610634), and indicates a general explanation for similar visibility oscillations observed earlier in the MZI at large bias voltage. We explore in detail the full pattern of oscillations as a function of coupling strength, voltage and time, which might be observable in future experiments.
This is the second in a series of two papers (Papers I and II) on the problem of decoherence in weak localization. In Paper I, we discussed how the Pauli principle could be incorporated into an influence functional approach for calculating the cooperon propagator and the magnetoconductivity. In the present paper, we check and confirm the results so obtained by diagrammatically setting up a Bethe-Salpeter equation for the cooperon, which includes self-energy and vertex terms on an equal footing and is free from both infrared and ultraviolet divergences. We then approximately solve this Bethe-Salpeter equation by the ansatz (C) over bar (t)=(C) over bar (0)(t)e(-F(t)), where the decay function F(t) determines the decoherence rate. We show that in order to obtain a divergence-free expression for the decay function F(t), it is sufficient to calculate (C) over bar (1)(t), the cooperon in the position-time representation to first order in the interaction. Paper II is independent of Paper I and can be read without detailed knowledge of the latter.
We present a technique for treating many particles moving inside a ballistic interferometer, under the influence of a quantum-mechanical environment (phonons, photons, Nyquist noise, etc.). Our approach is based on solving the coupled Heisenberg equations of motion of the many-particle system and the bath, and it is inspired by the quantum Langevin method known for the Caldeira-Leggett model. As a first application, we treat a fermionic Mach-Zehnder interferometer. In particular, we discuss the dephasing rate and present full analytical expressions for the leading corrections to the current noise, brought about by the coupling to the quantum bath. In contrast to a single-particle model, both the Pauli principle as well as the contribution of hole-scattering processes become important, and are automatically taken into account in this method.
We analyze the nonlinear dynamics of a high-finesse optical cavity in which one mirror is mounted on a flexible mechanical element. We find that this system is governed by an array of dynamical attractors, which arise from phase locking between the mechanical oscillations of the mirror and the ringing of the light intensity in the cavity. We develop an analytical theory to map out the diagram of attractors in parameter space, derive the slow amplitude dynamics of the system, including thermal fluctuations, and suggest a scheme for exploiting the dynamical multistability in the measurement of small displacements.
We investigate the effect of local electron correlations on transport through parallel quantum dots. The linear conductance as a function of gate voltage is strongly affected by the interplay of the interaction U and quantum interference. We find a pair of novel correlation-induced resonances separated by an energy scale that depends exponentially on U. The effect is robust against a small detuning of the dot energy levels and occurs for arbitrary generic tunnel couplings. It should be observable in experiments on the basis of presently existing double-dot setups.
We calculate electron and nuclear spin relaxation rates in a quantum dot due to the combined action of Nyquist noise and electron-nuclei hyperfine or spin-orbit interactions. The relaxation rate is linear in the resistance of the gate circuit and, in the case of spin-orbit interaction, it depends essentially on the orientations of both the static magnetic field and the fluctuating electric field, as well as on the ratio between Rashba and Dresselhaus interaction constants. We provide numerical estimates of the relaxation rate for typical system parameters, compare our results with other, previously discussed mechanisms, and show that the Nyquist mechanism can have an appreciable effect for experimentally relevant systems.
We analyze a model system of fermions in a harmonic oscillator potential under the influence of a dissipative environment: The fermions are subject to a fluctuating force deriving from a bath of harmonic oscillators. This represents an extension of the well-known Caldeira-Leggett model to the case of many fermions. Using the method of bosonization, we calculate one- and two-particle Green's functions of the fermions. We discuss the relaxation of a single extra particle added above the Fermi sea, considering also dephasing of a particle added in a coherent superposition of states. The consequences of the separation of center-of-mass and relative motion, the Pauli principle, and the bath-induced effective interaction are discussed. Finally, we extend our analysis to a more generic coupling between system and bath, which results in complete thermalization of the system.
We study the Mott-insulator transition of bosonic atoms in optical lattices. Using perturbation theory, we analyze the deviations from the mean-field Gutzwiller ansatz, which become appreciable for intermediate values of the ratio between hopping amplitude and interaction energy. We discuss corrections to number fluctuations, order parameter, and compressibility. In particular, we improve the description of the short-range correlations in the one-particle density matrix. These corrections are important for experimentally observed expansion patterns, both for bulk lattices and in a confining trap potential.
We analyze shot noise under the influence of dephasing in an electronic Mach-Zehnder interferometer, of the type that was realized recently [Yang Ji et al., Nature (London) 422, 415 (2003)]. Using a model of dephasing by a fluctuating classical field, we show how the usual partition noise expression T(1-T) is modified. We study the dependence on the power spectrum of the field, which is impossible in simpler approaches such as the dephasing terminal, against which we compare. We remark on shot noise as a tool to distinguish thermal smearing from genuine dephasing.
We present a theoretical study of the influence of dephasing on shot noise in an electronic Mach-Zehnder interferometer. In contrast to phenomenological approaches, we employ a microscopic model where dephasing is induced by the fluctuations of a classical potential. This enables us to treat the influence of the environment's fluctuation spectrum on the shot noise. We compare against the results obtained from a simple classical model of incoherent transport, as well as those derived from the phenomenological dephasing terminal approach, arguing that the latter runs into a problem when applied to shot-noise calculations for interferometer geometries. From our model, we find two different limiting regimes: If the fluctuations are slow as compared to the time scales set by voltage and temperature, the usual partition noise expression T(1-T ) is averaged over the fluctuating phase difference. For the case of "fast" fluctuations, it is replaced by a more complicated expression involving an average over transmission amplitudes. The full current noise also contains other contributions, and we provide a general formula, as well as explicit expressions and plots for specific examples.
We analyze a model system of fermions in a harmonic oscillator potential under the influence of a fluctuating force generated by a bath of harmonic oscillators. This represents an extension of the well-known Caldeira-Leggett model to the case of many fermions. Using the method of bosonization, we calculate Green's functions and discuss relaxation and dephasing of a single extra particle added above the Fermi sea. We also extend our analysis to a more generic coupling between system and bath that results in complete thermalization of the system.
We investigate the inelastic spin-flip rate for electrons in a quantum dot due to their contact hyperfine interaction with lattice nuclei. In contrast to other works, we obtain a spin-phonon coupling term from this interaction by taking directly into account the motion of nuclei in the vibrating lattice. In the calculation of the transition rate the interference of first and second orders of perturbation theory turns out to be essential. It leads to a suppression of relaxation at long phonon wavelengths, when the confining potential moves together with the nuclei embedded in the lattice. At higher frequencies (or for a fixed confining potential), the zero-temperature rate is proportional to the frequency of the emitted phonon. We address both the transition between Zeeman sublevels of a single electron ground state as well as the triplet-singlet transition, and we provide numerical estimates for realistic system parameters. The mechanism turns out to be less efficient than electron-nuclei spin relaxation involving piezoelectric electron-phonon coupling in a GaAs quantum dot.
We analyze dephasing in a model system where electrons tunnel sequentially through a symmetric interference setup consisting of two single-level quantum dots. Depending on the phase difference between the two tunneling paths, this may result in perfect destructive interference. However, if the dots are coupled to a bath, it may act as a which-way detector, leading to partial suppression of the phase coherence and the reappearance of a finite tunneling current. In our approach, the tunneling is treated in leading order whereas coupling to the bath is kept to all orders [using P(E) theory]. We discuss the influence of different bath spectra on the visibility of the interference pattern, including the distinction between "mere renormalization effects" and "true dephasing."
We consider an experimentally relevant model of a geometric ratchet in which particles undergo drift and diffusive motion in a two-dimensional periodic array of obstacles, and which is used for the continuous separation of particles subject to different forces. The macroscopic drift velocity and diffusion tensor are calculated by a Monte Carlo simulation and by a master-equation approach, using the corresponding microscopic quantities and the shape of the obstacles as input. We define a measure of separation quality and investigate its dependence on the applied force and the shape of the obstacles.
We consider the visibility of the Aharonov-Bohm effect for cotunneling transport through a clean one-channel ring coupled to a fluctuating magnetic flux. We concentrate on the modification of the destructive interference at Phi(0)/2 by the fluctuating flux, since changes in the magnitude of the current away from this point can also be caused by renormalization effects and do not necessarily indicate dephasing. For fluctuations arising from the Nyquist noise in an external coil at T = 0, the suppression of the destructive interference shows up only in a contribution proportional to V-3, and therefore does not affect the linear conductance. In this sense, the Nyquist bath does not lead to dephasing in the linear transport regime at zero temperature in our model.
We analyze a model of a nonlinear bath consisting of a single two-level system coupled to a linear bath (a classical noise force in the limit considered here). This allows us to study the effects of a nonlinear, non-Markoffian bath in a particularly simple situation. We analyze the effects of this bath onto the dynamics of a spin by calculating the decay of the equilibrium correlator of the z-component of the spin. The exact results are compared with those obtained using three commonly used approximations: a Markoffian master equation for the spin dynamics, a weak-coupling approximation, and the substitution of a linear bath for the original nonlinear bath.
We consider a noninteracting system of electrons on a clean one-channel Aharonov-Bohm ring that is threaded by a fluctuating magnetic flux. The flux derives from a Caldeira-Leggett bath of harmonic oscillators. We address the influence of the bath on the following properties: one- and two-particle Green's functions, dephasing, persistent current, and visibility of the Aharonov-Bohm effect in cotunneling transport through the ring. For the bath spectra considered here (including Nyquist noise of an external coil), we find no dephasing in the linear transport regime at zero temperature.
We consider a system of two superconducting islands, each of which is coupled to a bulk superconductor via Josephson tunneling. One of the islands represents a "Cooper-pair box," i.e., it is an effective two-level system. The other island has a smaller charging energy and approximates a harmonic oscillator. A capacitive interaction between the islands results in a dependence of the oscillator frequency on the quantum state of the box. Placing the latter in a coherent superposition of its eigenstates and exciting coherent oscillations in the large island will lead to a phase shift of these oscillations depending on the box quantum state, thereby producing a coherent superposition of two "mesoscopically distinct" quantum states in the large island.
We consider a system of two superconducting islands, each of which is coupled to a bulk superconductor via Josephson tunneling. One of the islands represents a "Cooper-pair box", i.e. it is an effective two-level system. The other island has a smaller charging energy and approximates a harmonic oscillator. A capacitive interaction between the islands results in a dependence of the oscillator frequency on the quantum state of the box. Placing the latter in a coherent superposition of its eigenstates and exciting coherent oscillations in the large island will lead to a phase-shift of these oscillations depending on the box quantum state, thereby producing a coherent superposition of two "mesoscopically distinct" quantum states in the large island.
|
0.996921 |
Review your cat’s current diet. I will make suggestions on how to improve it, and recommend supplements, if indicated.
Review yourcat’s current environment to ensure that it’s free from toxins and common psychological stressors.
Discuss your cat’s play and exercise routine, and offer suggestions to improve it, if needed.
Review your cat’s diagnosis and treatment plan and offer support for implementing your veterinarian’s suggestions.
Offer suggestions and tools for caring for a sick cat, both physically/logistically and emotionally.
Provide resources for your cat’s specific health condition, including what alternative treatments may be available for your pet’s condition.
Behavior issues can be challenging to address remotely. I can advise you on basic behavior problems such as cat to cat introductions, how to create a stimulating environment for your cats, and minor aggression issues. I will refer you to a feline behaviorist for more complex problems.
Determine whether it’s time to let your cat go. Euthanasia is never a simple decision, and it’s different for each individual cat and person. I can help you sort through the emotions and the facts surrounding this difficult issue.
Talk through available options for what to do with your cat’s body, and help you find the one that is the best solution for you and your family.
Help you navigate through your grief. Sometimes, being able to talk to someone who has experienced this devastating loss can be very helpful.
For a list of recommended brands, please read The Best Food for Your Cat: My Recommendations. If you’d like me to evaluate a brand not represented on this list, I’d be happy to do so for a $65 fee per formula.
I use PayPal to invoice for all consultation fees. You do not need a PayPal account to pay via PayPal, you can pay with your credit card. The fee of $65 for the first 15 minutes is due prior to the consultation. I bill for any additional time after the consultation. Payment is due upon receipt.
Click here to contact me to schedule a consultation.
Click here for more information about me.
Consultations are not a substitute for veterinary care. Suggestions for diet changes and supplements should be discussed with your pet’s veterinarian.
Glycine and GABA mediate synaptic inhibition in matured circuits. Glycinergic and GABAergic inhibition are attributed predominantly to caudal and rostral brain regions, respectively. Nonetheless, both neurotransmitters coexist throughout the whole brain.
Mixed inhibitory synaptic transmission, with co-release of glycine and GABA from the same presynaptic terminal, takes place in various caudal brain regions, such as auditory brainstem, ventral respiratory group, cerebellum, and spinal cord [1–8]. In more rostral brain regions, like the hippocampus (HC), GABA is utilized for inhibitory synaptic transmission [9, 10], while glycine co-released from glutamatergic terminals can modulate NMDA receptor (NMDAR)-mediated signaling [11, 12]. Accordingly, glycine transporters (GlyTs) and GABA transporters (GATs) are widely expressed in astrocytes and neurons [13–16] to enable neurotransmitter clearance, reuptake, and modulation of neuronal signaling [15, 17, 18]. Astrocytes mainly express GlyT1 (Slc6A9), GAT-1 (Slc6A1), and/or GAT-3 (Slc6A11), which mediate an inward current and concomitant depolarization . In addition, astrocytes can express ionotropic receptors for glycine (GlyRs) and GABA (GABAARs) [20–26].
In a previous study, we analyzed the expression of functional GlyTs and GATs in astrocytes in the lateral superior olive (LSO) – a conspicuous auditory brainstem center whose main inhibitory input is glycinergic after early postnatal development [2, 3]. Astrocytes in this nucleus express functional GlyT1, GAT-1, and GAT-3 . To study the region-dependent heterogeneity of GlyT and GAT expression in astrocytes, we chose two systems that contrast the LSO with respect to the utilization of glycine and GABA for inhibitory synaptic transmission: 1) The inferior colliculus (IC) residing in the midbrain belongs to the rostral part of the auditory brainstem and serves as a major hub for processing auditory cues [4, 27]. Afferents from all auditory brainstem centers converge in the lateral lemniscal tract (LL) and project to the IC (Fig. 1a) [4, 8, 28]. The inhibitory part of the tract consists of glycinergic and GABAergic projections [8, 29–31]. Accordingly, IC astrocytes can be proposed to express GlyTs and GATs to account for neurotransmitter uptake. GlyT1 expression was found in the IC and attributed to glial cells [11, 13, 32]. Likewise, GAT-1 and GAT-3 are present in the IC [33, 34]. However, GlyTs and GATs in IC astrocytes have not yet been electrophysiologically characterized. 2) The HC is the second system of interest. Whereas its main circuitry is glutamatergic [35, 36], inhibitory synaptic transmission arises from GABAergic interneurons [9, 10]. In line with this, astrocytes in the stratum radiatum express GAT-3, whereas GAT-1 has been attributed to interneurons [21, 37]. Glycine is co-released from glutamatergic terminals and modulates NMDAR-mediated signaling [11, 12]. For uptake of released glycine, GlyT1 is expressed in astrocytes and presynaptic terminals [11, 38–40]. However, functionality of GlyT1 in HC astrocytes has not been demonstrated prior to this study.
Here we analyzed the heterogeneity of expression and function of inhibitory neurotransmitter transporters in astrocytes from IC and HC. Using whole-cell patch-clamp recordings from sulforhodamine 101 (SR101)-labeled astrocytes [19, 41, 42] and concomitant application of glycine or GABA to provoke transporter activation, together with single-cell reverse transcription (RT)-PCR, our results demonstrate that all IC astrocytes and about half of the HC astrocytes expressed functional GlyT1, GAT-1, and GAT-3. In contrast, GlyT2, GAT-2, and BGT-1 were never found. From our experiments, we can exclude that transporter currents were contaminated by respective ionotropic receptor-mediated currents. As expected, GAT activity was much stronger in HC astrocytes compared to IC astrocytes. Concurrently, our results show that IC and HC astrocytes exhibit heterogeneous properties, which reflect region-specific adaptation to local circuitry.
We used tissue from C57BL/6 wild type mice of both genders at postnatal days 10-12 for our experiments. Mice were treated in accordance with the German law for conducting animal experiments and the NIH guidelines for the care and use of laboratory animals. Acute coronal slices were retrieved from midbrain and forebrain containing IC and HC, respectively. After decapitation, the brain was quickly transferred into ice-cold cutting solution containing (in mM): 26 NaHCO3, 1.25 NaH2PO4, 2.5 KCl, 1 MgCl2, 2 CaCl2, 260 D-glucose, 2 Na-pyruvate, and 3 myo-inositol, pH 7.4, bubbled with carbogen (95% O2, 5% CO2). 270 μm thick slices were cut using a vibratome (VT1200 S, Leica). Thereafter, slices were transferred to artificial cerebrospinal fluid (ACSF) containing (in mM): 125 NaCl, 25 NaHCO3, 1.25 NaH2PO4, 2.5 KCl, 1 MgCl2, 2 CaCl2, 10 D-glucose, 2 Na-pyruvate, 3 myo-inositol, and 0.44 ascorbic acid, pH 7.4, bubbled with carbogen. Slices were incubated for 30 min at 37 °C in 0.5-1 μM SR101 and washed for another 30 min at 37 °C in SR101-free ACSF. This resulted in reliable labeling of astrocytes as shown before [19, 41]. Thereafter, slices were kept at room temperature (20-24 °C). All chemicals were purchased from Sigma-Aldrich or AppliChem, if not stated otherwise.
Whole-cell patch-clamp experiments were done as described before . Briefly, the recording chamber was placed at an upright microscope equipped with infrared differential interference contrast (Eclipse FN1, Nikon, 60× water immersion objective, N.A. 1.0) and an infrared video camera (XC-ST70CE, Hamamatsu). Voltages and currents were recorded using a double patch-clamp EPC10 amplifier and PatchMaster software (HEKA Elektronik). The patch pipettes were pulled from borosilicate glass capillaries (GB150(F)-8P, Science Products) using a horizontal puller (P-87, Sutter Instruments). Pipettes had a resistance of 3-7 MΩ using an intracellular solution containing (in mM): 140 K-gluconate, 5 EGTA (ethylene glycol-bis(2-aminoethyl ether)-N,N,N′,N′-tetraacetic acid), 10 Hepes (N-(2-hydroxyethyl)piperazine-N′-2-ethanesulfonic acid), 1 MgCl2, 2 Na2ATP, and 0.3 Na2GTP, pH 7.30. In some experiments the intracellular solution contained biocytin (0.3%, Biomol) or Alexa Fluor (AF) 568 (100 μM, Invitrogen) to allow the postfixational reconstruction of IC and HC neurons, respectively. Biocytin was labeled with NeutrAvidin-horseradish peroxidase conjugate (1:1000; Invitrogen) .
Astrocytes and neurons in the central nucleus of the IC (Fig. 1a; Additional file 1: Figure S1A1) and CA1 region of the HC (Fig. 1d; Additional file 1: Figure S1A2) were clamped to a holding potential (EH) of −85 mV and −70 mV, respectively. The cells were hyper- and depolarized using a standard step protocol ranging from −150 to +50 mV with 10 mV increments. The resulting current traces were sampled at 50 kHz. We performed a standard leak subtraction protocol (p/4) to isolate currents mediated by voltage-activated channels. Four step protocols with the step size reduced to 25% were executed repetitively. Thereafter, the recorded current traces were added together and subtracted from the initial recording (Fig. 1c1-2, f1-2, Additional file 1: Figure S1B).
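The logic of the p/4 protocol can be sketched in a few lines of Python (a toy model, not the acquisition code actually used in PatchMaster/IGOR Pro): a linear leak scales with the step amplitude and is fully sampled by four quarter-amplitude sub-pulses, whereas a voltage-gated component only activates during the full step and therefore survives the subtraction. The conductance value and activation threshold below are hypothetical.

```python
def record(v_step_mV, g_leak_nS=1.0, i_gated_nA=2.0, v_thresh_mV=40.0):
    """Toy current response to a voltage step: ohmic leak + voltage-gated component."""
    leak = g_leak_nS * v_step_mV / 1e3          # nA; linear in the step amplitude
    gated = i_gated_nA if v_step_mV >= v_thresh_mV else 0.0  # activates above threshold
    return leak + gated

def p4_leak_subtraction(v_step_mV):
    """Full-amplitude response minus the sum of four quarter-amplitude responses."""
    full = record(v_step_mV)
    # the quarter-amplitude sub-pulses stay below threshold and sample only the leak
    sub = sum(record(v_step_mV / 4.0) for _ in range(4))
    return full - sub

print(round(p4_leak_subtraction(60.0), 6))  # leak cancels; only the gated component remains
```

Because the leak is linear, four quarter-steps sum to exactly the leak of the full step, so the difference isolates the nonlinear, voltage-activated current.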
Glycine and GABA, both 1 mM in ACSF, were applied in two ways: 1) In experiments analyzing the maximal neurotransmitter-induced current and depolarization (Figs. 2 and 4) both transmitters were administered using a peristaltic pump (Reglo, Ismatec) at a rate of 1-2 ml/min. Data were sampled at 100 Hz. We monitored putative changes of membrane resistance (RM) and series resistance (RS) every 30 s (≙ 0.033 Hz) using hyperpolarizing test pulses (ΔU = 5 mV) . The resulting currents were sampled at 20 kHz. 2) When pharmacologically isolating the transporters mediating membrane currents (Figs. 3a and 5a) neurotransmitters were applied via focal pressure injection (PDES-2 T, NPI; 12 psi). For this purpose, a pipette with a resistance of 3-7 MΩ was filled with glycine or GABA and positioned approximately 20 μm away from the recorded cell . Membrane currents were sampled at 1 kHz. In order to additionally detect short-lasting receptor-mediated changes in membrane conductance during focal application of neurotransmitters (Figs. 3c and 5 c, Additional file 2: Figure S2B-E), hyperpolarizing test pulses were applied at 1 Hz (Additional file 2: Figure S2A) and RM of astrocytes and neurons was calculated . All recordings were low-pass filtered at 2.9 kHz. Data were processed and analyzed using “IGOR Pro 6.2” software (Wavemetrics). Measurements were rejected if RS exceeded 15 MΩ.
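The RM estimate from the hyperpolarizing test pulses follows Ohm's law; a minimal sketch with hypothetical numbers (not our recorded data), assuming the current change ΔI is read out at steady state:

```python
def membrane_resistance_MOhm(delta_u_mV, delta_i_pA, r_series_MOhm=0.0):
    """Estimate R_M from a small test pulse via Ohm's law.

    1 mV / 1 pA = 1 GOhm, hence the factor 1e3 to obtain MOhm.
    The series resistance can optionally be subtracted from the total.
    """
    return delta_u_mV / delta_i_pA * 1e3 - r_series_MOhm

# hypothetical steady-state current change of 600 pA evoked by the 5 mV pulse
print(round(membrane_resistance_MOhm(5.0, 600.0), 1))  # 8.3 MOhm
```

Tracking this quantity at 1 Hz (rather than every 30 s) is what allows short-lasting, receptor-mediated conductance increases to appear as a transient drop in RM.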
The patch pipette was filled with 3 μl of intracellular solution. Next, astrocytes were patch-clamped as described in the preceding paragraph. After determination of the I-V relationship, the cytoplasm was sucked into the patch pipette, which was then retracted from the slice. The remaining cell parts were sucked into the patch pipette and the intracellular solution containing the cytoplasm was transferred to a 50 μl PCR reaction tube containing 3 μl of diethyl pyrocarbonate (0.1%)-treated water (ThermoFisher Scientific). To avoid degradation by RNAse activity, the sample was immediately frozen in liquid nitrogen and stored at −80 °C. Samples were rejected if the patch was unstable during cell extraction or if fragments from neighboring cells stuck to the pipette.
For transcription of mRNA into cDNA reverse transcriptase (SuperScript III, 100 U; ThermoFisher Scientific), RNAse inhibitor (RNAseOUT, 40 U; ThermoFisher Scientific), random hexamers (50 μM, ThermoFisher Scientific), first-strand buffer (ThermoFisher Scientific), and dithiothreitol (DTT; 10 mM; ThermoFisher Scientific) were added to the frozen sample (total volume: 13 μl). Next, RT was performed for 1 h at 37 °C. Subsequently, a multiplex PCR was performed to identify transcripts of inhibitory neurotransmitter transporters. MPprimer software was used to create primer sequences (Table 1). Primers were chosen to be located on different exons. Thus, amplification of DNA, which contains exons and introns, would result in larger product length compared to the amplicon of spliced mRNA that could be distinguished after gel electrophoresis. The PCR reaction mix contained: 5× PCR buffer including dNTPs (50 μM; Bioline), Taq Polymerase (4 U, Bioline), 200 nM primers (Eurofins Scientific), 10 μl of the RT reaction product, H2O (ad 50 μl, Ampuwa, Fresenius Kabi). Fifty PCR cycles were performed: denaturation for 25 s at 94 °C, annealing for 2 min (first 5 cycles) and 45 s (subsequent 45 cycles) at 51 °C, and elongation for 25 s at 72 °C. Afterwards, a second PCR with nested primers and 40 cycles was conducted: denaturation for 25 s at 94 °C, annealing for 2 min (first 5 cycles) and 45 s (subsequent 35 cycles) at 54 °C, and elongation for 25 s at 72 °C. The second PCR reaction mix contained Platinum Taq Polymerase (1 U, ThermoFisher Scientific), 10× PCR buffer (MgCl2-free; ThermoFisher Scientific), 2.5 mM MgCl2 (ThermoFisher Scientific), 50 μM dNTPs (Bioline), nested primers (200 nM, Eurofins Scientific), and 2 μl of the first PCR reaction product.
Positive controls were performed with mRNA extracted from mouse brainstem using an mRNA extraction kit (Dynabeads mRNA Purification Kit, Invitrogen; Additional file 3: Figure S3). For negative controls, the patch pipette was positioned close to the tissue in the recording chamber and ACSF was sucked into the pipette. Subsequently, the sample was frozen in liquid nitrogen and used for RT-PCR (Additional file 3: Figure S3). All amplified PCR products were loaded on an agarose gel (1.5%), labeled with 1% ethidium bromide (Carl Roth), and analyzed using a transilluminator (Biometra TI 1). To determine the PCR product length, we used a standard DNA ladder (HyperLadder 50 bp, Bioline).
Initial experiments showed that some HC astrocytes were devoid of any target RNA (GlyTs or GATs). To prove successful RNA extraction from HC astrocytes, transcripts for the inwardly rectifying K+ (Kir) channel 4.1 were detected, which are present in all HC astrocytes .
The labeling with SR101 - used for a priori identification of IC and HC astrocytes - and AF568 was documented as described before using a confocal microscope (Leica TCS SP5 LSM: HC PL FLUOTAR 10 × 0.30 DRY; HCX PL APO Lambda blue 63 × 1.4 OIL UV) and LAS AF software. Fluorophores were detected as follows (excitation wavelength/filtered emission wavelength): SR101 (SP5: 561 nm/580-620 nm) and AF568 (561 nm/580-620 nm). To improve the quality of confocal micrographs and reduce background fluorescence, we used a Kalman filter (averaging of four identical image sections). Images were processed using Fiji software .
Results were statistically analyzed using WinSTAT (R. Fitch Software). Data were tested for normal distribution with Kolmogorov-Smirnov test. In case of normal distribution, results were assessed by one-tailed, paired or non-paired Student’s t-tests. In the absence of a normal distribution, results were assessed by Wilcoxon test for paired or U-test (Mann-Whitney) for non-paired data. P represents the error probability, *P < 0.05, **P < 0.01, ***P < 0.001; n represents the number of experiments or cells/slices/animals. In case of multiple comparisons data were statistically analyzed by the tests described above under post hoc Šidák correction of critical values : two comparisons: Fig. 2a3, Fig. 4a3, Table 2; *P < 0.025, **P < 0.005, ***P < 0.0005; three comparisons: Fig. 2c3, Fig. 4c3, Additional file 2: Figure S2B 3 -E 3 , Table 3; *P < 0.017, **P < 0.0033, ***P < 0.0003. Data are provided as mean ± SEM.
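The Šidák-corrected critical values quoted above follow directly from the correction formula α_corr = 1 − (1 − α)^(1/m) for m comparisons; a short sketch reproduces them:

```python
def sidak_alpha(alpha: float, m: int) -> float:
    """Per-comparison significance level after Šidák correction for m comparisons."""
    return 1.0 - (1.0 - alpha) ** (1.0 / m)

# reproduces the critical values used in the figures and tables above
print(round(sidak_alpha(0.05, 2), 3))  # 0.025 (two comparisons)
print(round(sidak_alpha(0.05, 3), 3))  # 0.017 (three comparisons)
```

The same formula applied to the 0.01 and 0.001 base levels yields the corrected ** and *** thresholds listed above.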
SR101 labeling is used in many different brain regions to identify astrocytes in acute tissue slices [19, 41, 42]. We mentioned before that incubation of acute slices with SR101 results in labeling of putative astrocytes in the IC, but we did not electrophysiologically confirm the identity of these SR101+ cells . In IC and HC (CA1, stratum radiatum), SR101-labeled cells had a small soma with several branching processes. In HC, these cells exhibited strong branching, whereas in IC they appeared less complex (Fig. 1b, e). The SR101-labeled cells exhibited membrane properties of classical astrocytes, i.e. a highly negative membrane potential (EM: IC: −84.2 ± 0.3 mV, n = 207/116/101; HC: −81.8 ± 0.4 mV, n = 109/83/36) and a low RM (IC: 8.3 ± 0.7 MΩ, n = 207/116/101; HC: 9.7 ± 0.6 MΩ, n = 109/83/36). Due to the presence of voltage-activated outward currents, non-passive and passive astrocytes were identified (IC: 28%/72%, n = 207/116/101; HC: 59%/41%, n = 109/83/36; Fig. 1c, f), which is typical for this developmental stage.
Astrocytes in many brain regions express GlyTs , whereas GlyRs are only rarely present [20, 22]. To analyze the expression of functional GlyTs in IC and HC astrocytes, we first characterized the response of membrane current and potential upon glycine bath application. The wash-in caused a reversible glycine-induced inward current (IGly) that usually peaked within the first minute (IGly (max): IC: 173 ± 28 pA, n = 13/11/10; HC: 141 ± 20 pA, n = 10/9/6; P = 0.200; Fig. 2a). Upon prolonged glycine administration, IGly partially recovered to a newly formed steady-state level in some recordings. Similarly, glycine induced a reversible depolarization (ΔEM (Gly): IC: 3.0 ± 0.8 mV, n = 8/8/7; HC: 2.6 ± 0.5 mV, n = 6/6/5; P = 0.375; Fig. 2b). To test whether IGly and ΔEM (Gly) are mediated by GlyT1, we focally applied glycine in the absence and presence of sarcosine (Fig. 3a1-2). The competitive GlyT1 agonist itself caused an inward current by activation of the transporter and subsequently competed with applied glycine . Sarcosine reduced IGly (max) by about 60-70% (IC: 59 ± 2%, n = 12/5/5, P < 0.001; HC: 70 ± 4%, n = 4/4/2, P = 0.015; Fig. 3b3), demonstrating the presence of functional GlyT1 in IC and HC astrocytes.
As the inhibition of IGly (max) was incomplete and GlyT2 was occasionally reported to be present in astrocytes [40, 49], we analyzed transcripts for GlyTs in single astrocytes. GlyT1 mRNA was detected in all IC astrocytes and about half of the HC astrocytes (IC; n = 6/2/2; HC: n = 9/2/2; Fig. 3b). GlyT2 was never found in astrocytes but in the positive control (Additional file 3: Figure S3).
Interestingly, we never observed a glycine-induced outward current or changes in RM (tested every 30 s ≙ 0.033 Hz) upon the activation of putatively expressed GlyRs (not shown). However, glycine-induced outward currents and RM changes upon short term activation of GlyRs during the first seconds of glycine wash-in might be overlooked due to the relatively slow exchange of ACSF in the recording chamber and concomitant slow rise of the neurotransmitter concentration in combination with receptor desensitization . Thus, bath application of glycine is not a suitable approach to prove the presence of functional GlyRs. Therefore, we designed a new protocol for fast and focal pressure injection of neurotransmitters in combination with a voltage-clamp protocol including a higher frequency of test pulses assessing RM changes now at 1 Hz (Additional file 2: Figure S2A).
We first assessed the suitability of this protocol on IC and HC neurons. Bipolar shaped IC neurons and CA1 pyramidal cells (Additional file 1: Figure S1A) expressed time- and voltage-dependent inward and outward currents, respectively (Additional file 1: Figure S1B). Upon focal glycine application, IC neurons and CA1 pyramidal cells exhibited a transient, fast declining outward current (Additional file 2: Figure S2B 1 , D 1 ). This was paralleled by an increase in the offset current induced by the test pulses resembling a strong reduction of RM (IC: t1 − 97.9 ± 0.5%, n = 11/4/4, P < 0.001; HC: t1 -41.2 ± 6.2%, n = 4/2/2, P = 0.004; Additional file 2: Figure S2B 2-3 , D 2-3 ). In the prolonged presence of glycine, RM of IC neurons recovered partially, whereas RM of CA1 pyramidal cells recovered completely (IC: t10: −90.9 ± 2.5% of resting RM, P < 0.001 compared to t0; P = 0.004 compared to t1; HC: t10: −4.5 ± 7.0% of resting RM, P = 0.285 compared to t0; P = 0.006 compared to t1). Both cases indicate desensitization of GlyRs (Additional file 2: Figure S2B 3 , D 3 ), as previously reported for neurons in both regions [50, 51].
Subsequently, we used the focal application protocol on IC and HC astrocytes. Glycine induced an inward but no outward current at any time point during the 10 s application (Fig. 3c1-2). Furthermore, the offset current induced by the test pulses did not change. At t1 (1 s after glycine application), RM was not reduced (IC: +8.4 ± 3.6%, n = 15/4/4, P = 0.018; HC: +2.9 ± 4.0%, n = 6/5/3, P = 0.252; Fig. 3c3). Thus, RM was glycine-independent arguing against an activation of GlyRs. Taken together, IC and HC astrocytes expressed functional GlyT1, whereas GlyRs were only present in IC and HC neurons. Data are summarized in Tables 2 and 3.
GATs are present in astrocytes of various brain regions [15, 16]. Here, we analyzed the expression of different functional GATs in IC and HC astrocytes. GATs and GABAARs mediate – under our experimental conditions – an inward and outward current, respectively. The wash-in of GABA induced a transient inward current (IGABA) that peaked usually within the first minute (IGABA (max); Fig. 4a1-2). Notably, IGABA (max) was larger in HC astrocytes (IC: 327 ± 35 pA, n = 14/11/10; HC: 504 ± 67 pA, n = 7/7/5; P = 0.009; Fig. 4a3). Upon prolonged application, IGABA recovered occasionally to a lower steady-state level in some recordings. Similar to IGABA, GABA induced a reversible depolarization (IC: ΔEM (GABA): 7.2 ± 0.8 mV, n = 7/7/7; HC: ΔEM (GABA): 10.6 ± 1.6 mV, n = 7/7/5; P = 0.041; Fig. 4b). Both IGABA and ΔEM (GABA) indicate the presence of functional GATs. Noticeably, GABA-induced transients were 2-4-fold larger than the above-described glycine-induced transients. In addition, this difference was more prominent in HC astrocytes than in IC astrocytes (Table 2).
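The 2-4-fold difference between GABA- and glycine-induced transients can be made explicit by forming the IGABA (max)/IGly (max) ratio from the means reported above; the sketch below also propagates the SEMs to first order, under the simplifying assumption of uncorrelated errors:

```python
import math

def ratio_with_sem(a, sem_a, b, sem_b):
    """Ratio a/b with first-order SEM propagation (uncorrelated errors)."""
    r = a / b
    sem_r = r * math.sqrt((sem_a / a) ** 2 + (sem_b / b) ** 2)
    return r, sem_r

# mean ± SEM of the maximal bath-application currents (pA) reported above
ic_ratio, ic_sem = ratio_with_sem(327, 35, 173, 28)  # IC: I_GABA(max)/I_Gly(max) ≈ 1.9
hc_ratio, hc_sem = ratio_with_sem(504, 67, 141, 20)  # HC: ≈ 3.6
```

The roughly twofold higher ratio in HC astrocytes quantifies the statement that the GAT/GlyT imbalance is more prominent in the HC than in the IC.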
To assess the different GAT isoforms expressed in IC and HC astrocytes, we focally applied GABA and analyzed the sensitivity of IGABA to the non-competitive GAT-1 and GAT-3 antagonists NO711 and SNAP5114, respectively (Fig. 5a1-2). The two antagonists reduced IGABA (max) by about 20-40% (IC: NO711: 19 ± 4%, n = 4/4/2, P = 0.007; SNAP5114: 22 ± 4%, n = 4/4/4, P = 0.003; HC: NO711: 28 ± 6%, n = 8/8/5, P < 0.001; SNAP5114: 43 ± 6%, n = 7/7/4, P = 0.001; Fig. 5a3), showing the presence of functional GAT-1 and GAT-3. NO711 and SNAP5114 themselves had no effect on the membrane current. Simultaneous inhibition of GAT-1 and GAT-3 led to an incomplete reduction of IGABA (max) (Fig. 5a1-2). This can result either from the low antagonist concentrations that were chosen to ensure specificity of the substances or from the presence of further GATs, i.e. GAT-2 (Slc6A13) and BGT-1 (Slc6A12) . The latter possibility was addressed by analyzing transcripts for the four cloned GATs in single astrocytes. All tested IC astrocytes exhibited transcripts for GAT-1 and GAT-3, whereas these transporters were present in about half of the HC astrocytes. It has to be pointed out that transcripts for GAT-1 and GAT-3 were found in 3/7 HC astrocytes, whereas they were not detected in 3/7 cases. In those cells, transcripts for Kir4.1 were found, which proved successful RNA extraction. One HC astrocyte expressed only transcripts for GAT-1. However, transcripts for GAT-2 and BGT-1 were only detected in the positive control (Additional file 3: Figure S3), but not in individual IC or HC astrocytes (IC: n = 3/2/2; HC: n = 7/6/5; Fig. 5b).
Astrocytes, for example in the HC, express GABAARs [21, 22]. Accordingly, IGABA (max) might be underestimated if GABAAR-mediated Cl− influx causes an outward current that counteracts the GAT-mediated inward current. To test this, we performed fast and focal pressure injection of GABA and assessed RM changes.
Positive controls on GABAAR-expressing IC and HC neurons [30, 50, 52] showed the suitability of the experimental configuration to reveal GABAAR activation. Focal GABA application induced a transient, fast declining outward current (Additional file 2: Figure S2C1, E1). This was paralleled by a strong increase in the offset current induced by the test pulses, resembling a pronounced reduction of RM (IC: t1: −98.9 ± 0.1%, n = 11/6/6, P < 0.001; HC: t1: −73.5 ± 5.7%, n = 5/3/2, P < 0.001; Additional file 2: Figure S2C2-3, Additional file 2: Figure S2E2-3). In the prolonged presence of GABA, RM of IC neurons recovered partially, indicating minimal desensitization of GABAARs (t10: −96.4 ± 0.4% of resting RM, P < 0.001 compared to t0; P < 0.001 compared to t1; Additional file 2: Figure S2C3) as previously reported . In contrast, RM of CA1 pyramidal cells recovered completely, indicating strong desensitization of GABAARs (t10: +2.8 ± 9.4% of resting RM, P = 0.250 compared to t0; P = 0.003 compared to t1; Additional file 2: Figure S2E3) as previously reported [53, 54].
Subsequently, we performed focal application and analyzed putative GABAAR-mediated RM changes in IC and HC astrocytes (Fig. 5c). At any time, GABA induced an inward but no outward current (Fig. 5c1-2). In IC astrocytes, the offset current induced by the test pulses did not change (Fig. 5c1). Accordingly, at t1, RM was not reduced (+1.1 ± 4.4%, n = 15/5/5, P = 0.403; Fig. 5c3). Thus, RM was GABA-independent, arguing against an activation of GABAARs. In HC astrocytes, however, GABA increased the offset current in response to the test pulses (Fig. 5c2). In turn, RM was reduced (t1: −7.6 ± 1.1%, n = 8/5/2, P < 0.001; Fig. 5c3), demonstrating activation of GABAARs in HC astrocytes. In the prolonged presence of GABA, RM recovered completely, indicating desensitization of GABAARs (t10: −2.4 ± 1.3% of resting RM, P = 0.055 compared to t0; P < 0.001 compared to t1; Fig. 5c3). Thus, IGABA (max) was not contaminated by GABAAR activation as it was not determined within the first 10 s of GABA wash-in. Taken together, IC and HC astrocytes co-expressed functional GAT-1 and GAT-3, whereas GABAARs were only found in HC astrocytes. Data are summarized in Tables 2 and 3.
As we observed that the IGABA (max)/IGly (max) ratio was larger in HC compared to IC (Table 2), we asked whether there are additional differences between those regions regarding transporter kinetics. Thus, we analyzed the rise time (10-90%) and decay time (90-10%) of IGly and IGABA in IC and HC astrocytes resulting from focal application of glycine and GABA (Fig. 6a1, b1). The rise time of IGly was much shorter in HC astrocytes (IC: 1.32 ± 0.08 s, n = 12/5/5; HC: 0.70 ± 0.15 s, n = 8/7/4; P = 0.002; Fig. 6a2). Additionally, the decay time of IGly was shorter in HC astrocytes, too (IC: 11.83 ± 0.83 s, n = 12/5/5; HC: 8.35 ± 0.64 s, n = 8/7/4; P = 0.002; Fig. 6a3). Together, HC astrocytes exhibited faster kinetics for IGly.
Similarly, we analyzed the kinetics of IGABA (Fig. 6b). Here, IC astrocytes exhibited a much shorter rise time (IC: 1.15 ± 0.10 s, n = 8/8/8; HC: 2.29 ± 0.27 s, n = 19/19/6; P < 0.001; Fig. 6b2). The decay time of IGABA was not different between IC and HC astrocytes (IC: 12.29 ± 0.68 s, n = 8/8/8; HC: 11.17 ± 0.54 s, n = 19/19/6; P = 0.123; Fig. 6b3). Taken together, our data demonstrate that transporter-mediated currents were heterogeneous with respect to glycine and GABA as well as the brain region. HC astrocytes exhibited faster IGly kinetics, whereas IC astrocytes partially showed faster IGABA kinetics. Data are summarized in Table 4.
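The 10-90% rise and 90-10% decay times can be extracted from a current transient via threshold crossings relative to the peak; below is a minimal sketch applied to a synthetic inward current (not our recorded traces), assuming a single-peaked, baseline-subtracted transient:

```python
import numpy as np

def rise_decay_times(t, i, frac_lo=0.1, frac_hi=0.9):
    """10-90% rise and 90-10% decay times of a transient (inward) current.

    Operates on the magnitude of the baseline-subtracted current and
    assumes a single peak; crossings are located on the sampled grid.
    """
    mag = np.abs(i - i[0])                      # baseline-subtracted magnitude
    peak = int(np.argmax(mag))
    lo, hi = frac_lo * mag[peak], frac_hi * mag[peak]
    # rise: first crossings of the 10% and 90% levels before the peak
    t10 = t[np.argmax(mag[:peak + 1] >= lo)]
    t90 = t[np.argmax(mag[:peak + 1] >= hi)]
    # decay: first crossings of the 90% and 10% levels after the peak
    after = mag[peak:]
    t90d = t[peak + np.argmax(after <= hi)]
    t10d = t[peak + np.argmax(after <= lo)]
    return t90 - t10, t10d - t90d

# synthetic transient: linear rise over 2 s, then exponential decay (tau = 5 s)
t = np.linspace(0.0, 30.0, 3001)
i = -np.where(t < 2.0, t / 2.0, np.exp(-(t - 2.0) / 5.0))
rise, decay = rise_decay_times(t, i)
```

For the synthetic trace, the rise time is 1.6 s by construction, and the 90-10% decay time of an exponential equals tau·ln(9) ≈ 11 s, which the crossing-based estimate recovers to within one sample.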
In summary, our results show that SR101-labeled cells in the IC and HC exhibited properties of classical astrocytes. In all IC and about half of the HC astrocytes, GlyT1, GAT-1, and GAT-3 were present, whereas GlyT2, GAT-2, and BGT-1 were not found. In both regions, astrocytes exhibited a stronger GAT than GlyT activity. However, in HC astrocytes the IGABA (max)/IGly (max) ratio was remarkably higher. In comparison to IC astrocytes, HC astrocytes showed faster kinetics for the transport of glycine and slower kinetics for the transport of GABA. Finally, GlyRs could not be detected in astrocytes of IC and HC. However, expression of GABAARs was heterogeneous: it was found in HC but not in IC astrocytes.
In the present study, we investigated the expression and function of GlyTs and GATs in astrocytes from IC and HC. In both regions, astrocytes generally expressed the three inhibitory neurotransmitter transporters GlyT1, GAT-1 and GAT-3, whereas GlyT2, GAT-2, and BGT-1 were not detected. Remarkably, IC astrocytes exhibited larger IGly (max) and smaller IGABA (max) compared to HC astrocytes. In turn, this resulted in a higher IGABA (max)/IGly (max) ratio in HC astrocytes.
Astrocytes were labeled with SR101, by which classical astrocytes in acute tissue slices – containing the superior olivary complex (SOC) or the HC – can be identified [19, 41, 42, 55]. We mentioned before that in the IC SR101 labels small and highly branched cells . However, their identity had not yet been verified by electrophysiological recordings. Here we show that SR101-labeled IC cells exhibit a highly negative EM and a low RM. They are not NG2 glia, as these exhibit completely different electrophysiological properties, i.e. a more positive EM, a tremendously higher RM, and currents through voltage-activated sodium channels [42, 56–58]. Furthermore, they are unlikely to be oligodendrocytes, as the latter are, if at all, only weakly labeled by SR101 . In contrast, SR101-labeled IC cells exhibited a non-linear or linear current-voltage relationship corresponding to non-passive and passive astrocytes, respectively, which are found throughout the auditory brainstem (Fig. 1) [19, 41, 55, 59, 60]. Furthermore, these cells, from now on termed IC astrocytes, were distributed homogeneously within the nucleus (Fig. 1) like astrocytes in SOC nuclei [19, 41]. HC astrocytes exhibited properties as reported in previous studies, e.g. [42, 56].
Glycine and GABA activate respective transporters that mediate an inward current and a concomitant depolarization due to their stoichiometry: 1 glycine/1 GABA : 2 Na+ : 1 Cl− [15, 17]. Both inward current and depolarization sometimes partially recovered in the prolonged presence of the agonist (Figs. 2 and 4). This was observed before in LSO astrocytes and may be due to a reduced driving force . Both IC and HC astrocytes showed sarcosine-sensitive IGly (max), demonstrating the presence of functional GlyT1 (Fig. 3). Sarcosine is a competitive agonist and therefore inhibited only about 60-70% of IGly (max) . Thus, the co-expression of the neuron-typical GlyT2 could not be excluded per se. GlyT2 was reported to be present occasionally in astrocytes [40, 49]. However, we never found transcripts for GlyT2 in IC and HC astrocytes, indicating the absence of GlyT2. GlyT1 mRNA was present in all IC astrocytes, sufficiently explaining IGly. However, GlyT1 transcripts were found in only about half of the HC astrocytes (Fig. 3). There are several possible explanations: 1) Although the scRT-PCR reliably detected GlyT1 transcripts in the positive controls, it was possibly not sensitive enough to detect single transcripts in all HC astrocytes. 2) There is effectively a mosaic expression of GlyT1. However, all recorded HC astrocytes exhibited an IGly (Fig. 2). Thus, HC astrocytes putatively express further transporters that are capable of transporting glycine. The neutral amino acid transporter ASCT2 (Slc1A5) as well as the sodium-coupled neutral amino acid transporters (system N) SNAT3 (Slc38A3) and SNAT5 (Slc38A5) are expressed by astrocytes and transport glycine, but are electroneutral and accordingly do not generate currents [62–65]. 3) HC astrocytes are extensively coupled [66–69] and allow direct electrical communication between neighboring astrocytes [70–72].
Here, about half of the HC astrocytes lacked GlyT1 expression, but can be expected to be surrounded by and coupled to GlyT1 expressing HC astrocytes. Therefore, GlyT1 negative astrocytes might indirectly experience IGly.
Likewise, IC and HC astrocytes exhibited NO711- and SNAP5114-sensitive IGABA (max), showing the co-expression of functional GAT-1 and GAT-3 in both regions (Fig. 5). Hitherto, GAT-1 and GAT-3 in the HC were attributed to interneurons and astrocytes, respectively [21, 37]. To our surprise, we found prominent expression of functional GAT-1 in HC astrocytes. NO711 and SNAP5114 inhibited IGABA (max) by about 20 to 40% (Fig. 5), which is similar to our former study on LSO astrocytes . However, simultaneous administration of NO711 and SNAP5114 did not completely abolish IGABA (max) (Fig. 5). Both antagonists dose-dependently inhibit the respective GATs . As we used low drug concentrations here to retain the specificity of the GAT inhibitors, a complete block was not expected. However, up to that point our data did not exclude the possibility of co-expression of further GATs, such as GAT-2 or BGT-1. The latter are predominantly found at the meninges and neuronal somata, respectively . In accordance, we found only transcripts for GAT-1 and GAT-3, but not for GAT-2 and BGT-1, in IC and HC astrocytes. These results indicate that IGABA was solely mediated by GAT-1 and GAT-3 (Fig. 5). Surprisingly, GAT-1 and GAT-3 mRNA exhibited a mosaic pattern in HC astrocytes. In 3/7 cases HC astrocytes did not exhibit transcripts for any GAT. There are two possible explanations: 1) Although the scRT-PCR detected transcripts in the positive controls, it was not sensitive enough to detect single transcripts at the single-cell level. 2) There is effectively a mosaic expression pattern. However, the second explanation contrasts with the finding that all HC astrocytes exhibited IGABA that was always sensitive to the GAT-1 and GAT-3 inhibitors NO711 and SNAP5114, respectively (Figs. 4 and 5). Again, the extensive coupling of and direct electrical communication between HC astrocytes [66–72] could explain why IGABA was recorded in all cells independent of GAT expression.
The co-expression of GlyTs and GATs in the same astrocyte raises the question of transporter interference. Such interference between different transporters has been observed before [6, 73, 74]. In a previous study on LSO astrocytes, we could show that GlyT and GAT activity influence each other. The reciprocal reduction of activity likely reflects changes in their commonly used gradients for Na+ and Cl−: those gradients become weakened upon transporter activation, thereby reducing the driving force for transport. Especially in the IC, where neurons simultaneously receive glycinergic and GABAergic synaptic inputs [29, 30], transporter interference might occur during synchronous activation of astrocytic GlyTs and GATs. However, it remains to be elucidated to what extent this interplay takes place and how altered neurotransmitter clearance putatively modulates neuronal signaling [15, 17, 18].
Taken together, all IC and about half of the HC astrocytes expressed functional GlyT1, GAT-1, and GAT-3. In this respect, these astrocytes can express the same combination of inhibitory neurotransmitter transporters as astrocytes located in the LSO, thalamus, and cortex, Bergmann glia in the cerebellum, or Müller cells in the retina [15, 19, 75–80]. The potentially heterogeneous expression in HC astrocytes could be indicative of functional domains in which glycinergic transmission arising from excitatory projections and GABAergic transmission from interneurons are segregated from each other.
Both glycine and GABA act on their respective transporters and ionotropic receptors. While activation of GlyTs and GATs by exogenously applied neurotransmitters necessarily causes an inward current, activation of GlyRs and GABAARs can result in either an inward or an outward current. The underlying Cl− efflux or influx depends on [Cl−]i and consequently on ECl. Under physiological conditions, astrocytic [Cl−]i amounts to about 30 mM, causing an inward current and concomitant depolarization upon receptor activation. However, our pipette solution contained 2 mM Cl−, so receptor activation would have caused an outward current. In our recordings, we never observed glycine- or GABA-induced outward currents in IC and HC astrocytes (e.g. Figs. 2 and 4), which was surprising, as at least HC astrocytes express functional GABAARs [21, 22]. Two possible scenarios could explain this discrepancy: 1) the GABAAR-mediated outward current was too small and was consequently masked by the large GAT-mediated inward current; this in turn would suggest that the GAT-mediated inward current was underestimated. 2) GABAARs rapidly desensitize [53, 54, 82]; in combination with the slow wash-in of GABA in our experiments, this early desensitization might hamper the accurate detection of GABAAR activation. To answer the question of masked activation and/or desensitization of ionotropic receptors, we measured RM changes that could result from increased membrane permeability (see Methods). Proof-of-principle experiments on GlyR- and GABAAR-expressing IC and HC neurons validated the method (Additional file 2: Figure S2). Our results convincingly demonstrated the capability to detect RM changes upon GlyR and GABAAR activation with the utilized test pulse protocol.
With this tool at hand, we were able to detect GABAAR activation in HC astrocytes (Fig. 5). GABAAR activation was detected as a temporary RM reduction that vanished within 10 s, indicating receptor desensitization. However, we never observed any outward current that would have to arise from Cl− influx due to the low [Cl−]i of the intracellular solution. We reason that any small Cl− influx-mediated outward current is instantly masked by the strong electrogenic transporter current. Nonetheless, IGABA (max), which was measured no earlier than 10 s after application onset, was not contaminated by GABAAR-mediated currents. The RM reduction in HC astrocytes was rather small (~8%) compared to HC neurons (~74%). Astrocytes express various K+ channels that are constitutively open under resting conditions (inwardly rectifying K+ channels, two-pore-domain K+ channels) [46, 83]. In turn, these channels cause the very high K+ conductance observed in astrocytes. Accordingly, it is not surprising that the RM reduction was relatively small. At the same time, IC astrocytes exhibited no RM reduction upon GABA application (Fig. 5). Thus, either GABAARs are absent or their abundance is simply too low to be relevant. Interestingly, using this method on LSO astrocytes, we detected a small RM reduction indicating the presence of GABAARs (Vanessa Augustin and Simon Wadle, unpublished). We previously reported that IGABA in LSO astrocytes mainly consists of GAT-mediated current. Similar to HC astrocytes, the GABA-induced RM reduction in LSO astrocytes vanished within 10 s after the beginning of GABA application. Thus, our previously reported IGABA (max) in LSO astrocytes was not contaminated by GABAAR activation.
Similarly, we used the same method to examine a possible influence of GlyR activation on our recorded IGly (max). We could show that neither IC nor HC astrocytes exhibited glycine-induced RM changes or outward currents (Figs. 2 and 3). Likewise, LSO astrocytes lack glycine-induced RM changes (Vanessa Augustin and Simon Wadle, unpublished). Accordingly, functional GlyRs appear to be absent in those astrocytes. This is consistent with the observation that GlyRs have been described only in astrocytes located in the most caudal brain regions, i.e. the spinal cord and caudal brainstem (ventral respiratory group) [20, 24, 25]. However, this contrasts with the wide distribution of GABAARs throughout the brain. In summary, IGly (max) and IGABA (max) were not affected by GlyRs and GABAARs, respectively, and the transporter currents were accordingly not underestimated.
IC and HC astrocytes differ in their capacity to take up glycine and GABA (Table 2). While there is no statistical difference in glycine transport between the two brain regions, GABA transport is stronger in HC astrocytes. In the LSO, which is located more caudally than the IC and HC, astrocytes exhibit a similar capability to take up glycine; however, their capacity for GABA clearance is much lower. Thus, astrocytic IGABA (max) increases from caudal to rostral brain regions (LSO < IC < HC). Consequently, the ratio IGABA (max)/IGly (max) is elevated in more rostral brain regions (HC (3.6) > IC (1.9) > LSO (1.6; data from )). This was expected, as the need to take up GABA rather than glycine is higher in rostral brain regions, which arises from the shift from glycine to GABA as the predominant inhibitory neurotransmitter [2, 3, 9, 10, 29, 30]. Notably, GlyT-mediated IGly (max) substantially persists in HC astrocytes. This allows the clearance of glycine that is co-released from excitatory presynaptic terminals [11, 12]. Taken together, IGly (max) is similar in the three brain regions, whereas IGABA (max) as well as the IGABA (max)/IGly (max) ratio are region-dependent and increase with the prevalence of GABA as the inhibitory neurotransmitter.
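The regional ranking above rests on a simple ratio of maximal transporter currents. As a rough illustration (this is not the paper's analysis code), the following Python sketch computes IGABA(max)/IGly(max) from placeholder current amplitudes chosen so the ratios reproduce the values quoted in the text (~3.6 for HC, ~1.9 for IC, ~1.6 for LSO); the absolute amplitudes are hypothetical, not measured data.

```python
def transporter_ratio(i_gaba_max, i_gly_max):
    """Ratio of maximal GABA- to glycine-induced transporter currents."""
    return i_gaba_max / i_gly_max

# Placeholder mean current amplitudes (arbitrary units), chosen only so
# the resulting ratios match the region means reported in the Discussion.
regions = {"HC": (360.0, 100.0), "IC": (190.0, 100.0), "LSO": (160.0, 100.0)}

for name, (i_gaba, i_gly) in regions.items():
    print(f"{name}: IGABA(max)/IGly(max) = {transporter_ratio(i_gaba, i_gly):.1f}")
```

The ratio rises from caudal to rostral regions, mirroring the shift toward GABA as the dominant inhibitory transmitter.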
Besides inter-regional differences in amplitude, we additionally found region-dependent differences in the kinetics of transporter-mediated currents (Table 4). Whereas IC astrocytes exhibit similar kinetics for the transport of glycine and GABA, HC astrocytes are marked by faster glycine and slower GABA transport. However, LSO astrocytes generally outperform IC and HC astrocytes regarding the kinetics of GlyTs (rise time: 1.05 ± 0.18 s; decay time: 4.88 ± 1.11 s; n = 6/6/6) and GATs (rise time: 0.61 ± 0.13 s; decay time: 4.52 ± 0.52 s; n = 12/12/11; data from ). GlyTs and GATs can be modulated by several mechanisms: e.g., enhancement of transporter activity can be achieved by transporter glycosylation and [Ca2+]i elevation [84–86], whereas a decrease of transporter activity can be caused by activation of protein kinase C and de-glycosylation [84, 85, 87–89]. Whether one or more of those mechanisms are relevant in astrocytes of the three brain regions is as yet unexplored. However, the different transport kinetics correlate with the different precision of signal processing in these three brain regions. The auditory system in general requires temporally precise coding to correctly compute, e.g., interaural time and level differences in the medial superior olive and the LSO, respectively, and its synapses show relatively weak depression, allowing high rates of synaptic transmission [4, 90–92]. Furthermore, synaptic signaling in the LSO is considerably faster and more precise compared to the hippocampus. Like the LSO, the IC belongs to the auditory brainstem; however, it is not used for sound source localization but serves as an information hub. Thus, the IC can tolerate slower and less precise synaptic transmission. As the rate of neurotransmitter transporter activity determines the extent of synaptic transmission [17, 18], the fast transmitter uptake into LSO astrocytes, which terminates quick synaptic transmission, favors fast and precise neuronal signaling.
In contrast, synaptic transmission in the IC and HC is less precise, and neurotransmitter uptake is not as fast. Thus, our data suggest that the expression and kinetics of astrocytic inhibitory neurotransmitter transporters are adjusted to the requirements of the local circuitry.
In summary, our results demonstrate the expression of functional GlyT1, GAT-1, and GAT-3 in all IC astrocytes and in about half of the HC astrocytes. In both regions, the activity of GATs is stronger than that of GlyTs. Whereas IGly (max) is comparable in both regions, IGABA (max) is much larger in HC astrocytes. Accordingly, the IGABA (max)/IGly (max) ratio is markedly elevated in HC astrocytes. Furthermore, astrocytic GlyTs and GATs in the IC as well as the HC exhibit slower transporter kinetics in comparison to those transporters in LSO astrocytes, thereby reflecting the regionally differing demands for temporal precision of synaptic transmission. Altogether, our results show that astrocytes do not uniformly express inhibitory neurotransmitter transporters, but adapt region-specifically to the requirements of the local circuitry.
We thank Jennifer Winkelhoff and Ayse Maraslioglu for excellent technical assistance.
This study was supported by the German Research Foundation (DFG Priority Program 1608 “Ultrafast and temporally precise information processing: Normal and dysfunctional hearing”, Ste. 2352/2-1), the Nachwuchsring of TU Kaiserslautern, and the University of Milan funding the internship of EG.
JS designed experiments and figures. EG, VA, SLW, JB, and SB performed experiments and analyzed data. GS helped to establish single-cell RT-PCR. JS wrote the manuscript. EG, SLW, SB, JH, and GS contributed to the writing. All authors read and approved the final manuscript.
Mice were treated in accordance with the German law for conducting animal experiments and the NIH guidelines for the care and use of laboratory animals.
(1) Internet technology is changing rapidly nowadays, and user demands are not fixed. The flexibility of the incremental model can adapt to this change much better than the waterfall model or the rapid prototyping model. Moreover, most companies are not able to build full-featured software all at once. Therefore, using the incremental model for development fits the current trend of software development very well.
(2) Software development now moves faster and faster: the company that first develops the software's core functionality can quickly occupy the market, so the customer soon builds up a user base and captures a portion of the market.
(3) At the same time, the incremental model enhances communication between users and developers and between customers and users, helping to pin down the customer's functional goals for the software, which in turn drives rapid software updates.
(4) Gradually increasing the product's functionality gives users more time to learn and adapt to the new software, thereby reducing the disruption that new software may bring to the customer's organization. This greatly benefits both customers and users: users can adapt to new releases more quickly, and customers gain more users for their software.
In summary: for most Internet startups, using the incremental model to develop software is beneficial and carries essentially no downside.
Context: The assessment of left ventricular ejection fraction (LVEF) is the most important component in the prediction and detection of cardiotoxicity in patients undergoing cancer chemotherapy. However, LVEF may not be sensitive enough to pick up cardiotoxicity early, since the drop in LVEF occurs in the last, irreversible stage. An early 10%–15% reduction in global longitudinal strain (GLS) by speckle tracking echocardiography has been proposed to be the earliest indicator of myocardial dysfunction. Aims: The aim of this study was to compare the early detection of cardiotoxicity (at 0 and 3 months) using the drop in LVEF with two-dimensional echocardiography (2DE), three-dimensional echocardiography (3DE), and GLS techniques. Settings and Design: This was a prospective cohort study of patients attending the cardiooncology clinic of a tertiary care institute. Subjects and Methods: Seventy-five newly diagnosed cases of cancer of various etiologies, in whom cardiotoxic chemotherapy drugs had to be used, were included from January 2016 to June 2016. Statistical Analysis Used: Data were analyzed with Pearson's Chi-square test, mean, standard deviation, and 95% confidence interval. Results: A total of 17 (22.6%) of the 75 subjects had a drop detected by GLS (<−18.9%), as compared to 5 (6.6%) by 2DE and 7 (9.3%) by 3DE at 3 months, with statistically significant P values (P = 0.0001). In the 17 subjects who had a significant fall in GLS at 3 months, the mean GLS was −16.17 ± 1.55%, with a significant reduction of 13.48% from baseline. Conclusion: The reduction in GLS preceded the decrease in ejection fraction. Early detection allows modification of chemotherapeutic regimens and medical intervention, preventing irreversible cardiac damage.
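The 13.48% figure in the Results is a relative reduction in GLS magnitude, not an absolute percentage-point change. A minimal sketch of that arithmetic follows; the baseline value used is an assumption for illustration only (the abstract reports only the 3-month mean of −16.17% and the relative drop), so treat the numbers as hypothetical.

```python
def relative_reduction(baseline_gls, followup_gls):
    """Percent reduction in GLS magnitude (GLS values are negative by convention)."""
    return (abs(baseline_gls) - abs(followup_gls)) / abs(baseline_gls) * 100.0

# Hypothetical baseline of -18.69% paired with the reported 3-month mean of
# -16.17% yields roughly the 13.48% relative reduction quoted in the abstract.
print(round(relative_reduction(-18.69, -16.17), 2))
```

A drop of this size crosses the 10%–15% relative-reduction window that the Context cites as the proposed early marker of myocardial dysfunction.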
Context: Drug interactions are more common in cancer patients because they take several medicines, such as hormonal agents, anticancer drugs, and adjuvant drugs to treat comorbidities. Objectives: To assess the pattern of potential drug–drug interactions (pDDIs) in an oncology unit of a tertiary care teaching hospital. Materials and Methods: A prospective observational study was carried out for 8 months (August 2016 to March 2017). Data on drugs were collected by reviewing the patients' medical records. Drug interaction screening tools, namely the Micromedex electronic database system, the drugs.com interaction checker, and the Medscape multidrug interaction checker, were used to identify and analyze the pattern of pDDIs. Results: A total of 180 patients were enrolled during the study period. Among them, 152 patients (84.44%) had pDDIs. Male predominance (64.4%) was noted over female (35.6%). According to the severity classification of pDDIs, the majority were moderate (63.1%), followed by major (26.1%) and minor (10.1%) interactions. Interactions potentially causing QT interval prolongation and irregular heartbeat were the most common outcomes of pDDIs. Conclusions: The incidence of pDDIs among cancer patients was 84.44%. The most common interacting drug pair in the study population was dexamethasone + aprepitant [41 (26.9%)], followed by cisplatin + dexamethasone [32 (21.05%)] and other interacting pairs. To avoid harmful effects, screening for pDDIs should take place before administering therapy.
Objective: To study and compare the national and regional incidences and risk of developing neoplasms of individual urogenital sites using 2012–2014 reports from the National Cancer Registry Programme (NCRP) data. Materials and Methods: The number of incident cases, age-adjusted rates (AARs), and cumulative risk (0–64 years) pertaining to urogenital neoplasms, along with the ICD-10 codes, were extracted. Data on the indicators, namely the number of incident cases, AARs, and the "one in how many persons develops cancer" measure, were summarized for both sexes in each of the cancer registries and presented region-wise in the form of ranges. Results: The proportion of urogenital neoplasms relative to all cancers was 12.51% in women and 5.93% in men. The risk of developing urogenital cancers for women was maximum (1 in 50) in the North-eastern region, followed by the Rural West, South, and North. For men, the risk of developing neoplasms of urogenital sites was highest (1 in 250). For neoplasms of the renal pelvis and ureter, both the incidence and risk were quite low for both sexes across all regions. Cervical neoplasms had the highest incidence (4.91–23.07) among female genital neoplasms, while the prostate had the highest incidence (0.82–12.39) among male genital neoplasms. Conclusion: Making people aware of urogenital neoplasms and their risk factors is important from a public health point of view. Centers that deal with the management of urogenital cases and/or the screening of genital neoplasms could serve as designated centers for creating such awareness.
Background: Globally, India has a high burden (20%) of oral cancer, with a 1% prevalence of premalignant lesions. Most cases are attributed to modifiable risk factors such as substance abuse (tobacco and alcohol), dietary deficiencies, and environmental exposures (solar radiation and air pollution), aggravated by delayed detection and care, especially in rural areas. Objective: The objective of the study was to examine the risk factors of oral cancer pathogenesis among the rural residents of Jodhpur, India, through an opportunistic oral screening approach at primary care facilities. Methodology: An unmatched case–control study was done at two randomly chosen rural health centres in Jodhpur, India. A total of 84 cases and 168 controls were included during the 6-month study period (2016). Randomly selected outpatient department attendees were interviewed and screened for oral cancer and premalignant lesions. A structured questionnaire interview along with a comprehensive oral, head, and neck examination was conducted. Data were analyzed using multivariate logistic regression, and confidentiality of the data was maintained. Results: The majority of the study participants were rural residents (82.9%) with poor socioeconomic status. Opportunistic oral screening revealed a variety of cancerous and precancerous lesions. The most common case pathologies were submucosal fibrosis (40.5%), inadequate mouth opening (35.7%), cheek bites (28.6%), and leukoplakia (23.8%), among others. Multivariate analysis suggested that tobacco intake (adjusted odds ratio = 13.6, P ≤ 0.01), dietary deficiency (7.4, <0.01), oral sepsis (7.0, <0.01), oral lesions (6.8, <0.01), and sun radiation exposure (9.5, <0.01) were significantly associated with oral cancer pathology. Conclusion: The study provides strong evidence that tobacco, dietary deficiency, oral sepsis and lesions, and sun radiation exposure are independent risk factors for oral cancer. It also reiterates the importance and applicability of opportunistic oral cancer screening at the primary care level.
Context: Head-and-neck cancers (HNCs) are the most common cancers in Indian cancer registries. However, there is huge variation and heterogeneity in the use of different types of smokeless tobacco (SLT) across India. Aims: The aims of this study were to investigate how different types of SLT use are distributed across Indian states and to examine their association with the incidence rates of different subsites of HNC. Settings and Design: Ecological analysis of the correlation between SLT prevalence and incidence rates from population-based cancer registries. Methods: Incidence data were extracted from the population-based cancer registries report of the National Cancer Registry Programme database 2012–2014. The current prevalence of SLT use for all Indian states and union territories was taken from the Global Adult Tobacco Survey 2009–2010. Statistical Analysis Used: Pearson's correlation coefficient was used to estimate the ecological correlation between the prevalence of the types of SLT use in the different regions of India and the age-adjusted incidence rates of different subsites of HNC. Results: In our brief analysis, we found a significant correlation between certain types of SLT use and subsites of HNC. Betel quid and tobacco use is correlated (r = 0.53) with oropharynx cancer incidence. Khaini use is correlated with hypopharynx cancer incidence (r = 0.48). Gutka use is correlated with mouth cancer incidence (r = 0.54). Oral tobacco use is correlated with mouth cancer incidence (r = 0.46). Other SLT use is correlated with hypopharynx cancer incidence (r = 0.47). Conclusions: The variations in SLT use across Indian states account for differences in the incidence rates of HNC subsites across the states. The inferences from this brief analysis can be used as a base to modify and design observational epidemiological studies in the future.
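The analysis above hinges on Pearson's correlation coefficient computed across states. As a self-contained sketch of how such an ecological correlation could be calculated, the following Python snippet implements the standard formula; the state-level numbers below are made up purely to exercise the function and are not data from the study.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical state-level values: SLT-use prevalence (%) vs. age-adjusted
# incidence rate (per 100,000). Purely illustrative, not the study's data.
prevalence = [5.0, 12.0, 20.0, 28.0, 35.0]
incidence = [1.1, 2.0, 2.6, 4.1, 4.8]
print(round(pearson_r(prevalence, incidence), 2))
```

In a real ecological analysis, each (x, y) pair would be one state's prevalence of a given SLT type and the corresponding age-adjusted incidence rate for an HNC subsite.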
Background: Tobacco has been the prime culprit behind most head-and-neck cancers in the world. Many laws have been implemented to control this menace, but this slow poison still persists. The effectiveness of these laws has always been a matter of concern to the authorities. The present study was conducted to observe compliance with the Cigarettes and Other Tobacco Products Act (COTPA) among public places, educational institutions, and tobacco vendors in Bengaluru city. Methodology: A cross-sectional, observational study was done to assess violations at public places, educational institutions, and tobacco vendors. Violations of Sections 4, 5, 6, and 7 of COTPA were assessed at 25 of each of these places in the eight zones of Bengaluru city. The study areas were chosen by the convenience sampling method, and the violations were recorded using a questionnaire. Data were analyzed in Microsoft Excel to find the percentage of violations. Results: Violations of COTPA Sections 4 and 5 were observed at 134 (67%) and 94 (47%) places, respectively. A total of 124 (62%) educational institutions had tobacco vendors within 100 yards, and only 30 (15%) had signboards prohibiting tobacco use. Around 14 tobacco vendors stocked beedis without proper pictorial warnings, which violated Section 7 of COTPA. Conclusion: For proper implementation of the COTPA laws, we should create awareness among the general population about the laws, what amounts to a violation, and the health hazards of tobacco use. Law enforcement personnel should act against those who violate the law. There is a need for sensitization workshops and advocacy for all stakeholders.
Context: Different schedules of concurrent chemotherapy with definitive radiotherapy in locally advanced carcinoma of the cervix. Aims: The aim was to evaluate the toxicity, compliance, and response of weekly versus tri-weekly cisplatin given concurrently with radiotherapy in locally advanced squamous cell carcinoma of the cervix. Subjects and Methods: One hundred and ten newly diagnosed, histopathologically confirmed squamous cell carcinoma cervix patients with International Federation of Gynecologists and Oncologists stage IIB to IVA were randomly distributed between a study group receiving 75 mg/m2 of cisplatin every 3 weeks for three cycles and a control group receiving 40 mg/m2 of weekly cisplatin for six cycles. Results: Patients in both arms tolerated the treatment well. At the completion of chemoradiotherapy, 83.63% of patients in the study group and 80% in the control group had a complete response, whereas 16.37% of the study group and 20% of the control group had a partial response; both differences were statistically insignificant (P > 0.05). Compliance was similar in both groups. The average time to complete radiotherapy was 54.63 days in the study group and 51.34 days in the control group. In the study group, 87.27% of patients completed all cycles of tri-weekly chemotherapy, whereas, in the control group, 80% completed all six cycles of weekly chemotherapy. The difference was not statistically significant (P = 0.30). Toxicity in terms of vomiting and grade 3–4 leukopenia and neutropenia was greater in the study group, which was statistically significant (P < 0.001, P = 0.04, and P = 0.03, respectively). Conclusions: Although the 3-weekly cisplatin schedule has longer intervals and sounds convenient, the weekly cisplatin regimen shows lower hematologic toxicity with similar disease response and compliance.
Aim: This study was planned to investigate the relationship between doxorubicin cardiomyopathy and the soluble Fas (sFas) level. Materials and Methods: Two groups of rats were included in the study. The control group was given physiological saline, while the study group was given doxorubicin. The rats, whose blood samples were taken weekly, were sacrificed and their myocardial tissues were removed. The tissues were examined for morphological changes and surface Fas expression, while the blood samples were examined for sFas levels. Results: In the study group, the sFas levels at weeks 2–9 were higher than those found at week 1, before drug administration, and the increase at weeks 2–7 was significant. In addition, sFas levels increased gradually each week during weeks 1–5 when compared with the values of the previous week, and the increase during the first 4 weeks was significant. After week 5, the values gradually decreased each week. The mean values of the study group at weeks 1–8 were higher than those of the control group, and the increases at weeks 2–8 were significant. The severe forms of interfibrillar hemorrhage, vascular dilatation, myocardial necrosis, inflammatory infiltration, and splitting of muscle fibers occurred at doses of 15, 15, 17.5, 20, and 22.5 mg/kg, respectively. Conclusions: As tissue injury increased, the cell-surface Fas expression and plasma sFas level, which rise in the acute phase of doxorubicin-related cardiotoxicity, subsequently decreased. The sFas level determined in the acute phase may be helpful in predicting existing injury and possible late-term problems.
Background: Pediatric gliomas comprise a clinically, histologically, and molecularly heterogeneous group of central nervous system tumors. The survival of children with gliomas is influenced by histologic subtype, age, and extent of resection. Tumor grade has emerged as the most important determinant of survival except in the young age groups. The aim of this study was to evaluate the role of a multidisciplinary therapeutic approach, including surgery and chemotherapy, and its impact on the outcome of pediatric patients with low-grade glioma (LGG). Procedure: Patients were prospectively enrolled in the study. All patients were below 18 years of age and diagnosed with LGG between July 2007 and June 2012. Upfront surgical resection was attempted in all tumors other than those at optic pathway sites. Systemic chemotherapy was given according to the CCG-A9952 protocol. Results: Total/near-total resection without adjuvant treatment was achieved in 105/227 patients (46.3%), while 49/227 patients (21.5%) underwent subtotal tumor resection, followed by chemotherapy for large residuals (n = 26). Follow-up only was indicated for asymptomatic/small residuals (n = 23). The diagnosis was made radiologically in 18/227 (7.9%) patients, 13/18 of whom had optic pathway glioma. The 3-year overall survival (OS) was 87.3%, versus 65.5% event-free survival (EFS), for the whole study population, with a follow-up period of 1–5 years. The OS and EFS for patients who underwent surgery with no adjuvant treatment (n = 128) were, respectively, 95.2% and 77.3%, versus 87.4% and 65.1% for the adjuvant chemotherapy group (n = 99) (P = 0.015 and P = 0.016 for OS and EFS, respectively). Conclusion: Pediatric LGGs comprise a wide spectrum of pathological and anatomical entities that carry a high rate of prolonged survival among children and adolescents. Surgical resection is the mainstay of treatment in most tumors. Combined chemotherapy can be an acceptable alternative when surgery is not safely feasible.
Liver cancer has been studied in clinical medicine and medical science for a long time. In recent years, many newly emerging biomedical technologies have helped to better assess liver cancer. Among these, the advanced cell technologies for the assessment of liver cancer, in particular organoid technology, are very interesting. In fact, organoid culture is an advanced cell research technique that can be useful for studying many medical disorders, and organoids can be applied to study the pathophysiology of many cancers. Their application to the study of liver cancer is a very interesting issue in hepatology. In this short article, the author summarizes and discusses applied organoid technology for studying various kinds of liver cancer. Applications can be seen in primary hepatocellular carcinoma, metastatic cancer, cholangiocarcinoma, and hepatoblastoma, as well as other rare liver cancers.
Myeloproliferative neoplasms (MPNs) are clonal disorders derived from abnormal hematopoietic stem cells that result in an excessive production of blood cells. This MPN group encompasses different diseases with overlapping clinical and biologic similarities. The majority of the conventional therapies for MPN are palliative in nature. However, with the discovery of the Janus kinase 2 (JAK2) mutation and the development of targeted JAK1/2 inhibition therapy, the therapeutic options in the treatment landscape have changed dramatically. This article presents the revised Indian MPN Working Group consensus recommendations. It highlights recent findings that have defined the state of the art of diagnosis and therapy in the MPN area, including the identification of new driver and prognostic mutations, treatment goals in the management of myelofibrosis and polycythemia vera (PV), the role of the recently approved targeted tyrosine kinase inhibitor ruxolitinib in PV, and special issues such as MPN considerations in patients with splenic vein thrombosis and the management of the disease in pregnancy.
Pazopanib tablets are known to cause hypopigmentation and hyperpigmentation, as per various literature reports. We report here a case of reversible hypopigmentation with pazopanib in a patient treated for spindle cell sarcoma. The patient did not have any clinical symptoms, the change being of cosmetic significance only.
Prostate carcinoma is the second most common cancer among men worldwide. Although prostate carcinoma is common, its presentation resembling retroperitoneal fibrosis is uncommon. We report a patient with prostate carcinoma mimicking retroperitoneal fibrosis. An elderly male presenting in a volume overload state with features of obstructive uropathy was diagnosed as a case of prostate carcinoma. Magnetic resonance imaging was suggestive of retroperitoneal fibrosis. The presentation of prostate carcinoma as retroperitoneal fibrosis is rare.
Spontaneous pneumomediastinum and subcutaneous emphysema in the neck, axilla, and chest do not commonly occur after neoadjuvant cisplatin/etoposide chemotherapy, followed by radiotherapy, and adjuvant cisplatin/etoposide chemotherapy in patients with olfactory neuroblastoma. There are few case reports of pneumomediastinum induced by and occurring during bleomycin/etoposide/cisplatin chemotherapy in testicular cancer. The present case differs from the previous cases in that our patient developed spontaneous pneumomediastinum and subcutaneous emphysema in the neck, axilla, and chest approximately 2 months after completion of chemoradiotherapy for olfactory neuroblastoma. These conditions may have been treatment induced or caused by breath-holding after forceful inspiration. The latter would have created a massive pressure gradient between the alveoli and surrounding structures, causing alveolar rupture, and subsequent passage of air into the mediastinum and subcutaneous tissue of the neck, axilla, and chest.
Epithelioid sarcoma (ES) first described by Enzinger in 1970, is a rare soft-tissue sarcoma typically presenting as a subcutaneous or deep dermal mass in distal portions of the extremities of adolescents and young adults. They are frequently mistaken for ulcers, abscesses, or infected warts that fail medical management. Patients often develop multiple local recurrences of long duration, with subsequent metastases in 30%–50% of cases. We here report a case of left thumb ES that presented as an ulcer and subsequently metastasized to the forearm, arm, axillary lymph nodes, and lungs.
A 12-year-old girl presented with an intra-abdominal mass and cushingoid features. On investigation, she was diagnosed with functioning adrenocortical carcinoma. Two cycles of neoadjuvant chemotherapy followed by excision of the mass with right nephrectomy were performed. On 6-month follow-up, recurrence and metastasis were identified and managed with surgery and chemotherapy.
Neuroendocrine carcinoma (NEC) of the uterine cervix is an uncommon aggressive tumor, comprising less than 5% of cervical malignancies. Undifferentiated carcinoma of the cervix, described by the World Health Organization Classification of Tumors as a distinct entity, is extremely rare, with no histologic evidence of differentiation. Immunohistochemistry with p63 and neuroendocrine markers helps in delineating the type as undifferentiated squamous cell carcinoma or NEC. NEC of the uterine cervix behaves biologically like any other cervical malignancy, has an association with human papillomavirus, and resembles NEC of any other site, with early metastasis and poorer survival. We present a case of a 45-year-old premenopausal female with undifferentiated carcinoma of the uterine cervix originating from small-cell NEC, proven by the presence of a differentiated component in the proximity of the undifferentiated tumor and by immunohistochemistry.
Small cell carcinoma of the prostate is a neuroendocrine tumor of the prostate seen in 0.5%–2% of men with prostate carcinoma. Prostate-specific antigen (PSA) is a common tumor marker that is often raised in prostatic carcinoma. However, prostatic carcinoma can progress with normal or low serum PSA levels at the time of diagnosis. Carcinoembryonic antigen (CEA) is a tumor marker of different carcinomas. Small cell carcinoma of the prostate is a highly aggressive tumor that can progress with normal or low serum PSA levels and raised CEA levels. We report a case of a 65-year-old male with an enlarged prostate with extra-prostatic spread, hepatic metastases, metastatic retroperitoneal and pelvic lymph nodes, and osteoblastic metastasis in the lumbar spine, with normal serum PSA and raised CEA levels. Prostatic biopsy was suggestive of small cell carcinoma.
An elderly male patient presented with cholestatic jaundice and weight loss. On evaluation, he was found to have left renal mass and hepatomegaly. Diagnosis of Stauffer's syndrome was confirmed based on his clinical history, biochemical evaluation, and liver biopsy. Resolution of jaundice was noted after removal of the renal mass.
Cytomegalovirus (CMV) retinitis is usually diagnosed in patients with acquired immunodeficiency syndrome and in solid organ and hematopoietic stem cell transplant recipients. It produces a characteristic necrotizing retinitis which is a sight-threatening condition in these patients. CMV retinitis occurs rarely in patients undergoing only chemotherapy, and very few cases have been reported during the maintenance phase of acute lymphoblastic leukemia (ALL) in children. We report two patients, one with ALL and the other with Burkitt's lymphoma on HyperCVAD chemotherapy developing CMV retinitis during the course of treatment. Both patients were treated with intravenous ganciclovir, oral valganciclovir and intravitreal ganciclovir. Both patients are alive in remission at 60 and 40 months, respectively, with preservation of normal vision.
|
0.966151 |
As I stated in my write up of Disney Magic Castle: My Happy Life, 2013 is looking to be one of the busiest years in Disney gaming yet. It seems that every few weeks one Disney game is announced that gets people excited at the possibilities, and this is without taking into consideration that the Electronic Entertainment Expo (E3) has yet to start! Not only do we have Disney Infinity headlining the original Disney games, we have DuckTales Remastered representing the classic games of yesteryear brought back in a modern way. Recently, yet another game has been announced, this time along the lines of DuckTales Remastered: a classic game remade for a new generation. This game happens to be none other than Sega's Castle of Illusion: Starring Mickey Mouse.
Last year, I wrote about the classic Sega Genesis game, and mentioned how it is seen as not just one of the best games on the system, but also one of the best Mickey Mouse games ever made. In actuality the game is a standard 2D platformer, but one done with a lot of care and detail in its design. It felt like a magical Disney game, one that seemed ripped right out of a Disney animated classic. Such is the legacy of this game that several follow-up games were made, and it was the starting point for last year's Epic Mickey: The Power of Illusion for the 3DS, which continued the story where it left off.
Very little is known about the game at the moment, except that it will be an HD 3D remake handled by Sega's Australia studio, that will retain the same elements that made the first game a classic, and that it will be released this summer as a downloadable title for the major systems. "If you have played the original game, you will also see that we have kept intact many of the major iconic elements of the game that helped define this groundbreaking game at the time of its original release," Sega senior digital brand manager Mai Kawaguchi said when the game was finally confirmed.
Much like DuckTales Remastered, it does open up a lot of questions, like: will these two games be the first of many classic Disney games re-released as downloadable titles, reboots, and remakes? There are many games that have been hailed as some of the best ever made, and what was once considered old is new again in many a gaming circle. Many young Disney gamers grew up without experiencing these games firsthand, and introducing them for the first time not only keeps the legacy of the originals alive, it may inspire other companies to invest in brand-new Disney games.
Stay tuned as more news about Castle of Illusion is unveiled.
|
0.999991 |
Please help me with these so I can make sure to study the right info.
An insurance premium of $18,000 was prepaid in 2007 covering the years 2007, 2008, and 2009. The prepayment was recorded with a debit to insurance expense. In addition, on December 31, 2008, fully depreciated machinery was sold for $9,500 cash, but the sale was not recorded until 2009. There were no other errors during 2008 or 2009 and no corrections have been made for any of the errors. Ignore income tax considerations. 57. What is the total net effect of the errors on Rensing's 2008 net income?
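The two errors in question 57 are pure arithmetic, so a short sketch can make the reasoning explicit. This follows the standard counterbalancing-error treatment (allocate the premium evenly over the years covered; a fully depreciated asset has zero book value, so the whole sale price is a gain); the answer is worked out here, not taken from an answer key:

```python
# Worked arithmetic for the two 2008 errors (a sketch; standard
# counterbalancing-error treatment, not copied from the source).

PREMIUM = 18_000            # prepaid in 2007, covers 2007-2009
YEARS_COVERED = 3
annual_expense = PREMIUM // YEARS_COVERED   # 6,000 per year

# Error 1: the whole premium was expensed in 2007, so 2008 shows no
# insurance expense at all -> 2008 expense understated by 6,000,
# i.e. 2008 net income OVERSTATED by 6,000.
income_effect_insurance = -annual_expense

# Error 2: fully depreciated machinery (book value 0) was sold for
# 9,500 cash on 12/31/2008 but recorded in 2009 -> the 9,500 gain is
# missing from 2008, so 2008 net income UNDERSTATED by 9,500.
income_effect_machinery = +9_500

net_effect = income_effect_insurance + income_effect_machinery
print(net_effect)   # 3500 -> 2008 net income understated by $3,500
```

Positive values here mean 2008 income was understated; the two errors partially offset, leaving a net understatement of $3,500.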
2. Which of the following is accounted for as a change in accounting principle?
a. increase in balance of deferred tax asset minus the increase in balance of deferred tax liability.
b. increase in balance of deferred tax liability minus the increase in balance of deferred tax asset.
c. increase in balance of deferred tax asset plus the increase in balance of deferred tax liability.
d. decrease in balance of deferred tax asset minus the increase in balance of deferred tax liability.
a. a balance in the Unearned Rent account at year-end.
b. using accelerated depreciation for tax purposes and straight-line depreciation for book purposes.
c. a fine resulting from violations of OSHA regulations.
d. making installment sales during the year.
a. the establishment of a deferred tax liability.
b. the establishment of a deferred tax asset.
c. the establishment of an income tax refund receivable.
d. only a note to the financial statements.
7. What is the amount of the deferred tax liability at the end of 2008?
8. Assuming that income tax payable for 2009 is $96,000; the income tax expense for 2009 would be what amount?
|
0.954155 |
I’m working on an interesting project right now: moving away from a marketing automation system. The plan is to go back to using only Salesforce.com with some cheap add-on tools for email, form submission and data quality. Smart or foolish? I’d love to have your input on the potential pitfalls (and benefits) of this approach.
The company in question has used a comprehensive marketing automation system for about 2 years. In the early days it was used to sift through hundreds of new B2B leads per day to identify the valuable leads. This changed over time: now the focus has shifted to pro-active outreach to a handful of executives, instead of targeting thousands of software developers. In addition to cost savings, the thinking is that a full-blown marketing automation system just makes less sense with the new strategy.
How to Replace a Marketing Automation System?
My first reaction was: no way, you should not want to do without any type of marketing automation system (for simplicity's sake, I use this term as synonymous with demand generation and lead management). However, when I started looking into Salesforce.com and the wide variety of add-ons, I was less convinced. The Salesforce.com database has some big issues (e.g. the split between Leads and Contacts), but many 3rd party tools are addressing these weaknesses.
What is easy to replace?
Email marketing that integrates with Salesforce.com is provided by many vendors, like VerticalResponse, Boomerang, ExactTarget, Genius, Lyris and more. There are also some relatively affordable registration form vendors, like FormAssembly and OnDialog. Basic lead scoring features are built into Salesforce.com, and data quality tools are available from vendors like Ringlead, CRM Fusion and Datatrim. Notifications of companies visiting your website are available from Leadlander, Netfactor, LEADSExplorer and DemandBase. You can create reports and dashboards in Salesforce.com to provide analytics. So there are lots of useful add-ons available at a nominal price.
Some Email Service Providers can send email on behalf of the record owner or can handle drip campaigns, but those are exceptions and you sometimes pay quite a bit more for these advanced features. Unsubscribe handling is typically done via a generic page, rather than via a branded page.
If you use a basic form vendor, you have to manually map the fields, and put the form on a landing page yourself. You may want to pre-fill the form, or send a thank-you email or the start of an email drip campaign: this is not always possible. Also, some form vendors are not able to append to existing records (resulting in duplicates) or to link new registrations to a Salesforce.com campaign.
Lead scoring based on attributes (e.g. job title) is built into Salesforce.com, but that does not include activity-based scoring, such scoring based on website visitors, clicks on links in emails or form submissions.
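To illustrate the gap being described, here is a minimal sketch of what combined attribute-plus-activity scoring looks like. The field names and point values are invented for the example, and this is not any vendor's actual API — just the kind of logic a marketing automation system adds on top of Salesforce.com's attribute-only scoring:

```python
# Hypothetical point tables: static attributes vs. behavioral events.
ATTRIBUTE_POINTS = {"title_contains_vp": 10, "target_industry": 5}
ACTIVITY_POINTS = {"website_visit": 1, "email_click": 3, "form_submit": 8}

def score_lead(attributes, activities):
    """Combine attribute-based points (what Salesforce.com can score
    natively) with activity-based points (website visits, email clicks,
    form submissions) into one lead score."""
    score = sum(ATTRIBUTE_POINTS[a] for a in attributes)
    score += sum(ACTIVITY_POINTS[e] for e in activities)
    return score

# A VP in a target industry who clicked an email and submitted a form:
print(score_lead(["title_contains_vp", "target_industry"],
                 ["email_click", "form_submit"]))   # 26
```

The attribute half can live in Salesforce.com formula fields; it is the activity half — feeding clicks, visits, and submissions into the score automatically — that the cheap add-on approach struggles to replicate.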
Even though you can get reports on anonymous visitors via stand-alone tools, it’s much more work to set up notifications of website visits by known users, and even more challenging to sync that information with Salesforce.com.
Then there are specific usage scenarios that are automated in a marketing automation system, such as sending a reminder to non-registrants for an event: with the new approach this needs to be done manually, which takes a lot more time.
Most marketing automation systems replicate the Salesforce.com database with their own database: in the new situation everything is stored in Salesforce.com (or at least: that’s the goal). That is great for manageability, but – if you have the habit of qualifying leads before sending them to the CRM system – you now have a database full of unqualified leads.
This project is still in the planning phase, so I’m still compiling a list of all the pros and cons. One thing is sure: in the new situation the monthly cost will be about $200, down from well over a thousand dollars. That is a significant savings.
But how much more time will it cost to manage the new situation? Are there specific features that create revenue, but simply cannot be implemented with the new approach. What is your take on this?
This entry was posted in Demand Generation, Email Marketing, Landing Pages & Forms, Lead Monitoring, Sales Force Automation and tagged boomerang, CRM Fusion, Datatrim, demandbase, ExactTarget, formassembly, genius, leadlander, LEADSExplorer, Lyris, Netfactor, ondialog, Ringlead, salesforce.com, VerticalResponse on March 22, 2009 by Jep Castelein.
Sales 2.0: Also for Marketing?
Coworker: [confused look] Why are you going to a sales conference?
Coworker: Oh really? That’s interesting.
So many people still think that “Sales 2.0” is only about sales. Not surprising, as it says “sales” and does not mention marketing.
The reality is different: successful implementation of Sales 2.0 requires close collaboration between sales and marketing. For example, David Solinger explained that Ariba now has precise metrics how many leads they need to close a specific amount of business. That is only possible when sales and marketing work closely together.
I stopped by at Marketo‘s booth and had a nice chat with Deanna Deary (Sales) and Kelly Abner (Marketing Director) and asked them about their take. They see marketing & sales as a single revenue cycle. And with better tools (like Marketo) there is better insight in the revenue that marketing influences: so rather than seeing marketing as a cost center, it actually brings in money.
Marketo’s Kelly mentioned that the first Sales 2.0 conference had a lot of “marketing bashing”. That has changed: today’s conference has a dedicated marketing session, and dozens of marketing people are attending.
Tom McCleary of GroupSwim sees the same trend: “marketing and sales need to be in lockstep, and the feedback needs to be instantaneous”. GroupSwim provides online collaboration software that results in better alignment of sales & marketing teams, regardless of the location of these teams.
Another trend is a change in people: I’ve seen traditional marketing VPs who do not like to be pinned down on specific lead goals. They think it’s better to keep the goals vague, and focus on lead quantity rather than quality. Traditional Sales VPs then complain about the marketing leads and try to find ways to become self-sufficient and generate their own leads.
As Sales 2.0 is changing to a collaborative model, different skills and priorities are needed. For marketing specifically, I think we need more analytical skills: people who are not focused on pretty images, but on setting up efficient processes, with metrics to support this.
This analytical marketer is hard to find: I’ve been told that the best Eloqua sales rep is also placing demand-gen specialists with new Eloqua clients, to ensure that they have the skills needed to make “Marketing 2.0” a success.
There are books and conferences on Sales 2.0, but – even though marketing is mentioned – they are primarily about sales. Yet for a successful implementation of Sales 2.0 you need both sales and marketing, and marketing seems to be behind.
How can we get more exposure for the role of marketing in Sales 2.0?
This entry was posted in Demand Generation, Sales Force Automation and tagged collaboration, marketing, sales, sales 2.0 on March 4, 2009 by Jep Castelein.
|
0.964803 |
What would be the interest of a foreign bank in taking over an existing Mozambican bank, or even of setting up a local bank? Mozambique is a tiny market compared to Portugal, South Africa or Malaysia, so there are few obvious short term profits to be made from normal banking operations.
There are three legal ways to profit from owning a bank in Mozambique. First is by the parent bank selling "technical assistance" to the Mozambican subsidiary. Second is by holding the foreign deposits of the Mozambican subsidiary, and paying less than the market interest rates. This makes a bank like BCM somewhat more interesting than it might otherwise be.
Closely linked to foreign exchange operations is the transfer of illegal money, known as "money laundering", and this is an important aspect of Mozambican bank corruption, according to all of the former bank officials we talked with.
"Laundering" is converting "dirty" or illegal money - bribes and kickbacks, money skimmed from aid contracts, income not declared for tax purposes, profits from drug dealing, and money stolen from the banks - into "clean" or legal money, eventually depositing it in a bank account, preferably abroad, where the money can actually be used. The millions of dollars from the Mt 144 bn fraud allegedly deposited in London had been "laundered", because it could be drawn from an account in a major British bank.
Money laundering is a major international issue, and banks are supposed to know that the source of large deposits and large transfers is legitimate before they are accepted. But a former bank official told us: "If you want to make a deposit, no one in Maputo will ask you where the money comes from."
Some of the money is initially in cash and so passes through the exchange bureaus, which are an important focus of bank corruption. Indeed, Mozambique imports $10 mn per week in banknotes, and some of it is exported in cash, literally carried out in suitcases. Diamantino dos Santos, the corrupt Maputo city prosecutor, alleged that Alberto Calú was selling "substantial quantities of forex to individuals in violation of exchange control laws." Calú was responsible for foreign exchange in BCM before privatisation and in the Simões era.
Money laundering and illegal transfers of money abroad have been an issue since the mid-1980s. One common form of money laundering, according to a senior bank official, is for a company to present an import document for, say, $2 mn. Money is legitimately authorised to be transferred abroad to pay the charges. But for a commission, the bank declines to stamp the original of the import document, so the importer can then go to another bank and make the same payment again, and then to a third bank. One bank actually questioned such a transaction by a well known trading company seen as being close to Frelimo, and the office of President Chissano intervened to resolve the problem, the banker said.
Writing in Savana (7 Apr 2000), an un-named ex-director of BCM claimed that in the early 1990s, BCM was involved in illegal transfers of funds abroad and in money laundering. Bankers also point to Banco Austral. Its main computer was the SBB computer in Malaysia; having the main computer outside the control of the Mozambican authorities would facilitate money laundering.
But it was violence in 1997 linked to Mozambique's first new private bank, Banco Internacional de Moçambique (BIM), that brought home the importance of the issue. BIM, which opened in 1995, is owned 50% by Banco Comercial Português (BCP), 25% by the World Bank's International Finance Corporation, 22.5% by the Mozambican state (Estado Moçambicano 8.75%, INSS - Instituto Nacional de Segurança Social 7.5%, EMOSE - Empresa Moçambicana de Seguros 6.25%), and 2.5% by Graça Machel's Fundação para o Desenvolvimento da Comunidade (FDC). BIM's President (PCA) is former Prime Minister Mário Fernandes da Graça Machungo and its Managing Director was from BCP, José Alberto de Lima Félix. Banking sources say that although Machungo is in overall control, most key day-to-day decisions are taken by Portuguese staff named by BCP.
"Private banking" is a branch of banking in which wealthy customers receive personal treatment and are helped to use offshore tax havens and other devices. Experts consider private banking one of the financial services most vulnerable to money laundering. Jorge Correia Rijo was director of private banking for BCP in Portugal, but he was dismissed in March 1997 and charged with fraud in August 1997. He is said to have diverted hundreds of millions of dollars, particularly from Angolans but also Mozambicans. He issued what looked like BCP receipts, but in fact kept the money for himself. The head of one Mozambican trading company is said to have lost $5 mn. Surprisingly, Rijo fled to Mozambique, where he seemed to be protected. In October 1997 he was involved in a suspicious accident when his car overturned near Xinavane. The ambulance that was moving him to a Maputo hospital was itself then involved in an accident.
The newly established BIM had quickly attracted substantial foreign currency deposits, in part because it was the first bank to allow withdrawals from non-metical accounts without advance notice. But the Rijo case raised questions about possible money laundering at BCP and BIM. The BCP-appointed Managing Director of BIM, José Alberto de Lima Félix, began looking more closely at this issue, and at the beginning of December found things which worried him. He was shot and killed in front of a friend's house on Av. Armando Tivane at 20.20 on 2 December 1997 - before he was able to tell anyone else what he had found. Three people were convicted of the killing, which was blamed on a botched car hijacking. Friends of Lima Félix and senior banking officials reject this and say he was killed because he had discovered something about money laundering.
|
0.986719 |
Some regimens of chemotherapy mediate direct cytotoxic effects on the tumor and, in addition, elicit indirect antitumor effects resulting from the immunogenicity of cell death (1, 2). Anthracyclines, oxaliplatin, and ionizing irradiation can trigger immunogenic cell death, whereas many other anticancer agents fail to do so. We delineated the molecular mechanisms underlying the recognition of dying tumor cells by dendritic cells (DC) and found two major checkpoints that dictate the immunogenicity of cell death. First, optimal phagocytosis of chemotherapy or radiotherapy-treated tumor cells by DC requires the translocation of endoplasmic reticulum (ER)-resident calreticulin and disulfide isomerase ERp57 to the plasma membrane of dying tumor cells (3–5). Second, the chromatin-binding high mobility group box 1 protein (HMGB1) must be released by dying tumor cells and must bind to its receptor toll-like receptor 4 (TLR4) on DC to facilitate antigen processing of the phagocytic cargo (6). However, additional signals emanating from dying tumor cells are required for the full-blown maturation and differentiation of DC and T cells, respectively. Therefore, we addressed the contribution of damage-associated molecular patterns (DAMP) to the efficacy of chemotherapy in mouse tumor models (7).
Immunosurveillance of tumors involves lymphocytes and requires an intact interferon γ (IFNγ)/IFNγR pathway (8–10). To investigate the contribution of these immune effectors to the efficacy of chemotherapy, we compared chemotherapy-induced tumor growth retardation in immunocompetent mice and in mice bearing various immunodefects affecting B and/or T cells [Rag−/−, nu/nu, or wild-type (WT) mice treated with a depleting anti-CD8 mAb] or the IFNγ/IFNγR system (Ifng−/−, IfngR−/−, or WT treated with neutralizing anti-IFNγ antibodies). The depletion of CD8+ T cells or the removal of the IFNγ/IFNγR pathway reduced the therapeutic effects of oxaliplatin against CT26 colon cancer and EL4 thymoma, and abolished the antitumor effects of anthracyclines (doxorubicin or mitoxantrone) against CT26 cells or MCA-induced sarcomas. In contrast, other molecules such as IL-12Rβ2, perforin, and TRAIL were dispensable for the efficacy of these cancer chemotherapies in the same tumor models. Chemotherapy promoted a tumor-specific T-cell response that was detectable in the tumor-draining lymph nodes (LN), 7 to 10 days after the systemic administration of oxaliplatin. After ex vivo restimulation with tumor antigens, oxaliplatin-induced CD4+ T cells mainly produced interleukin-2 (IL-2), whereas chemotherapy-induced CD8+ T cells produced high levels of IFNγ. These chemotherapy-induced T-cell responses apparently resulted from local cancer cell death because they were detectable in draining LN (but not distant LN) and because dying (but not living) tumor cells inoculated in the footpad could mimic these immune effects (7).
In conclusion, tumor regression promoted by oxaliplatin or anthracyclines elicits a tumor-specific IFNγ-polarized CD8+ T-cell immune response, which is critical for the efficacy of chemotherapy.
Inoculation of anthracycline or oxaliplatin-treated tumor cells in the footpad induced a potent IFNγ-polarized CD8+ T-cell response relying on TLR4 signaling in DC (6). We investigated the possible contribution of other receptors for DAMPs that could account for the priming of IFNγ-producing T cells during anticancer chemotherapy. In addition to TLR, NOD-like receptors (NLR) recognize DAMPs and link inflammation to innate immunity (11, 12). In macrophages, the so-called “inflammasome” serves as a central sensor for DAMPs (13). The NLRP3 (CIAS1/CRYOPYRIN)-inflammasome is a multimeric protein complex that activates the protease caspase-1, which mediates the processing and secretion of pro-inflammatory cytokines (IL-1β, IL-18, IL-33). The NLRP3 inflammasome can be activated by endogenous danger signals (DAMPs) or pathogen-associated molecular patterns (PAMP), which all act in concert to induce K+ efflux (14–17). Gain-of-function mutations in the human NLRP3 gene cause autoinflammatory disease that can be suppressed by the systemic administration of IL-1R antagonists (18, 19). To investigate the putative role of the NLRP3 inflammasome in anticancer chemotherapy, we compared tumor growth kinetics of EL4 sarcomas treated with oxaliplatin in WT mice versus mice deficient in each component of the inflammasome [NLRP3, its adaptor molecule apoptosis-associated speck-like protein containing a caspase recruitment domain (ASC) or caspase-1] or its final product IL-1β. Each individual player of this complex was required for the success of chemotherapy. The key contribution of IL-1β in the antitumor effects mediated by anthracyclines and/or oxaliplatin was also shown in CT26 colon cancer and more importantly in spontaneous tumors induced by MCA (7).
To analyze the role of the NLRP3 inflammasome in the antitumor immune response elicited by chemotherapy, we inoculated live versus dying tumor cells exposed to chemotherapy ex vivo and determined the capacity of draining LN T cells to produce IFNγ in response to tumor antigens. In these circumstances, T cells from mice deficient in individual components of the inflammasome/IL-1β axis (NLRP3, Casp-1, IL-1R1) failed to mount optimal IFNγ polarized CD8+ T-cell responses. Moreover, caspase-1-deficiency resulted in deficient IFNγ production by tumor antigen-specific CD8+ T cells, whereas normal mice mounted such T-cell responses in the draining LN of established tumors treated by systemic oxaliplatin injections (7). Hence, we showed that the NLRP3 inflammasome is critical for the immunogenicity of cell death triggered by anthracyclines, oxaliplatin, or x-rays.
Multiple distinct bacterial products or endogenous damage signals (such as toxins, ATP, uric acid crystals, alum, silica) stimulate the NLRP3 inflammasome resulting in the proteolytic auto-activation of caspase-1 (15, 20, 21). One of the most pleiotropic activators of the NLRP3 inflammasome is extracellular ATP, which is released from stressed cells and acts on purinergic receptors, mostly of the P2× class (22). To our surprise, multiple distinct cell death inducers (cadmium, etoposide, mitomycin C, oxaliplatin, cis-platin, staurosporine, thapsigargin, mitoxantrone, doxorubicin) induced the release of ATP in vitro from dying tumor cells, by 8 to 20 hours postexposure (7, 23). This ATP release became undetectable when cells were incubated with the ATP-degrading enzyme apyrase or with inhibitors of ATP synthesis such as antimycin A plus deoxyglucose (A/D) or the oxidative phosphorylation uncoupler 2,4-dinitrophenol (DNP). When oxaliplatin-treated EG7 or anthracycline-treated CT26 cells were admixed with ATP scavengers (A/D, apyrase, DNP) or nonselective P2× receptor antagonists (iso-pyridoxalphosphate-6-azophenyl-2′,5′-disulphonate or oxidized ATP), they lost their capacity to elicit protective antitumor immune responses upon subcutaneous inoculation into normal mice. ATP depletion or P2× receptor blockade also abolished the capacity of oxaliplatin-treated EG7 cells [which express the model antigen ovalbumin (OVA)] to prime OVA-specific cells for IFNγ production (7). Altogether, we generalized the finding that dying tumor cells release ATP, which is indispensable for their immunogenicity. In contrast, the immunizing capacity of candidate antigen proteins admixed with TLR adjuvants is not blocked by P2× receptor antagonists and hence is likewise independent of endogenous ATP.
Because the high affinity receptor for ATP is the purinergic receptor P2RX7, we assessed the ability of p2rx7−/− mice to mount a chemotherapy-induced IFNγ polarized CD8+ T-cell response and to control the growth of established tumors after chemotherapy. Importantly, oxaliplatin failed to control tumor progression in p2rx7−/− mice, and, similarly, oxaliplatin-treated OVA expressing EG7 could not elicit OVA-specific T-cell responses in LN harvested from p2rx7−/− mice. Next, we showed that host DC were the cells harboring functional P2RX7 receptors during chemotherapy. In transgenic mice expressing the diphtheria toxin receptor on their DC (under the control of the CD11c promoter), diphtheria toxin depletes LN DC, and this maneuver readily abolishes the CD8+ T-cell response elicited by dying EG7 cells. The adoptive transfer of bone marrow-derived WT (but not in p2rx7−/−) DC loaded with dying EG7 restored the OVA-specific CD8+ T-cell response in p2rx7−/− mice. Accordingly, WT DC loaded with oxaliplatin-treated EG7 cells in vitro produced IL-1β in a NLRP3-, ASC-, and casp-1-dependent fashion, in conditions in which IL-12p40 secretion was independent of the NLRP3 inflammasome. The PAMP lipopolysaccharide (a TLR4 ligand) and the DAMP ATP reportedly act in concert to ignite the NLRP3 inflammasome in macrophages (17). Accordingly, we found that the TLR4 ligand HMGB1 and ATP, which are both released by tumor cells exposed to cytotoxic agents, act in concert to promote IL-1β secretion by DC ex vivo. The treatment of DC with oxidized ATP or anti-HMGB1-neutralizing antibodies prior to loading with dying tumor cells completely abolished IL-1β secretion (7).
As discussed above, ATP released by dying tumor cells engages P2RX7 receptors on DC, thereby triggering the NLRP3 inflammasome that culminates in IL-1β production and DC-mediated IFNγ polarized CD8+ T-cell response. How then could IL-1β contribute to the immunogenicity of cell death? We hypothesized that IL-1β may control and/or switch the quality of the priming of naive CD8+ T cells during the treatment of a tumor by chemotherapy (7). We produced three lines of evidence supporting this contention.
First, DC loaded with dying EG7 cells induced the polarization of naive OT-1 cells (which express a transgenic T-cell receptor that recognizes an OVA-derived peptide, SIINFEKL) into IFNγ producing lymphocytes in a caspase-1- and IL-1β-dependent manner in vitro and in vivo. IL-1β receptor antagonist or neutralizing anti-IL-1β antibodies suppressed IFNγ polarization in this in vitro priming system. Similarly, OT-1 cells were activated (CD69 expression) upon their adoptive transfer into WT mice, but not into casp-1 −/− mice, following immunization with dying EG7 cells. IL-1RA also prevented the expansion of SIINFEKL/Kb tetramer-specific T cells in LN draining the immunization with dying EG7. Second, activation of naive CD3+ T lymphocytes by a cocktail of anti-CD3/anti-CD28 mAb in the presence of IL-1β (but not IL-6 or TNFα) resulted in a IFNγ polarizing effect that was similar to the one described for IL-12 (7). Third, the failure of p2rx7−/− and casp-1−/− mice to mount OVA-specific IFNγ-polarized CD8+ T-cell response after immunization with oxaliplatin-treated EG7 could be overcome by recombinant IL-1β coinjected with the vaccine (7).
In conclusion, IL-1β contributes to the full differentiation of IFNγ polarized CD8+ T cells during the priming of antitumor immune responses that are elicited as a byproduct of anticancer chemotherapy.
Our results identify tumor-derived ATP as a new DAMP, which is required for cancer cell death to be immunogenic. Our data are compatible with a scenario (Fig. 1) in which ATP activates P2RX7 receptors on DC, thereby stimulating the aggregation and/or activation of the NLRP3/ASC/Casp-1 inflammasome, the proteolytic maturation of caspase-1, pro-IL-1β cleavage, and consequent IL-1β release. IL-1β then is required for the priming of IFNγ-producing tumor antigen-specific CD8+ T cells (7). In accord with previous studies (24), IFNγ, rather than perforin or TRAIL-dependent cytotoxic activities, mediates the anticancer activity of T lymphocytes that have been primed in a P2RX7/NLRP3/ASC/Casp-1/IL-1β-dependent fashion.
The integrity of the NLRP3 inflammasome is required for the IFNγ-polarized CD8+ T-cell response elicited by dying tumor cells. Upon chemotherapy, dying tumor cells are captured and processed by host DC, triggering a tumor-DC crosstalk dictated by ATP/P2RX7 and HMGB1/TLR4 molecular interactions. ATP and HMGB1 act in concert to ignite the NLRP3 inflammasome in DC, culminating in IL-1β processing and secretion into the extracellular milieu. IL-1β then contributes to the TCR-driven IFNγ-polarized CD8+ T-cell response, a mandatory step for the efficacy of chemotherapy.
New links between inflammasomes and cognate immunity are being progressively unraveled. Eisenbarth and colleagues (25) showed that the NLRP3 inflammasome pathway stimulated by aluminum-based adjuvant (alum) directs a humoral adaptive immune response. Mice deficient in Nlrp3, ASC, or caspase-1 failed to mount an antibody response to antigen administered with alum, yet generated normal responses against antigens emulsified with complete Freund's adjuvant. The Nlrp3 inflammasome was required for the production of IL-1β and IL-18 by macrophages in response to alum (25). It is noteworthy that in response to dying tumor cells, IL-18 was not produced by P2RX7/NLRP3-activated DC and IL-18R was not required for the immunogenicity of cell death in vivo (7). During lung infection with influenza A virus, ASC and caspase-1 (but not NLRP3) were required for CD4+ and CD8+ protective T-cell immunity, mucosal IgA production, and systemic IgG antibody responses (26), suggesting that each immune response relies on a specific mode of inflammasome activation and perhaps a specific pattern of inflammasome-dependent cytokines.
Grant Support: G. Kroemer and L. Zitvogel are supported by grants from the Ligue Nationale contre le Cancer (équipes labellisées), Fondation pour la Recherche Médicale, European Union, Cancéropôle Ile-de-France, Institut National du Cancer, Association for International Cancer Research, and Agence Nationale pour la Recherche. M.J. Smyth is supported by the National Health and Medical Research Council of Australia and the Victorian Cancer Agency. A. Tesniere is supported by INSERM (poste d'accueil).
Note: G. Kroemer, M.J. Smyth, and L. Zitvogel share senior co-authorship.
. Caspase-dependent immunogenicity of doxorubicin-induced tumor cell death. J Exp Med 2005;202:1691–701.
. Calreticulin exposure dictates the immunogenicity of cancer cell death. Nat Med 2007;13:54–61.
. The co-translocation of ERp57 and calreticulin determines the immunogenicity of cell death. Cell Death Differ 2008;15:1499–509.
. Mechanisms of pre-apoptotic calreticulin exposure in immunogenic cell death. EMBO J 2009;28:578–90.
. Activation of the NLRP3 inflammasome in dendritic cells induces IL-1β-dependent adaptive immunity against tumors. Nat Med 2009;15:1170–8.
. IFNγ and lymphocytes prevent primary tumour development and shape tumour immunogenicity. Nature 2001;410:1107–11.
. Immune surveillance of tumors. J Clin Invest 2007;117:1137–46.
. Nod-like proteins in immunity, inflammation and disease. Nat Immunol 2006;7:1250–7.
. Toll-like receptors and innate immunity. Nat Rev Immunol 2001;1:135–45.
. Inflammatory caspases and inflammasomes: master switches of inflammation. Cell Death Differ 2007;14:10–22.
. Bacterial RNA and small antiviral compounds activate caspase-1 through cryopyrin/Nalp3. Nature 2006;440:233–6.
. Cryopyrin activates the inflammasome in response to toxins and ATP. Nature 2006;440:228–32.
. Gout-associated uric acid crystals activate the NALP3 inflammasome. Nature 2006;440:237–41.
. Critical role for NALP3/CIAS1/Cryopyrin in innate and adaptive immunity through its regulation of caspase-1. Immunity 2006;24:317–27.
. NALP3 forms an IL-1β-processing inflammasome with increased activity in Muckle-Wells autoinflammatory disorder. Immunity 2004;20:319–25.
. The systemic autoinflammatory diseases: inborn errors of the innate immune system. Curr Top Microbiol Immunol 2006;305:127–60.
. Critical role for Cryopyrin/Nalp3 in activation of caspase-1 in response to viral infection and double-stranded RNA. J Biol Chem 2006;281:36560–8.
. The inflammasome: first line of the immune response to cell stress. Cell 2006;126:659–62.
. Liaisons dangereuses: P2X(7) and the inflammasome. Trends Pharmacol Sci 2007;28:465–72.
. Chemotherapy induces ATP release from tumor cells. Cell Cycle 2009;8:3723–8.
. A critical requirement of interferon γ-mediated angiostasis for tumor rejection by CD8+ T cells. Cancer Res 2003;63:4095–100.
. Crucial role for the Nalp3 inflammasome in the immunostimulatory properties of aluminium adjuvants. Nature 2008;453:1122–6.
. Inflammasome recognition of influenza virus is essential for adaptive immune responses. J Exp Med 2009;206:79–87.
Q.5:- What steps can an enterprise take to protect the environment from the dangers of pollution?
(i) Top Management Commitment The first and foremost step is to have a definite commitment by the top management of the enterprise to create, maintain and develop a work culture for environmental protection.
(ii) Involving Employees at All Levels The second step is to ensure that the commitment to environmental protection is shared throughout the enterprise by all divisions and employees, as they will actually implement the environment protection programmes and policies.
(iii) Laying Down Policies for Environment Protection Another important step is to develop clear-cut policies and programmes for purchasing good quality raw materials, employing superior technology, using scientific techniques of disposal and treatment of wastes, and developing employee skills for the purpose of pollution control.
(iv) Legal Compliance A very important and essential step is to comply with the laws and regulations enacted by the Government for prevention of pollution.
(v) Voluntary Participation Participation in government programmes relating to management of hazardous substances, clearing up of polluted rivers, plantation of trees and checking deforestation is also an important step in environmental protection by business enterprises.
(vi) Measuring Results Periodical assessment of pollution control programmes in terms of costs and benefits is also essential in order to have a steady progress with respect to environmental protection.
(vii) Education and Training Another step that can be taken for environmental protection is arranging educational workshops and training materials to share technical information and experience with suppliers, dealers and customers to get them actively involved in pollution control programmes.
How many shopping centers have a parking lot as beautiful as San Ysidro Village?
We designed a sustainable parking area using conscientious materials and rainwater treatment systems such as permeable pavers and bioswales for stormwater runoff. We used repurposed bricks salvaged from the Midwest and planted a colorful, scented landscape which blooms year-round, full of pungent flowers, natal plum, citrus trees, and bougainvillea. Defining details of the project include Santa Barbara sandstone curbs and walls, heirloom gas lamps for ambient lighting, and flower medallions that line parking spots and never need to be repainted. In addition, we preserved existing oak trees that adorned the property and maintained a rustic atmosphere by using California native plants.
We believe the customer experience begins as soon as one enters the parking lot.
Keeping their teen safe on the road is an important mission for parents. Here are eight tips for teen driving safety.
1. Know your teen. Not all teenagers mature at the same time. Parents need to determine if their teen is emotionally ready and responsible enough to drive.
2. Practice improves safety. Provide opportunities to expose your teen to complex intersections and traffic circles to increase your teen's driver skills.
3. Reiterate the importance of "no cell phones while driving". This includes cell phone talking or texting.
4. Know the road conditions. Provide safe exposure (perhaps in a parking lot) of having your teen drive in rain, snow and ice conditions to learn what to do in case of hydroplaning or a skid.
5. Emphasize the importance of car maintenance. Proper oil levels, tire air pressure, and tire tread are just some of the maintenance basics your teenager should know about.
6. Invest in a roadside assistance program. If you don't already have one, invest in a roadside assistance program that your teen can access in case of an emergency.
7. Discuss defensive driving techniques. Even minor fender-benders can hike up premiums, and traffic violations can be costly both financially and to your teen's driving record.
8. Stress the importance of wearing a seatbelt. The bottom line: buckling up saves lives.
One of the important architectural insights from information theory is the Shannon source-channel separation theorem. For point-to-point channels, the separation theorem shows that one can compress a source separately and have a digital interface with the noisy channel coding, and that such an architecture is (asymptotically in block size) optimal. The importance of this is that one can 'layer' the architecture by separating the data compression into bits from the 'physical layer' of coding for noise. The optimality of this attractive architecture is known to break down in networks, for example for broadcast channels or multiple access channels. Nonetheless, this architecture is the basis for network layering in many current network architectures. Therefore, we have been studying the 'cost' of separation, that is, how much we lose through separation. We have also studied special situations where one can demonstrate explicit optimal (hybrid) source-channel coding strategies.
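The point-to-point claim above can be illustrated numerically. The sketch below (hypothetical helper names, not from the papers cited here) assumes the textbook setting of a zero-mean Gaussian source under squared-error distortion sent over an AWGN channel, with one channel use per source sample; separation means solving R(D) = C for the best achievable distortion. In this matched case the result coincides with D = σ²/(1 + SNR), which simple uncoded (hybrid analog) transmission is also known to attain.

```python
import math

def rate_distortion_gaussian(var, D):
    """R(D) for a Gaussian source N(0, var) under squared error:
    R(D) = 0.5 * log2(var / D) bits per sample, and 0 once D >= var."""
    return 0.5 * math.log2(var / D) if D < var else 0.0

def awgn_capacity(snr):
    """Capacity of a real AWGN channel in bits per channel use."""
    return 0.5 * math.log2(1.0 + snr)

def separation_distortion(var, snr, uses_per_sample=1.0):
    """Best distortion of a separate source/channel architecture:
    invert R(D) = uses_per_sample * C to get D = var * 2^(-2 * k/n * C)."""
    c = awgn_capacity(snr)
    return var * 2.0 ** (-2.0 * uses_per_sample * c)

var, snr = 1.0, 15.0
d = separation_distortion(var, snr)
# In the bandwidth-matched case this equals var / (1 + snr),
# the same distortion achieved by uncoded transmission.
assert abs(d - var / (1.0 + snr)) < 1e-12
```

With a bandwidth mismatch (`uses_per_sample != 1`), uncoded transmission no longer matches this benchmark, which is one way to see why hybrid schemes and separation need separate analysis in more general settings.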
For lossy source coding in general communication networks we have shown that the separation approach is optimal in two general scenarios, and is approximately optimal in a third scenario. These results are shown without explicitly characterizing the achievable distortions, or characterizing the rate-distortion regions of a separation approach. Such implicit characterizations of properties (first demonstrated in a related problem by Koetter, Effros and Medard, 2009) could provide a new tool to gain insight into network information theory problems.
The first general scenario, where we demonstrate optimality of separation, is when memoryless sources at source nodes are arbitrarily correlated, each of which is to be reconstructed at possibly multiple destinations within certain distortions, but the channels in this network are synchronized, orthogonal and memoryless. For discrete networks, this result is a generalization of the result of Koetter, Effros and Medard (2009) for message transmission over noisy graphs, where they demonstrated a separation of network coding and channel coding. In our problem, the extracted pure network source-coding problem, due to the network connectivity, reveals the importance of interaction in such network data compression problems. We believe that this motivates a distinct research direction into interactive network source coding, which has not received a great deal of attention in literature.
The second general scenario, where we demonstrate optimality of separation, is when the memoryless sources are mutually independent, each of which is to be reconstructed only at one destination within a certain distortion, but the channels are general, including finite-memory multi-user channels such as multiple access, broadcast, interference and relay channels.
The third general scenario, where we demonstrate approximate optimality of separation, is a relaxed version of the second scenario. Here we allow each independent source to be reconstructed at multiple destinations, with different distortion requirements. The network is still assumed to be general, i.e., including broadcast, multiple access, interference and finite memory, but we restrict our attention to distortion metrics which are difference measures. For this restricted class of distortion measures we demonstrate that the loss from separation is bounded by a constant number of bits. For the important special case of quadratic distortion measures, this constant is shown to be universal across all required distortions, and is upper bounded by 0.5 bits per user requiring the same source.
The above work has been a generalization of previous work on separation over Gaussian broadcast channels, where approximate optimality of separation was established. For a special case of bi-variate Gaussian sources over Gaussian broadcast channels, we have established optimality of a hybrid analog-digital source-channel coding scheme, which we believe is the first such example.
C. Tian, J. Chen, S. Diggavi, and S. Shamai, "Matched Multiuser Gaussian Source Channel Communications via Uncoded Schemes," IEEE Transactions on Information Theory, 2017.
C. Tian, J. Chen, S. N. Diggavi, and S. Shamai, "Optimality and Approximate Optimality of Source-Channel Separation in Networks," Information Theory, IEEE Transactions on, vol. 60, iss. 2, pp. 904-918, 2014.
We consider the source-channel separation architecture for lossy source coding in communication networks. It is shown that the separation approach is optimal in two general scenarios and is approximately optimal in a third scenario. The two scenarios for which separation is optimal complement each other: the first is when the memoryless sources at source nodes are arbitrarily correlated, each of which is to be reconstructed at possibly multiple destinations within certain distortions, but the channels in this network are synchronized, orthogonal, and memoryless point-to-point channels; the second is when the memoryless sources are mutually independent, each of which is to be reconstructed only at one destination within a certain distortion, but the channels are general, including multi-user channels, such as multiple access, broadcast, interference, and relay channels, possibly with feedback. The third scenario, for which we demonstrate approximate optimality of source-channel separation, generalizes the second scenario by allowing each source to be reconstructed at multiple destinations with different distortions. For this case, the loss from optimality using the separation approach can be upper-bounded when a difference distortion measure is taken, and in the special case of quadratic distortion measure, this leads to universal constant bounds.
C. Tian, A. Steiner, S. Shamai(Shitz), and S. N. Diggavi, "Successive Refinement via Broadcasting: Optimizing Expected Distortion of a Gaussian Source over a Gaussian Fading Channel," IEEE Transactions on Information Theory, vol. 54, iss. 7, pp. 2903-2918, 2008.
We consider the problem of transmitting a Gaussian source on a slowly fading Gaussian channel, subject to the mean squared error distortion measure. We propose an efficient algorithm to compute the optimal expected distortion at the receiver in linear time O(M), when the total number of possible discrete fading states is M. We also provide a derivation of the optimal power allocation when the fading state is a continuum, using the classical variational method.
C. Tian, S. N. Diggavi, and S. Shamai, "The achievable distortion region of bivariate Gaussian source on Gaussian broadcast channel," in Proc. Proc. of IEEE ISIT 2010, Austin, Texas, 2010, pp. 146-150.
C. Tian, A. Steiner, S. Shamai, and S. N. Diggavi, "Expected distortion for Gaussian source with broadcast transmission strategy over a fading channel," in Proc. IEEE Information Theory Workshop (ITW) Bergen, Norway, 2007, pp. 42-46.
S. Dusad, S. N. Diggavi, and A. R. Calderbank, "Cross Layer Utility of Diversity Embedded Codes," in Proc. IEEE Conference on Information Sciences and Systems, Princeton, 2006.
Why do some companies need an app?
Every business or company these days needs a mobile app, for numerous reasons. The main reason for introducing a company's mobile app is to increase sales. Having a mobile app also helps business owners become leaders in their industries rather than being left behind. Recent studies have shown that mobile searches outnumbered traditional Google searches on a desktop in ten countries including the United States. As a result, mobile commerce started growing rapidly and many businesses, both small and large, decided to follow this mobile trend and introduce their own mobile apps. These days, not having a mobile-friendly website in effect means kissing your customers goodbye. It is very hard to stay in the game as the competition grows stronger every day. Therefore, businesses need to devote themselves to creating user-friendly websites and mobile apps specifically tailored for use on smartphones and tablets. If a business or company fails to follow this golden rule, its competitors will take over the industry in no time and leave it behind. That is why the demand for companies offering mobile app development as well as websites has been so high in recent years.
These days people use their tablets and smartphones to browse the Internet, in addition to using their devices to download mobile apps. Recent studies conducted by Yahoo have shown that around ninety percent of people's time on their smartphones is spent using mobile apps. It is equally important to have a mobile-friendly website and a mobile app, as both bring many benefits for businesses. Mobile apps are the new mobile websites, and in order to stay relevant and in the game, every business and company needs one. Everyone in the industry is doing it, and no company or business wants to be left behind. Recent studies have shown that around forty-two percent of small businesses have already created their mobile apps, and this trend has been increasing rapidly in the past two years. Having a mobile app greatly increases sales: if a company builds a mobile app which makes the overall buying experience easier, customers will love it, which leads to an increase in sales. Having a mobile app also helps business owners become leaders in their industries. Introducing a mobile app can greatly help a business or company stand out from its competition, especially if it is the first in its industry to introduce one. Investing in innovation is always good, regardless of the kind of business.
Having a mobile app greatly helps companies and businesses provide a superior user experience. A mobile app seamlessly guides customers where they want to go and towards what they want to do, while giving them a more personalized experience. Benefits of a superior user experience include more positive reviews, more followers on social media accounts, a greater number of brand loyalists and more repeat customers. Having a mobile app also helps a business enhance communication with its customers and users, as it provides a direct communication channel. Push notifications easily inform customers and users about new services and products, upcoming promotions and events, and more. A business or company that has a mobile app makes the purchase process easier: an integrated electronic payment system streamlines checkout, making it much easier for users to buy products while at the same time reducing the load on customer service staff. Having a mobile app also gives customers easy access to rewards or loyalty programs, which encourages repeat customers. Users can monitor their points and rewards from their smartphones, and featuring such a program in an app greatly increases the number of people using a company's products and services.
Mobile apps are easily accessible and constantly remind customers and users of a brand, as its icon can be seen on their mobile screens. Having a mobile app greatly helps a business build a stronger brand and gain recognition while promoting social integration and increasing customer engagement. A mobile app maximizes overall customer engagement with a brand and allows customers to easily connect with the business or company on social media. Having a mobile app also opens the door for the next technological trend: in order to stay in the game and ahead of the competition, a company should follow the latest trends and stay up-to-date with the latest technology, both internally and externally.
Copyright © NimbleWorks 2017. All rights reserved.
You might be looking for the perfect wedding venue, and you might be concerned about finding accommodation that meets your demands. Start by working out the exact number of people who want to attend the ceremony; otherwise you might end up having to pay for extra food. Do ask the venue about the extra food they will provide for you during the event. Here are some tips on finding the perfect wedding venue:

ESTIMATE THE NUMBER OF PEOPLE ATTENDING
You must think about the number of people attending the event. Weddings come in many sizes: some can be big while others can be small. The more people who plan on attending, the more items you will have to purchase, and this can all be too much for your budget to handle. Do think first about the beach wedding theme you have in mind.

HAVE A SIT DOWN WITH YOUR PARTNER FIRST
You must look to have a sit down with your partner first, and do enough homework to find out what the rates for your wedding are like. Think about the location, the price as well as the type of venue that you have in mind, together with your budget. Think about how you can book a place, and make sure that you try to cover all the related expenses as quickly as you can.

SPEAK TO THE WEDDING ORGANIZERS
Do try to speak to the wedding organizers about what must be done. You will have to think about what the organizers have in mind and what you are prepared to do about it. Make sure that you visit the venue during the relevant season before you decide to get married, and think about the different unique wedding venues you can afford.

PICK A SPECIFIC SPECIAL DATE
You must look to pick a special wedding date. Keep in mind that different couples can look to get married on the exact same day, which can bring the cost down a great deal further. Other considerations can be religion as well as culture; sometimes you might have to think about the time of the year as carefully as you can, along with the place. Do make sure that you consider these options as carefully as you can.
Oka castle (岡城) is located on a long hill rising 50 meters from the hillside at the east of Bungo Takada city. The Takeda area is a small basin in the southwestern part of Bungo province (Oita prefecture), and is a connecting point of various roads: from Oita city in the northwest direction, Saeki city in the southwest direction, the south part of Fukuoka prefecture in the northwest direction through the Hita area, and the Kumamoto area in the southwestern direction passing Mt. Aso. Because of this geographical condition, Oka castle had been an important strategic point of Bungo province for both defense and offense.
The site of Oka castle is a natural stronghold and an ideal place to build a castle. The castle stands on a slope separated by the deep valleys of the Inaba-gawa and Tamaki-gawa rivers, tributaries of the Ono-gawa river, which merge at the east of the castle. The two rivers flow side by side along the hill and approach the western hillside of the castle like a pincer.
The castle area spreads over the narrow eastern part of the hill, and the castle town was built in the wider hillside area to the east; an outer barrier was built at the approaching point of the two rivers. Furthermore, as the hill is originally a flat slope engraved by water, the hilltop area is relatively flat and suitable for dwelling.
The precise year is unknown, but Oka castle was built by the local lord Ogata clan by the end of the 12th century. There is a tale that the Ogata clan originally built Oka castle to invite Yoshitsune Minamoto (1159-1189), the younger brother of Yoritomo Minamoto (1147-1199), the founder of Kamakura Shogunate.
Yoshitsune had brilliant tactical skill and distinguished himself in the battles against the Taira clan, but later fell into discord with Yoritomo and tried to fight against him. Yoshitsune failed in this attempt and escaped to the Oshu Fujiwara clan in the Tohoku region, and the Ogata clan was also exiled.
Under Kamakura Shogunate, the Otomo clan, a close retainer of Yoritomo Minamoto, was appointed as the governor of Bungo province. The Otomo clan distinguished itself at the invasions of the Chinese Yuan Empire and the Koryo Kingdom in 1274 and 1281, and secured its position as a feudal lord. In the Muromachi Era, the Otomo clan suffered severe internal conflicts but overcame them and grew into a warlord from the beginning of the 16th century.
Under the Otomo clan, Oka castle was held by the Shiga clan, a branch family of the Otomo clan. The Shiga clan was one of the three major branch families of the Otomo clan, along with the Takuma and Tabara clans; it played an active role in the battles against the Kikuchi clan, which held Higo province (Kumamoto prefecture), and then held the inner part of the province at the border with Higo province.
In the middle of the 16th century, the Otomo clan reached its peak under Yoshishige Otomo (1530-1587, famous under his Buddhist name Sorin). The Otomo clan directly or indirectly controlled the north half of Kyushu island, and kept the Ouchi and Mouri clans, lords of the Honshu mainland, away from the prosperous Hakata port. The Shiga clan became part of the Kabanshu, the signing retainers of the Otomo clan, and Oka castle was strengthened as a base toward the west.
But in 1578, Yoshishige sent a large army to save the Ito clan, the governor of Hyuga province (Miyazaki prefecture), which had lost its territory to the invasion of the Shimazu clan, a rising warlord of Satsuma province (Kagoshima prefecture). Both armies fought near Takajo castle at the side of the Omaru-gawa river in the middle part of the province.
The Otomo army suffered a fatal defeat by the ambush tactics of the Shimazu army known as Tsurinobuse at this battle of Mimikawa. Local lords who had followed the Otomo clan, such as the Ryuzoji and Akizuki clans, left the Otomo clan all at once, and the Otomo clan suddenly fell into a tough situation.
In 1586, the Shimazu clan, which had already beaten the Ryuzoji clan at the battle of Okitanawate, started a total attack against the Otomo clan. Before the detached Shimazu army led by Iehisa Shimazu (1547-1587), the Otomo clan lost its main base of Funai city (current Oita city), and Sorin barely withstood a siege at Funai castle, surrounded by the dominant Shimazu army. The Otomo clan lost many castles and kept only several mountain castles such as Tsunomure castle and Kitsuki castle.
A main force of the Shimazu clan led by Yoshihiro Shimazu (1535-1619), one of the strongest generals of the Sengoku era, feared as "Demon Shimazu", also marched toward Oka castle from Higo province in October 1586. The Shimazu army entered Bungo province under the guidance of Chikazane Irita (1533-1601), who had turned to the Shimazu clan, and Chikanori Shiga (1535-1587), the ex-leader of the Shiga clan, also changed to the Shimazu side.
But Chikatsugu Shiga (1566-?), the young leader of the Shiga clan, decided to resist the overwhelming Shimazu army of 30,000 soldiers at Oka castle with his army of 2,000 soldiers. Although several branch castles fell before the attack of the Shimazu army, Chikatsugu firmly kept Oka castle.
Furthermore, Chikatsugu and his force waged various guerrilla actions against the Shimazu army. When the Shimazu army attacked the main gate of Oka castle, Chikatsugu hid matchlock gunners at the backside and ambushed the vanguard of the Shimazu army that fell into the trap.
Seeing this, the Shimazu army avoided a direct attack on Oka castle and instead surrounded its branch castles. But at Danohara castle, the Shiga army intentionally burned the buildings and withdrew, then attacked the Shimazu troops, who had entered the castle but had to stay there without shelter, and routed them. Likewise, at Shinoharame castle, the commander of the Shiga garrison feigned surrender to the Shimazu army and was entrusted with guarding its gate, then opened the gate upon the arrival of the Shiga army and destroyed the Shimazu force.
As the Shiga army stayed in the Shimazu rear and attacked supply troops, the Shimazu army could not advance further into the province. Furthermore, knowing that the overwhelming army of the central ruler Hideyoshi Toyotomi (1537-1598) was approaching, the Shimazu army had to capture Oka castle as soon as possible.
Seeing this, Chikatsugu provoked a decisive battle at Onigajo castle, an outer fort of Oka castle, by letting the Shimazu army learn of a shallow ford leading to the castle. Chikatsugu hid his troops near the castle, and when the Shimazu army attacked, he assaulted it from behind and completely broke it. As a result, Chikatsugu kept Oka castle until the arrival of the Toyotomi army, and Hideyoshi highly praised Chikatsugu's bravery and loyalty.
Under the Toyotomi government the Otomo clan survived for a time as the lord of Bungo province, and Chikatsugu became an important retainer of Yoshimune Otomo (1558-1610). But in 1592, the Otomo army failed in the foreign expedition and was expelled from its territory, and Chikatsugu also lost Oka castle.
Eight years later Yoshimune raised an army to restore the Otomo clan at the time of the battle of Sekigahara, but he was defeated by Yoshitaka Kuroda (1546-1604, famous as Kanbe), the lord of Nakatsu castle, and lost his chance of recovery. Chikatsugu was hired by other warlords and finally became a retainer of the Hosokawa clan in Higo province.
After the Shiga clan, Hideshige Nakagawa (1570-1612) was appointed lord of Oka castle. Hideshige was the second son of Kiyohide Nakagawa (1542-1583), who had supported Hideyoshi at the battle of Yamazaki against Mitsuhide Akechi (1528-1582) in 1582 but died at the battle of Shizugatake in 1583.
Originally Hidemasa Nakagawa (1568-1592), the eldest son of Kiyohide, succeeded to the leadership, but he died through carelessness during the foreign expedition. Hideyoshi was furious at this failure, but in consideration of Kiyohide's loyalty he allowed Hideshige to continue the clan at Miki castle (Hyogo prefecture). Hideshige then became the lord of Oka castle.
On entering Oka castle, Hideshige rebuilt its core area into a modern castle. After the death of Hideyoshi, Ieyasu Tokugawa (1543-1616), the largest lord under the Toyotomi government, and Mitsunari Ishida (1560-1600), the chief administrative officer of the government, struggled for the next hegemony.
At the battle of Sekigahara between the two parties, Hideshige at first supported Mitsunari, but seeing Mitsunari's defeat, he suddenly changed to the Tokugawa side. Hideshige fought Kazuyoshi Ota (?-1617), the lord of Usuki castle who supported Mitsunari, and barely won, with severe losses.
In recognition of this conduct, Hideshige survived as the feudal lord of the Takeda domain under the Edo Shogunate. Having faced many crises, the Nakagawa clan was able to continue its history. Even though its domain was not so large, at 70,000 koku (a unit of rice harvest), the Nakagawa clan continuously expanded Oka castle, and the castle finally became a huge one spreading over 1 kilometer along the hill.
Oka castle spreads over a hill shaped like a letter J rotated 90 degrees, or like a saxophone. The castle consists of three main parts, the inner part, the middle part and the western part, running from the east and separated by bottlenecks. The middle part is a Y-shaped area, and the main body of the castle consists of the central area, secondary area and third area at its three tips. The central area is a rectangular space 100 meters long and 50 meters wide; a three-story turret which served as a substitute for a main tower, and another turret known as the treasure storage, stood at its two edges.
This middle part was built before 1600 and has the shape typical of modern mountain castles under the Toyotomi government. Each area is protected by sheer stone walls, and both sides of the part are securely guarded by masugata (combined) gates built at the bottlenecks. The beautifully curved tall stone walls are built directly on the body of the hill, and the sight of the stone walls rising in horizontal tiers is a picturesque and famous scene of the castle.
The inner part of the castle was originally the main part in the Shiga period and relatively keeps the old shape of the castle, having been modified only minimally. But Shimoharamon gate at the east edge of the castle was securely guarded by a folding gate protected by tall stone walls. This inner area was treated as a sacred place by the Nakagawa clan and was used as the ground for tombs and shrines.
On the other hand, the western part, at the curve of the J, is a vast area developed in the later period. It consists of several large flat terraces about 100 meters long, which were used as the residences of the lord and his relatives. Each area has turrets and gates, and in case of emergency they could serve as independent forts. The main gate from the hillside stands at the south edge of the area, and the rear gate is located at the north.
Interestingly, the western area has elements of Western castles, such as curving vertical stone walls, broad stepped ramps and a stone-built vertical tower. In particular, the broad and gentle slope from the main gate to the Nishinomaru palace is nearly 50 meters wide and quite resembles the approach to Western hilltop palaces such as Windsor castle.
Originally Chikatsugu Shiga was a Christian with the baptismal name Don Paolo, and Hideshige Nakagawa was also said to be a Catholic believer. Such an environment drew many Christians to this area and also brought Western culture to the place. In Oka town there was a cave church used even after the banning of Catholicism.
Just below the hilltop area there is a narrow terrace between the mountain and the river, which was used as the guards' quarters. To the west of the castle, a castle town was built on flat ground protected by the curving river. The total length of the castle including the castle town exceeds 1.5 kilometers, equivalent to the castle of a lord with double or triple the territory.
It is mysterious why a small domain in such an inland area could build and maintain such a huge castle, but one reason might be that the hill itself was so suitable for a castle that little additional ground construction was necessary. The Nakagawa clan held Oka castle until the end of the Edo era.
After the Meiji revolution, Oka castle was abolished and all its buildings were demolished. The remaining stone walls were forsaken, and Rentaro Taki (1879-1903), a famous composer of the Meiji era who grew up in Oita prefecture, saw this scenery and composed "Kojyo no Tsuki" (the moon seen from the devastated ruin of a castle), using lyrics by Bansui Doi (1871-1952). Incidentally, it is said that Bansui wrote the lyrics with the image of Sendai castle (Miyagi prefecture) in mind. The song became a popular one taught at school because of its melancholic melody.
Now the castle site is well maintained as a historical park and visited by many people. The road just below the castle has a special pavement, and when a car passes over it, the friction noise plays the melody of "Kojyo no Tsuki". Regrettably, most cars drive past too fast, so the melody sounds up-tempo, totally different from the melancholic mood of the original song.
In any case, this huge and magnificent castle, consisting of the grand western area, the secure central area with its rising stone walls and the traditional inner area, is truly worth visiting, even though it is located deep inland on Kyushu island. Many old houses still remain in the castle town, and Takeda town is also praised as a little Kyoto of Kyushu.
A 25-minute walk from Bungo Takeda station on the JR Kyushu Houhi Honsen line, or a 60-minute drive via Route 10 and Route 57 from Oita city.
What a sweet story! I would love to ask you a few more questions about home births. Would email be the best way?
Hi sweet friend!!!! I am SO sorry I missed this!! So good to hear from you!!
Tralee is an easy self-drive day trip from Dingle Town in the West of Ireland. We spent four days on a road trip from Dublin to Dingle over the Paddy's Day weekend this year. A long weekend in Dingle allows plenty of time for the road trip to Tralee. How long is the drive between Dingle and Tralee? About an hour from point to point with no stops, making it an ideal day-trip distance. Read on to find out more about the sights and scenery between Dingle and Tralee in North County Kerry.
There are two roads into Dingle Town, the N86 and Conor Pass. We were warned not to tackle the Conor Pass on our way into Dingle after driving for nearly five hours from Dublin (granted, there were a few pit stops on the way...) However, Conor Pass is known as one of the most scenic drives in Ireland so we used our day trip from Dingle to Tralee as an excuse to check it out.
The weather was beautiful at sea level when we set out from Dingle Town after breakfast. I did notice a cloud layer looming near the peak of the surrounding mountains but thought nothing of it until we continued to climb the narrow winding road and soon found ourselves surrounded by fog on Conor Pass. The drive was harrowing for about 5-10 minutes since I really couldn't see more than a few feet in front of me. We emerged briefly from the fog near the peak and looked down into the valley. We couldn't see much but the lingering fog added a peaceful atmosphere to the place that we appreciated.
Driving on, we emerged from Conor Pass and coasted downhill toward Castlegregory. We pulled over to the side of the road and did a bit of beachcombing as we stretched our legs. We had the beach entirely to ourselves and spotted all sorts of cool shells washed up out of the sea.
Tralee is famous for the Rose of Tralee festival in which women from across the counties of Ireland (and those with Irish heritage from around the world) compete in a "beauty" pageant. The whole affair is quite non-traditional compared to what you see in the US and on the global stage: there is no swimsuit competition (thank goodness...). The winner is the person judged to best represent the attributes "lovely and fair". The competition is judged based on personality and the winner should be a good role model for the festival and for Ireland.
We spent some time walking around the park that hosts a list of winners and a statue of a dashing young man delivering a rose to a lovely and fair young lady.
Tralee is also home to the County Kerry Museum, which covers a great deal of regional history. There is even a complete reproduction of a medieval street in the basement to give a sense of what life was like in Tralee hundreds of years ago.
Just outside of town, the Tralee Bay Wetlands offers a unique habitat for birding. The price of admission includes a short boat tour and afterwards guests can walk around and enjoy the hides dotting the property. We even spotted a few stonechats (they are super popular among birders). The reserve also looks after various injured birds. We got to see a Eurasian crane and whooper swan.
Just outside Tralee, Blennerville Windmill sits at the mouth of the River Lee where it meets Tralee Bay. The windmill dates back over 200 years and is the largest of its kind in Europe. There are walking trails along the river and bay making for a peaceful stroll.
On our return drive from Tralee to Dingle Town, we made a pit stop in Anascaul at the South Pole Inn. Tom Crean, the famous Irish Antarctic explorer, lived out his days here. The pub features Tom Crean beer from the Dingle Brewing Company as well as a variety of photos and artifacts from Tom Crean's three Antarctic expeditions. It's a fascinating stop to stretch your legs on the return journey from Tralee.
The West of Ireland is all about scenery. We wound our way down and away from the hills toward water's edge and stopped again to stretch our legs at Inch Beach. I was so impressed by both the moody colors and textures on the beach. I think we would have had an entirely different experience and mood if the sun had been shining that day. The muted earth tones just felt right.
There is so much to see and do on the Dingle Peninsula and a day trip between Dingle Town and Tralee makes for an ideal drive. There was lots to see but the road trip was still manageable in the limited early Spring daylight hours that we had. Are there other places you'd add to a road trip on the Dingle Peninsula?
Learn about things to do on a day trip between Tralee and Dingle Town in the West of Ireland. Written by travel blogger, Jennifer (aka Dr. J) from Sidewalk Safari.
Remind people to control their trash. Use a "Deposit Trash Here" sign. Don't let litter happen.
• Keep safety first - a sign is a constant reminder that will always get noticed.
• A sign makes information accessible and reminds workers to be conscious of safety.
Incorporated: 1901 as Delaware Guarantee and Trust Co.
Wilmington Trust Corporation is the holding company for the Wilmington Trust Company and its subsidiaries. It is the largest banking company in Delaware, with a market share of more than 40 percent in that state. Wilmington Trust has branches all across Delaware, several in the neighboring states of Pennsylvania and Maryland, and one branch in Palm Beach, Florida. The core of Wilmington Trust's business long has been personal trust management, and the bank ranks as the eighth largest nationwide in terms of the personal trust assets it manages. The bank also offers a full range of other banking and investment services. It makes business and consumer loans, manages institutional investments, and runs its own mutual funds group, called the Rodney Square Funds. Wilmington Trust also is one of the nation's largest retailers of precious metals. It sells and purchases metals and stores gold and silver bullion, bars and coins.
Wilmington Trust (under the name Delaware Guarantee and Trust Co.) was incorporated in Delaware in 1901 by members of the du Pont family. The du Ponts held one of the oldest and wealthiest U.S. manufacturing fortunes. Éleuthere Irenee du Pont de Nemours, the company's founder, emigrated from France to the United States in 1797 and subsequently built a gunpowder plant on the banks of Delaware's Brandywine River. Du Pont's company grew to be the largest industry in Wilmington and by the early 1900s was one of the largest corporations in the entire United States. Its assets at that time were believed to be worth around $24 million, a stupendous amount in the economy of that era. The Delaware Guarantee and Trust Co. was founded to handle the banking needs of the growing Du Pont company. The bank changed its name to Wilmington Trust Company in 1903. Wilmington Trust was deeply tied to the du Pont family and their company throughout its early years. Du Pont family members sat on Wilmington Trust's board, and they maintained million-dollar checking accounts and even larger trust funds there. As a result Wilmington Trust, which otherwise might have been a typical small-town bank, ranked near the top of banks nationwide for assets.
Wilmington Trust extended its influence across the state of Delaware beginning in the 1940s, when it began acquiring smaller banks. It bought up the Union National Bank of Wilmington in 1943 and the Industrial Trust Co. of Wilmington in 1955. It bought up banks in the nearby towns of Newport, Claymont, and Newark between 1943 and 1949 and acquired three other area banks in 1959. It branched out into other businesses in the 1960s and 1970s, forming a subsidiary, the Brandywine Insurance Agency, Inc., in 1964 and acquiring a travel agency in 1974. The bank continued to hold massive amounts of du Pont family fortune, and the Du Pont company also did the bulk of its banking there. Trust handling was the bank's preeminent business. By 1969 Wilmington had the twelfth largest trust department in the United States, with trust assets worth $5.7 billion. Wilmington handled the fortunes of many famously wealthy clients, attracting them through national advertising. The bank stated its expertise in dealing with personal wealth in its publicity. By 1969 Wilmington Trust derived 18 percent of its total income from its trust department, a higher percentage even than the giant Morgan Guaranty Trust in New York, the nation's leader in trust assets.
Because of its unique position as the bank of such a wealthy family, there were ways in which Wilmington Trust did not operate like other banks. It derived very little of its income from loans, either to homeowners or to small businesses. The bank kept a larger percentage of its assets on hand than did other Delaware banks, because it needed money to cover the large demand accounts of its wealthy clients. Whereas in 1969 other Delaware banks kept only between eight and 15 percent of their total assets in cash and short-term notes, Wilmington Trust had 24 percent of its total assets on hand. This meant that there were millions of dollars Wilmington Trust was unable or unwilling to invest or loan out. Because of its need to have large amounts of cash available, it did not put as much of its money to work as did other banks.
More than half of Wilmington Trust's board of directors were du Pont family members in the early 1970s, and this also may have led to operations different from those typical at other banks. For example, in the spectacular bankruptcy of Lammot du Pont Copeland Jr. in 1970, some accused the bank's board of protecting its client with secrecy and not alerting other creditors to Copeland's looming financial disaster. Copeland Jr. ran a holding company, the Winthrop Lawrence Corporation, which amassed a small business empire in the 1960s. He or his company owned a string of California newspapers, a toy company, a van line, and college dormitories at one point. But towering debts led Copeland Jr. to declare bankruptcy in 1970, in one of the biggest personal bankruptcy cases ever up to that time. When Copeland Jr. defaulted on a $3.4 million loan from Wilmington Trust, the bank's judgment against him was carried out quite inconspicuously. Copeland Jr. was able to get a $1 million loan from a Swiss bank a month after Wilmington Trust published its judgment against him for default. Wilmington Trust also made little effort to collect the money owed it by Copeland Jr. Most of the loan, an amount of $3 million, had been guaranteed by his father, Lammot du Pont Copeland Sr., who happened to be a director of Wilmington Trust. This seemed a clear case of preferential treatment by the bank, because of family ties.
Wilmington Trust began to suffer from some of its policies, and by 1979 it was not doing well. Earnings were sinking, though the fact was masked in 1979 by profits from the sale of the bank's building. Return on assets was much lower than for its peer banks, and its loan-loss reserves were perilously low. A new president and CEO, Bernard J. Taylor, took over the bank in the summer of 1979, coming to Wilmington from a troubled Philadelphia bank. Taylor convinced Wilmington's board that their bank was in a grim situation, and he quickly embarked on a plan to save it. One element of Taylor's plan was to get Wilmington Trust out of bonds. More than half the bank's assets in 1979 were in 30-year bonds, which were low-yielding and had to be financed with short-term money that was priced every 30 or 60 days according to federal interest rates. Taylor sold off a third of the bank's bonds just months before the Federal Reserve began pushing interest rates up, a fortuitous move. He put the bank's money instead into short-term, high-yield investments. With the money gained from these investments, Taylor began to get the bank involved in commercial lending. This was an area in which Wilmington Trust traditionally did little. In 1978 only a fourth of its assets were in loans. But Delaware was undergoing a building boom, and Taylor determined to take advantage of it. He started the bank lending to small businesses and individuals, and this turned out to be both safe and profitable.
In just three years loans went from 26 percent of assets to 44 percent. And the low-yielding bonds, which had made up more than half of Wilmington Trust's assets when Taylor took over, by 1982 made up only 15 percent of the bank's assets. The attitude of management had changed as well. When Taylor first began working at Wilmington Trust, the bank had an aristocratic atmosphere. Bankers never took their suit coats off, even on the hottest summer days. CEO Taylor himself began appearing around the office in shirtsleeves, provoking alarm and then relief among his colleagues. This seemed to symbolize the bank's new direction. Wilmington Trust was ready to work hard to maintain its profits and was not as bound to patrician tradition. Taylor also staunchly maintained that du Pont family and corporate interests had no influence on bank policy. Wilmington's shares began to shoot up on Wall Street, as the company earned the moniker "the money management firm disguised as a bank" (according to a May 16, 1985 Wall Street Journal article). Wilmington Trust was deemed to be more than just a regular bank, and it apparently had many enthusiastic backers in investment circles.
The bank flourished in the 1980s under Taylor's direction. It continued to expand its loan program and held loans of $2 billion--more than two-thirds of its commercial assets--by 1989. Its share of the commercial loan market in Delaware doubled, from less than 20 percent at the start of the decade to almost 40 percent in 1989. In ten years the bank had gone from an ailing, tradition-bound institution to one of the most profitable banks in the nation. For the two benchmark measures, return on assets and return on equity, Wilmington Trust showed percentages almost double the average for other banks its size. Not only had commercial lending added to the bank's profitability, but its traditional business of handling trusts also had grown. By the end of the 1980s more than ten percent of individuals on Forbes magazine's list of the 400 richest people in the United States had their trusts at Wilmington.
After its amazing turnaround in the 1980s Wilmington Trust planned major expansion over the next decade. As the building boom in Delaware began to slow in 1990, most of the state's banks became unwilling or unable to make new construction loans. But Wilmington Trust was in such a sound financial condition that it continued to make these loans, and the bank picked up a good number of new customers. The bank planned to increase its market share by expansion as well. In 1991 the bank adopted its present holding company structure, with the Wilmington Trust Corporation holding the Wilmington Trust Company. This structure allowed it to meet regulations in the neighboring states of Maryland and Pennsylvania that would allow it to acquire banks there. But first Wilmington Trust turned its attention to acquiring small banks in its home state. In 1992 Wilmington Trust bought the Sussex Trust Company, a $400 million-asset bank with 20 branches in the southern part of Delaware. That part of the state was growing more quickly than the northern area around Wilmington. Soon after this purchase Wilmington Trust took over $45 million in deposits from a failed Westchester, Pennsylvania bank, the Bank of Brandywine Valley. Because Brandywine Valley's failure was deemed an emergency by Pennsylvania banking authorities, Wilmington Trust was allowed to operate the bank as a branch rather than as a subsidiary. This was contrary to usual practice, but Wilmington used this precedent to convince the state to let it open other branches in Pennsylvania. It soon had branches in Maryland as well.
Wilmington Trust also expanded its role as a precious metal dealer in the early 1990s. In late 1990 Wilmington Trust bought the precious metal program of New York-based Citibank. Citibank was a subsidiary of Citicorp, the nation's largest bank, and one of the largest banks in the world. Wilmington Trust had cleared the way for this transaction in 1987, when it became the first bank outside of New York City approved by the New York Metals Exchange to store gold and silver bullion. With the 1990 deal with Citibank, Wilmington Trust became one of the largest precious metals retailers in the United States. It built this business up even more over the next few years. In 1992 Wilmington Trust bought up the metals depository business of the Bank of Delaware, and in 1993 it acquired the retail sales and service business of Idaho's Sunshine Bullion Co.
Next the bank moved to expand its handling of mutual funds. Wilmington Trust had doubled its sales of mutual funds between 1989 and 1994, and it had several competitive advantages over other banks. Wilmington Trust was a state-chartered bank but not a member of the Federal Reserve system, and this outsider status allowed it to do what few other banks could, namely distribute its own mutual funds. Other banks were required to contract with another agent to distribute its funds, but Wilmington was able to manage all aspects of its mutual fund program, which it called the Rodney Square Funds. The bank traded on its expertise in the trust area and its reputation for handling the fortunes of markedly wealthy clients to build up its mutual fund business.
Wilmington Trust's mutual funds business stumbled in 1994, after taking some losses on risky investments. In July 1994 Standard and Poor's Corp. downgraded Wilmington's triple A rating to A, because the Rodney Square funds held 12 percent of its portfolio in structured and variable-rate notes. Standard and Poor's considered these notes not sufficiently stable, and perhaps they were right, as the Rodney Square fund subsequently lost nearly $4 million. The head of Wilmington's subsidiary Rodney Square Management Corp. resigned in 1995, and the bank reorganized the unit. But two years later Wilmington's money management business seemed to be thriving. The firm acquired a 24 percent stake in a New York money management firm, Clemente Capital Inc., in April 1996. Clemente previously had acted as sub-advisor for Wilmington's trust department, and it was well known for its wealthy clientele. Without revealing minimums, Clemente claimed its mutual funds were basically for people with hundreds of thousands of dollars to invest. This high-end business fit nicely with Wilmington Trust's expertise. And by purchasing a share in Clemente, Wilmington saved itself the sub-advisory fees it had been paying the firm to manage some $15 million for its clients.
In a similar move, Wilmington Trust next made arrangements with the New York investment firm Morgan Stanley Group and Florida's J.W. Charles Financial Services Inc. to generate referrals for its trust business. These two investment firms had hundreds of brokers in their sales forces, and Wilmington wanted to reach their broad client bases by paying for referrals. Then in late 1996 Wilmington Trust created a new structure for its mutual fund administration. The bank changed to a so-called master-feeder structure, where assets of several different mutual funds (the feeders) were managed centrally by a "master." This new structure gave the bank more flexibility in handling different funds and allowed it to convert some of its nonproprietary funds into proprietary ones.
All of these changes led to increasing earnings in the late 1990s. Wilmington Trust's fees from trust and asset management climbed, and its commercial loan department continued to be quite successful. The bank gained income from its newer ventures while continuing to grow in earnings from its core trust management business. At the end of the 1990s Wilmington Trust seemed in a very solid position. It had used its expertise in trusts to branch into the vibrant mutual fund market and was backed up by an excellent commercial loan portfolio. From a lackluster bank that dealt principally in trusts in 1979, Wilmington had become a powerhouse in financial services in the 1990s. Nevertheless, its growth had been well planned and relatively moderate. Aside from the mutual fund bump in 1994, Wilmington Trust had proved itself uncommonly successful in adapting to new markets and services. Its steady growth was predicted to continue over the coming years.
Principal Subsidiaries: Wilmington Trust Company; Wilmington Trust of Pennsylvania; Wilmington Trust FSB.
Bennett, Robert A., "Wilmington Trust: A Little Gem," United States Banker, July 1992, pp. 28-31.
Braitman, Ellen, "A Rave Review for Wilmington," American Banker, April 22, 1992, p. 6.
Crockett, Barton, "S&P Slashes Its Rating on Wilmington Trust Money Market Fund," American Banker, July 5, 1994, p. 1.
Forde, John P., "Delaware Bank's Strategy Is To Put Future in Trust," American Banker, June 27, 1985, pp. 3-5.
Fraser, Katharine, "Morgan Stanley, Broker, To Sell Wilmington Trust Services," American Banker, August 8, 1996, p. 12.
------, "Wilmington Trust Bolsters Fund Administration Business with New Structure, Clients," American Banker, October 9, 1996, p. 10.
Fraust, Bart, "Stocks Top Balanced Funds: Wilmington Trust Has Best Return in Index," American Banker, May 25, 1983, pp. 2-4.
Hensley, Scott, "Wilmington's Funds Chief To Call It Quits in Shake-Up," American Banker, December 27, 1995, p. 10.
Kapiloff, Howard, "Wilmington Takes Stake in a Money Manager," American Banker, April 17, 1996, p. 10.
Munford, Christopher, "Wilmington Trust Set To Acquire Citibank's Precious Metals Unit," American Metal Market, November 5, 1990, p. 7.
Newman, A. Joseph Jr., "Wilmington Trust of Delaware Comes In Out of Cold," American Banker, March 16, 1983, pp. 2-4.
Novack, Janet, "They Never Put on Jackets Again," Forbes, October 2, 1989, pp. 120-23.
Pare, Terence P., "Bankers Who Beat the Bust," Fortune, November 4, 1991, pp. 159-63.
Phelan, James, and Pozen, Robert, The Company State, New York: Grossman Publishers, 1973.
Piro, Dan, and Stiroh, Kevin, "Wilmington, Banc One: The Logic Behind Two High-Priced Deals," American Banker, June 19, 1991, pp. 13-14.
Rundle, Rhonda L., "Wilmington Trust Plans a Shift in Business, But Surge in Stock's Price Prompts Caution," Wall Street Journal, May 16, 1985, p. 63.
Talley, Karen, "Wilmington Trust Expanding in Mutual Funds," American Banker, January 26, 1994, p. 8.
"Wilmington To Buy Bullion Dealer," American Banker, July 27, 1993, p. 5.
What is one of your favorite memories from a vacation you’ve taken? Vacations are the best! In March, my husband and I had the opportunity to go to Hawaii! We started planning for the trip about 4-6 months ahead of time. Sitting down together, we determined how much the vacation would cost so we could save up for it. Food can be an expensive part of a trip. If you are able to make a lot of your own food, this can save hundreds of dollars on a vacation. Find out in advance what your hotel/lodging has available. For our trip, we found out a microwave, blender, and fridge would be available to us. Knowing our options for cooking allowed us to plan for sandwiches, smoothies, snacks for the road, and healthy food that could be heated in the microwave. Keeping your plans organized can be challenging, especially if you have lots of flights and travel details. There are stages of preparation that happen during the planning process to help keep you organized.
Begin with financial planning and saving; depending on the trip, you may need to plan even further in advance to save up. Make sure your passports are up to date if they are needed to get to your destination.
Purchase plane tickets: this time-frame depends on where you’re going. If you’re traveling within the United States this time frame works. If you’re traveling internationally, you’ll want to start looking for tickets as far in advance as 8-10 months. This is also a good time to reserve your lodging and decide on transportation. If it’s a popular vacation location and you plan to go at a peak tourist time, you may need to reserve lodging 6 to 12 months in advance. If you Google “Peak Tourist Time to [insert your destination here]”, you can research the best time for you to go.
Decide where you’d like to explore while at your destination, what activities you want to do, research food options, and find out what amenities your lodging offers. Gather all the details, like address, phone number, hours of service, cost, and when you’d like to do the activity. Then, write them in your digital vacation file.
Start your packing list and grocery list for any items you need to purchase before going on the trip. Also, start a list for any items that you anticipate needing to buy while you are on the trip.
Make sure all your travel plans and information are organized and in one location. This could look like a digital folder, document, or paper folder. Print off anything you may need a backup paper copy of. Running through the details will help make sure you don’t have to rush at last minute or end up forgetting an important address.
Relax and start preparing your mind and body to slow down. If you are running like crazy up until the moment you leave, you’ll need a day or two at the beginning of your vacation just to recover. If you take some time to prepare before, you can enjoy every moment. Check in for your flight, pack the car, and get some good rest!
Use Evernote or OneNote to capture all your notes and plans.
Make Your Lists – Here are some suggestions: flights, food, hotel, airports, emergency services, budget, places to see, car rentals, packing, & the weather.
Print your hotel, flight, and rental car confirmations as a backup for any mobile confirmations.
Check in to the flights early on your airline’s mobile app. Typically, your mobile check-in also serves as your boarding pass.
Check your health insurance coverage early (if traveling to a different country).
Use Pinterest to collect ideas and do research about your destination.
Plan for fun – Schedule a rest day following your return date. This will give you time to unpack and get settled back in.
Don’t try to do everything. If you pick the places to see and things to do that are most important, you’ll be able to enjoy them. Running from one place to another may get a lot checked off your list, but it doesn’t make for restful experiences.
Thinking ahead for your vacation preparation alleviates the pressure of last minute chaos and missed items when you’re off on your adventure. If you start far enough in advance you won’t feel like you need a vacation from your vacation.
Question: What tips and tricks have you learned about vacation planning? Share in the comments below.
Come up with an original product or service idea that can help solve a problem. The brand must be pitched to a parent company, with a clear and distinguished connection between the two.
The idea was to create a new sustainable and innovative toothbrush collection for the parent company, Tom's of Maine. The toothbrushes are designed so the handle is something you keep; the bristles are compostable and replaceable. Made from reused plastic from the ocean, every toothbrush is unique and beautiful. The stylish look of the brushes makes people want to improve their oral hygiene.
Exceedingly steadfast against the breath of the enemies of God.
Eustathios the divine Confessor lived during the reign of the first Christian emperor Constantine the Great (306-337). He was from Side in Pamphylia, as Jerome says in his "On Ecclesiastical Writers". Niketas says he was from Philippi in Macedonia. This Saint was a teacher, and sent by the wisdom of his words the rays of Orthodoxy throughout the ecumene. He was also present at the First Ecumenical Synod of Nicaea, which gathered in 325, keeping the dogma of piety and Orthodoxy, while rebuking and overturning the Arians. These mindless ones had introduced a cutting and division into the one nature of the Holy Trinity, calling the Son of God a creature, and dividing Him from the essence and honor and worthiness with His consubstantial Father.
Therefore through his divinely-inspired boldness, and through the zeal he had on behalf of the Orthodox faith, Eusebius of Nicomedia and Theognis of Nicaea and Eusebius of Caesarea, together with all those who were in communion with the Arian blasphemy, or we should say unbelief, slandered him. Thus under the pretense of going to Jerusalem, they went to Antioch, and gathered an assembly to depose the Saint. To make it appear as if they deposed him for a sensible reason, what did the cunning ones fabricate? They gave large gifts to a prostitute who had a newborn child, and they persuaded her to speak falsehood, saying that she conceived the child with the Saint. Therefore the prostitute came to the assembly with that child, and she slandered the Saint saying that by him did she receive and conceive the child. Those treacherous Hierarchs sought no other testimony, but only gave the oath to the woman and this was enough for them, so they immediately crafted the deposition of the Saint. And not only this, but they persuaded the emperor (namely Constantine the Great) to exile the Saint. Thus the blessed Eustathios went to Philippi through Thrace, and there his life came to an end.
* Before becoming Archbishop of Antioch he was the Bishop of Beroea (Aleppo) in Syria. At the First Ecumenical Synod he was elevated to Archbishop of Antioch. At the Synod of Antioch which deposed him, he was also accused of being a Sabellian. Upon his exile, he went to Traianopolis in Thrace in either 329 or 330. His deposition caused a schism in the Church of Antioch that was not healed until 414. Thirty years after his exile, in 360, he reposed in peace. John Chrysostom calls him a "Martyr", while Michael Syngellos (ca. 761-846) calls him "the foremost of the Fathers of Nicaea."
Thou didst shine like a brilliant sun in the First Synod, O righteous Eustathios, for thou didst proclaim the Son to be of one Essence with the Father and with the Spirit. Pray, O Hierarch of God, that unwavering steadfastness in the Faith be granted to all who honour thee.
Thou didst purify thyself with godly works, and become a pillar of the priesthood in divine vision and a blameless life. Thou didst withstand the onslaughts of temptations as a foundation and bulwark of the Church. And so we cry: Rejoice, O Father Eustathios.
Compute the classical matching cumulative distribution function.
The cumulative distribution function is computed by summing the probability density function.
For sufficiently large values of k, the classical matching distribution can be accurately approximated with a Poisson distribution with λ = 1. Dataplot computes MATCDF from the above definition for values of k < 20. For values of k ≥ 20, Dataplot computes MATCDF using the Poisson cdf with λ = 1.
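The two-branch rule above can be sketched directly in code. The following Python sketch is illustrative, not Dataplot's actual implementation (the function names and the population-size parameter `n` are my own): the exact pdf for the number of matches is summed to give the cdf, and a Poisson(λ = 1) cdf is available for the large-population branch.

```python
from math import factorial, exp

def matching_pdf(x, n):
    # P(exactly x matches in a random arrangement of n items):
    # pdf(x; n) = (1/x!) * sum_{j=0}^{n-x} (-1)^j / j!
    return (1.0 / factorial(x)) * sum((-1) ** j / factorial(j)
                                      for j in range(n - x + 1))

def matching_cdf(x, n):
    # The cdf is computed by summing the pdf, as described above.
    return sum(matching_pdf(i, n) for i in range(x + 1))

def poisson_cdf(x, lam=1.0):
    # Poisson(λ = 1) cdf used as the approximation for large populations.
    return sum(exp(-lam) * lam ** i / factorial(i) for i in range(x + 1))
```

Already at n = 20 the exact cdf and the Poisson cdf agree to many decimal places, which is why the Poisson form can safely replace the exact sum for large populations.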
The Angolan Football Federation (Portuguese: Federação Angolana de Futebol, FAF) is the governing body of football in Angola. It was founded in 1979, and affiliated to FIFA and to CAF in 1980. It organizes the national football league Girabola and the national team.
Angola's first appearance in the FIFA World Cup was in 2006; playing in Group D, they lost only 1-0 to Portugal in their first match. Later that year, they successfully bid for the right to host the 2010 African Cup of Nations.
Angola at the FIFA website.
The 2018 Girabola was the 40th season of top-tier football in Angola. The season ran from 10 February to 2 September 2018. The league comprised 16 teams, the bottom three of which were relegated to the 2019 Provincial stages.
Primeiro de Agosto won their third title in a row, qualifying for the 2018–19 CAF Champions League. On an exceptional basis, on account of the Angola Cup not being contested this season, Petro de Luanda, the runner-up, qualified for the 2018–19 CAF Confederation Cup.
The Angola national football team, nicknamed Palancas Negras (Giant sable antelopes), is the national team of Angola and is controlled by the Angolan Football Federation. Angola reached the 45th place in the FIFA Rankings in July 2002. Their greatest accomplishment was qualifying for the 2006 World Cup, as this was their first appearance on the World Cup finals stage.
The Angola national futsal team is controlled by the Angolan Football Federation, the governing body for futsal in Angola and represents the country in international futsal competitions.
The Angola national under-20 football team is the national under-20 football team of Angola and is controlled by the Angolan Football Federation. The team competes in the African U-20 Championship and the FIFA U-20 World Cup, which is held every four years.
Angola Olympic football team represents Angola in international football competitions in Olympic Games. The selection is limited to players under the age of 23, except during the Olympic Games where the use of three overage players is allowed. The team is controlled by the Angolan Football Federation.
The Angola women's national football team represents Angola in international women's football and is controlled by the Angolan Football Federation. Their best place in the FIFA Rankings was 82nd, in December 2003. The only tournaments they qualified for were the 1995 and 2002 African Women's Championships, and their best finish was as semi-finalists in the 1995 tournament. Angola, in contrast to many other African countries, has never suffered a heavy defeat; they have seldom lost by more than two goals.
Angola finished in third place at the African Championship in 1995. Angola also qualified for the Championship in 2002, where they beat Zimbabwe and South Africa, but lost to Cameroon by one goal. Since then, Angola have not qualified for the championships.
During qualification for the 2008 Olympics, Angola did not get any further than the first round, where they lost to Ghana. However, they did reach the final of the COSAFA Cup, where they met South Africa, who beat them 3–1.
Angola Provincial Stage (or Angolan Third Division) is the third division of Angolan football, and it is organized by the Angolan Football Federation.
Clube Desportivo 1º de Agosto is a multisports club from Luanda, Angola. The club, founded August 1, 1977, is attached to the Angolan armed forces, which is its main sponsor. Its main team competes in men's football, and its professional basketball team is also noteworthy within the club. The club's colors are red and black. The club won its first title in football, the Angolan League, in 1979, and in basketball in 1980. The handball and volleyball teams have also won many titles for the club.
The Primeiro de Agosto Sports Club has its football team competing at the local level, in the events organized by the Angolan Football Federation, namely the Angolan National Football Championship a.k.a. Girabola, the Angola Cup and the Angola Super Cup as well as at continental level, at the annual competitions organized by the African Football Confederation (CAF), including the CAF Champions League and the CAF Confederation Cup.
Clube Desportivo Recreativo do Seles is an Angolan sports club from the village of Seles, in the southern province of Kwanza Sul.
The team currently plays in the Gira Angola. Because their home stadium (Campo da Mangueira) failed to meet standard requirements set by the Angolan Football Federation, the team has been playing its home games at the Estádio Comandante Hoji Ya Henda in the provincial capital, Sumbe.
Campo de São Paulo is a multi-use stadium located in Bairro dos Congolenses, Luanda, Angola. The stadium has a capacity of 5,000 people, and previously hosted Girabola (Angolan national football league) matches prior to falling into disuse and disrepair. In 2017, the Angolan Football Federation acquired the stadium with plans to convert it to a training center for Angola's national football teams.
Council of Southern Africa Football Associations (French: Conseil des Associations de Football en Afrique Australe; Portuguese: Conselho das Associações de Futebol da África Austral), officially abbreviated as COSAFA, is an association of the football playing nations in Southern Africa. It is affiliated to CAF.
COSAFA organise several tournaments in the Southern African region, and its most renowned tournament is the COSAFA Cup.
Estádio do Buraco is a football stadium in Lobito, Benguela Province, Angola.
It is owned by Académica Petróleos do Lobito and holds 5,000 people.
Gira Angola aka Segundona is the 2nd division of Angolan football (soccer). It is organized by the Angolan Football Federation and gives access to Angola's top tier football division Girabola.
Girabola, or Campeonato Nacional de Futebol em Séniores Masculinos, is the top division of Angolan football. It is organized by the Angolan Football Federation. The league winner and runner-up qualify for the CAF Champions League.
Jacinto Pereira (born December 10, 1974 in Luanda) is a retired Angolan football defender. He last played for ASA in the Girabola.
Evaluate 0.00000000552 × 0.0000000006188 and express the answer in scientific notation. You may have to rewrite the original numbers in scientific notation first.
Evaluate 333,999,500,000 ÷ 0.00000000003396 and express the answer in scientific notation. You may need to rewrite the original numbers in scientific notation first.
Express the number 6.022 × 10²³ in standard notation.
Express the number 6.626 × 10⁻³⁴ in standard notation.
When powers of 10 are multiplied together, the powers are added together. For example, 10² × 10³ = 10²⁺³ = 10⁵. With this in mind, can you evaluate (4.506 × 10⁴) × (1.003 × 10²) without entering scientific notation into your calculator?
When powers of 10 are divided into each other, the bottom exponent is subtracted from the top exponent. For example, 10⁵/10³ = 10⁵⁻³ = 10². With this in mind, can you evaluate (8.552 × 10⁶) ÷ (3.129 × 10³) without entering scientific notation into your calculator?
Consider the quantity two dozen eggs. Is the number in this quantity “two” or “two dozen”? Justify your choice.
Consider the quantity two dozen eggs. Is the unit in this quantity “eggs” or “dozen eggs”? Justify your choice.
Fill in the blank: 1 km = ______________ μm.
Fill in the blank: 1 Ms = ______________ ns.
Fill in the blank: 1 cL = ______________ ML.
Fill in the blank: 1 mg = ______________ kg.
Express 67.3 km/h in meters/second.
Express 0.00444 m/s in kilometers/hour.
Using the idea that 1.602 km = 1.000 mi, convert a speed of 60.0 mi/h into kilometers/hour.
Using the idea that 1.602 km = 1.000 mi, convert a speed of 60.0 km/h into miles/hour.
Convert 52.09 km/h into meters/second.
Convert 2.155 m/s into kilometers/hour.
Use the formulas for converting degrees Fahrenheit into degrees Celsius to determine the relative size of the Fahrenheit degree over the Celsius degree.
Use the formulas for converting degrees Celsius into kelvins to determine the relative size of the Celsius degree over kelvins.
What is the mass of 12.67 L of mercury?
What is the mass of 0.663 m³ of air?
What is the volume of 2.884 kg of gold?
What is the volume of 40.99 kg of cork? Assume a density of 0.22 g/cm³.
The quantity is two; dozen is the unit.
One Fahrenheit degree is five-ninths the size of a Celsius degree.
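A few of the arithmetic exercises above can be checked numerically. This short Python sketch is illustrative and not part of the original chapter; the numbers come straight from the exercises, including the text's stated conversion 1.602 km = 1.000 mi.

```python
# 0.00000000552 × 0.0000000006188, rewritten in scientific notation first
a = 5.52e-9
b = 6.188e-10
product = a * b          # ≈ 3.416 × 10⁻¹⁸

# 60.0 mi/h in km/h, using 1.602 km = 1.000 mi from the text
speed_km_h = 60.0 * 1.602        # 96.12 km/h

# 67.3 km/h in m/s (1 km = 1000 m, 1 h = 3600 s)
speed_m_s = 67.3 * 1000 / 3600   # ≈ 18.7 m/s

print(product, speed_km_h, speed_m_s)
```

Entering the values in scientific notation, as the exercises suggest, keeps the exponents explicit and avoids miscounting zeros.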
End-of-Chapter Material by Jessie A. Key is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
The very well-known and most popular Slovak football club, Slovan Bratislava, was established shortly after the First World War and the emergence of the Czechoslovak Republic (Czechoslovakia), on 3 May 1919, as the I. Czechoslovak sport club of Bratislava (I. Čs.Š.K.). Ten years after its founding, I. Čs.Š.K. Bratislava was clearly dominant in Slovakia and often competed with strong opponents from Central Europe. Slovan Bratislava, in its sky-blue jerseys, started winning the most valuable trophies at the turn of the 1920s and 1930s. Slovan was national champion of Slovakia four times and twice received the title of amateur champion of Czechoslovakia. Also worth mentioning is a legendary 8:1 victory over the professional players of England's Newcastle United in May 1929 in Bratislava. Between 1939 and 1945, playing under the name SK Bratislava, Slovan won four championship titles and was clearly the best team in Slovakia.
After the Second World War and the re-establishment of Czechoslovakia, SK Bratislava began to challenge the dominance of the Czech clubs from Prague, Sparta and Slavia. In 1949 the footballers in sky-blue jerseys won their historic first title in the premier Czechoslovak league. The club won the title under a changed name again, this time as Sokol NV Bratislava, and took the trophy for the best team in Czechoslovakia also in 1950 and 1951. The club won another national title, under a "new" name, Slovan, in 1955. This successful winning era was followed by 13 difficult years, in which Slovan finished six seasons in second place. Slovan struggled to win the title despite the fact that its squad included the best Slovak footballer of the 20th century, Ján Popluhár.
On 21 May 1969 Slovan Bratislava defeated FC Barcelona in the 1969 European Cup Winners Cup Final by a score of 3–2.
Finally, the glory of Slovan began to shine again in 1969, which is written into the club's history as its most successful year: Slovan became national champion again and won its most valuable trophy, the UEFA Cup Winners' Cup. Slovan won this title under head coach Michal Vican, who built a great team formed partly of younger players from Slovan's youth categories and partly of more experienced players. These players were proud of their club and willing to sacrifice a lot for it; there was passion and love in their football. Slovan reached the final past the Yugoslav team FK Bor, FC Porto, AC Torino, and Scotland's Dunfermline Athletic FC. In the final, Slovan faced a great opponent, the well-known FC Barcelona. The battle for the UEFA Cup Winners' Cup took place on 21 May 1969 at the St. Jakob stadium in the Swiss city of Basel. Michal Vican sent into this fight the heroes of the sky-blue club: Alexander Vencel, Jozef Fillo, Vladimír Hrivnák, Alexander Horváth, Ján Zlocha, Ľudovít Cvetler, Jozef Čapkovič, Ivan Hrdlička, Karol Jokl, Ladislav Móder, and Ján Čapkovič. Slovan won the difficult battle 3:2, with goals from Ľudovít Cvetler, Vladimír Hrivnák, and Ján Čapkovič. After the referee's final whistle a celebration began, not only on the pitch but across all of Slovakia. That evening remains the most memorable moment of Czechoslovak football and is still remembered by the public.
The 1969/1970 season brought further success: Slovan was national champion again and back at the top of its glory.
Other championship titles came in the 1973/1974 and 1974/1975 seasons. The team was led by Jozef Vengloš, who was also coach of the national team; that national side was built around a large number of Slovan players and won the title of European champions in 1976.
After these incredible seasons an unsuccessful period followed, and Slovan was relegated to the second division in the 1984/1985 season. Slovan went through some difficult years, but in the early 90s it showed its power again and entered another golden season with a new coach, Dušan Galis. Slovan broke the long-lasting dominance of Sparta Praha in Czechoslovakia and, after 17 years, won the title of Czechoslovak champion again.
On 1 January 1993 the independent Slovak Republic was formed, and Slovan thereafter played in the national Slovak league. As the best Slovak team, Slovan won three national titles in a row and clearly dominated the Slovak competition. In 1996 the club received an invitation to a tournament in Spain, where it won the Ciutat de Cartagena Trophy, beating FC Barcelona 2:1. In the 1998/1999 season Slovan won the double, taking the Slovak Cup as well. Later, in 2004, the club's poor economic situation culminated in relegation to the second division again. After returning to the first division, hard work and the will to bring Slovan back to its former glory paid off, and Slovan became champion in 2009. That year, due to technical circumstances, Slovan Bratislava played its last game at its home stadium, Tehelné pole, and moved to the nearby Pasienky stadium. In the 2010/2011 and 2012/2013 seasons, Slovan's players were crowned national champions and winners of the Slovak Cup as well. Under head coach Vladimír Weiss, the players also succeeded in August 2011 in eliminating Italy's AS Roma to reach the group stage of the UEFA Europa League. In Group F, Slovan played Paris Saint-Germain, Athletic Bilbao, and Red Bull Salzburg. Another national title followed in the 2013/2014 season, and Slovan has been the most successful club in the independent Slovak league since 1993.
Who is Bernie Madoff, and how did he pull off the biggest Ponzi scheme in history?

These questions have fascinated people ever since the news broke about the respected New York financier who swindled his friends, relatives, and other investors out of $65 billion through a fraud that lasted for decades. Many have speculated about what might have happened or what must have happened, but no reporter has been able to get the full story, until now.

In The Wizard of Lies, Diana B. Henriques of the New York Times, who has led the paper's coverage of the Madoff scandal since the day the story broke, has written the definitive book on the man and his scheme, drawing on unprecedented access and more than one hundred interviews with people at all levels and on all sides of the crime, including Madoff's first interviews for publication since his arrest. Henriques also provides vivid details from the various lawsuits, government investigations, and court filings that will explode the myths that have come to surround the story.

A true-life financial thriller, The Wizard of Lies contrasts Madoff's remarkable rise on Wall Street, where he became one of the country's most trusted and respected traders, with dramatic scenes from his accelerating slide toward self-destruction. It is also the most complete account of the heartbreaking personal disasters and landmark legal battles triggered by Madoff's downfall, including the suicides, business failures, fractured families, and shuttered charities, and of the clear lessons this timeless scandal offers to Washington, Wall Street, and Main Street.
First Edible Garden, installed by Green Coaches, will be at 3815 Palm Tree Blvd. Santiago De Choch, owner of Green Coaches, is looking forward to installing more of the vegetable gardens throughout the area. Stop by and have a look. For information, call 839-1239.
Spanish language story about an Argentinian cycling around the world.
Pablo García is no ordinary Buenos Aires native. In 2001, at age 27, he decided he wanted to travel the world, and without much hesitation he quit his job at a travel agency and said goodbye to his life in Buenos Aires to embark on a fantastic, almost literary adventure.
Since the start of his journey he has crossed 61 countries with his 70-kilo bicycle, and he proudly carries the flags of those countries on two masts attached to his vehicle.
The latest little flag he hung, placed in the highest position, belongs to Pakistan, the country he is now visiting and from which he will soon cross into India to continue his itinerary through Asia.
Over these years García has lived through very difficult situations, such as facing elephants, getting lost in the desert, riding 150 kilometers through a plague of tsetse flies, and being attacked at machete point by members of an African tribe, as he recounted.
Today, with more than 72,000 kilometers in his legs, Pablo set off from Islamabad toward the Pakistani city of Lahore to continue his journey around the world.
"I am going to continue until I have completed the entire trip around the world. What motivates me is to keep experiencing new things," the Argentine said in conversation with the EFE news agency.
When he reaches India, the adventurer will meet up with his girlfriend, a young Italian woman, with whom he will pedal for a few months to keep moving toward his goal of conquering the globe. To do so, he estimates that some 50,000 kilometers and five more years of effort still remain.
Toni Ferrell and Bob Hale led a sizable number of bicycle enthusiasts in a "Ride of Silence" to pay tribute to cyclists killed or injured on the roads. The News-Press has a story and slideshow here.
Anyone who has faced the dilemmas of sidewalks that end suddenly, distances almost impossible to negotiate without a motor vehicle, and dangerous bicycle riding conditions in Lee County, knows that urgent action is needed.
Last Friday, May 15th, National Bike to Work Day, at the Old Lee County Courthouse, Dan Moser of BikeWalkLee, a coalition working to complete the streets in Lee County, presented commissioner Judah with a petition, endorsed by over 800 residents, for the County to work towards making our streets and roads safer for pedestrians and bicycle riders.
Dan Moser is with the Lee County Health Department in the Injury and Prevention Program as the Bike and Pedestrian Program Coordinator. He is also active at the Florida Bicycle Association. Mr. Moser has been an advocate for more walking and bike-friendly communities for a number of years, and we have a debt of gratitude with him for his tireless efforts. His, however, is just one voice, and we need more citizen involvement and grassroots action to present an alternative to the old tired ideas of the "growth-at-all-costs" crowd: smart, compact, walkable communities, better transit, and real alternatives to just driving everywhere.
In his comments, Commissioner Judah expressed support for the efforts of the group. "You are the mainstream", he told the crowd, as the trend, both nationally and worldwide, is towards a more rational use of energy through better urban planning and use of alternative transportation. He issued a quick recap of things that have been accomplished in recent years, but recognized that much remains to be done.
Every time I see yet another lane being added to the highway, another overpass, or another gas station being built, I can't help but think that in many communities, both in the first and the third world, you can leave your home in the morning riding your bike, get to the train station and onto the train with it, and reach pretty much any destination in a short amount of time. There's a lot of places that have figured out that buying some bread and milk, getting a haircut or taking the kids to school are chores that don't necessarily have to involve driving a car - your own two feet are enough. If they can do it, I know we can do it. In the meantime, let's support the efforts of people like Dan Moser, Toni Ferrell, Darla Letourneau and everybody at BikeWalkLee, to make our own Lee County more bike and pedestrian-friendly.
Hey! I am making a God of War comic video for my YouTube channel (Blue Pixel Productions). It is meant to be funny with a dark twist at the end and should be only a few seconds long. We will need two voice actors, one for Kratos and the other for Atreus. If you don't know about the new God of War game, here is a short summary: Many years have passed since Kratos took his vengeance against the Olympian Gods, and he now lives with his young son Atreus in Midgard. The game opens following the death of Kratos' second wife and Atreus' mother, Faye. Her last wish was for her ashes to be spread at the highest peak of the nine realms. Kratos and Atreus set out on their journey. We also need an artist to draw the art for the comic (just the characters and the background). Voice actors must have a high-quality mic with little to no background noise. The artist needs to be able to draw either digitally or on paper. Good luck!
After destroying Olympus and siring another child in Norway, Kratos becomes a more stoic and contemplative character, only bursting out in anger when antagonized or threatened. Although he is sometimes prone to outbursts when disciplining his son, he almost always manages to regain control of himself before doing any damage. He also accepts full responsibility for his actions in Greece, often exhibiting extreme sadness and regret, and at times even falling into a state of depression, when confronted with his past behavior. Kratos initially tries to hide his past from Atreus, both out of fear that he will disown him for it, and fear that he will try to imitate his actions.
"Hope is for the weak!"
Atreus is a friendly, curious child who is kind to others when he engages in conversation with them, believing that they should help people whether they be living or dead.
I need someone who can draw the characters and background. Please include one example of your work in your audition, or a link to a page with your art.
This pasta dish serves 4-6.
1. To make the meatballs: combine the bread crumbs, milk, and cream in a medium bowl and let stand until the crumbs are softened by the milk, about 3 minutes. Add the Romano, eggs, parsley, salt, thyme, sage, garlic powder, and black pepper, and mix well. Add the veal and pork. Using your hands, mix the ingredients together just until combined; do not overmix, or the meatballs will be heavy. Refrigerate for 15 to 30 minutes so the mixture can firm up a bit.
2. Using wet hands rinsed under cold water, and scooping about 2 tablespoons of the meat mixture for each meatball, divide and shape the meat mixture into 24 equal meatballs (you can use a food portion scoop, if you wish) and place them on a platter or baking sheet. Loosely cover the meatballs with plastic wrap and refrigerate for 20 minutes to 2 hours.
3. Meanwhile, start the sauce: Heat the oil in a large heavy-bottomed saucepan over medium heat. Add the onion and cook, stirring occasionally, until softened, about 3 minutes. Stir in the garlic and cook until it is fragrant, about 1 minute. Add the wine and bring to a boil. Stir in the tomatoes, water, oregano, red pepper flakes, and bay leaf. Bring to a simmer over medium-high heat, stirring often. Reduce the heat to medium-low and simmer, stirring occasionally, for about 20 minutes.
4. Heat the oil in a large heavy skillet over medium heat. Working in batches without crowding, add the meatballs and cook, turning occasionally and adding more oil to the skillet as needed, until they are browned, 6 to 8 minutes (Wait for a crust to form on the underside of the meatballs before turning them.) Using a slotted spoon, transfer the meatballs to a plate. Pour off the fat in the skillet. Add about ½ cup water to the skillet and bring to a boil, scraping up the browned bits from the bottom with a wooden spoon. Stir the deglazed mixture into the sauce.
5. Carefully add the meatballs to the sauce, making sure they are submerged in the sauce. (Add more hot water to the sauce if needed.) Adjust the heat so the sauce is simmering and partially cover the saucepan to keep the sauce from reducing too quickly. Cook, occasionally stirring to avoid scorching, until the meatballs are cooked through and the sauce has thickened slightly, about 45 minutes. During the last 10 minutes, stir in the basil.
6. Meanwhile, bring a large pot of salted water to a boil over high heat. Add the spaghetti and cook, according to the package directions, until al dente. Drain well and return the spaghetti to the cooking pot.
7. Using a slotted spoon, transfer the meatballs to a serving platter. Remove and discard the bay leaf from the sauce. Stir about 3 cups of the sauce into the spaghetti. Transfer the spaghetti to a serving bowl and top with the remaining sauce. Serve immediately with the meatballs, and the Romano passed on the side.
Substitute regular ground turkey (93 percent lean, not extra-lean ground turkey breast) for the veal.
Many fine cooks swear by this method, which skips the browning step and ensures tender, melt in your mouth meatballs. After the sauce has simmered for about 20 minutes, one at a time, drop the raw meatballs into the simmering sauce, letting each meatball cook for about 15 seconds to firm slightly before adding another. Be careful when moving the meatballs while making room for more in the saucepan to avoid breaking them. Once the meatballs have been added, adjust the heat so the sauce is simmering, and cook as directed.
|
0.999983 |
A user has reported that their laptop is not charging and often turns off, even when the charger is plugged in. Which of the following is most likely the problem?
If the battery is not charging and the laptop powers off when plugged in, it is most likely an issue with the power cord or DC jack. Remember: Laptops do not convert AC->DC. The power is converted by the power cord before reaching the laptop's DC jack.
|
0.934439 |
How about a "real" button on the iPhone 7?
I've had one of the newer MacBook Pros for about 6 months now and those laptops have a new trackpad that is no longer a physical button. The trackpads on these laptops are just a non-moving, slightly recessed piece of aluminum (or Al-U-MEN-E-UM, as Jony Ive likes to say). But these trackpads have a bit of magic to them. When the MacBook Pro is powered on and you press down on the trackpad you would swear the trackpad moves down and "clicks" just like the old mechanical trackpads would do. I tried this with my kids and they all thought I was messing with them. But then I shut the MacBook Pro off and had them press the trackpad and sure enough it didn't move. This "fake click" is called haptic feedback. It is essentially a very short and sharp vibration that is triggered each time you press down on the trackpad to mimic the old mechanical action of the trackpad...and it is extremely convincing.
Fast forward to the iPhone 7. One of the changes to the iPhone this time around was that they changed the home button from the physical button that it has always been to just a slightly recessed pressure-sensitive circular area...it is no longer a physical button. Apple did this for a couple of reasons. The main reason was so that they could make the phone more "waterproof." The other reason is that physical buttons wear out over time, so this is just one less thing that people need to worry about breaking on their iPhone. Believe it or not this is actually a "thing" outside of the U.S. So much so, in fact, that people in other countries would use some of Apple's accessibility features to avoid pressing the home button on their iPhones, ensuring the button would not break.
So what's it like to actually use the new iPhone 7 now that Apple has killed the physical home button? Well...it's different. If you handed someone the iPhone 7 and they had never heard of an iPhone or touched one before (maybe someone from Mars?) they might mistake the home button for just a very stiff non-responsive physical button, but I doubt it. The new iPhone 7 home button is no MacBook Pro trackpad. It is very obviously not a button anymore. That being said, if you put aside the initial negative reaction to the change there are some positives. For one, the response from pressing the home button is very crisp and is exactly the same every time. With some of my past iPhones the button would sometimes not feel the same every time I pressed it. I would certainly feel the difference between my iPhone and someone else's. It always gave me the impression that the button was flimsy or changing over time, even though through the 9 years of using various iPhones I have NEVER had a home button fail on me. I also really appreciate having an iPhone that is more waterproof. I live in Florida and it is not unheard of to get caught in a brief downpour that comes out of nowhere. If you are out on a walk when this happens with your iPhone it could mean the death of your $1000 phone. That is no longer a concern.
It doesn't feel like a button. The haptic feedback just isn't very convincing. Maybe Apple will be able to improve this over time but they aren't fooling anyone right now.
When your iPhone is powered off or the operating system is not responding the home "button" doesn't work. In fact, in order to do a hard restart of your iPhone you now have to press the on/off button and the volume down button at the same time (this used to be the on/off button and the home button). Not a big deal, but if you don't know the new button combination and you need to restart your phone it can be a problem.
Overall, I really like the change. I like the firm and crisp response I get from having a non-physical button. But I realize this is not for everyone. I have more of an industrial taste with things...sleek and streamlined. So I like eliminating a mechanical item on my iPhone that has the potential to break or allow water into my device. The haptic response could definitely use some improvement, but it does the job and I suspect I and many other people will get used to it over time. I also think Apple will improve the haptic response feel (with some improvement simply from a software upgrade). In a few years people won't even remember a time when Apple used physical buttons on their iPhones. But make no mistake, this is big change from having a physical button. For many people this is going to seem like a downgrade. If you think you may be one of those people I highly suggest you try out an iPhone 7 in a store or play with a friend's phone before purchasing one yourself. The iPhone is a very personal device. As an iPhone user you will probably end up pressing that home "button" hundreds of times a day and if that becomes a negative experience for you it might be wise to delay switching to the new button design until Apple is able to improve the haptic feedback and make it feel more like a physical button. For the rest of you, enjoy the crisp non-mechanical response of the new button and the improved waterproof characteristics of your iPhone 7.
|
0.972162 |
When is the next solar eclipse in Iraq?
The following table lists all solar eclipses whose path crosses Iraq. It is crucial that the eclipse path touches the country: solar eclipses that can only be seen partially are not considered in the table below. In other words, the following table shows all solar eclipses whose totality or annularity can be seen in Iraq.
The next partial solar eclipse in Iraq is in 425 days on Sunday, 06/21/2020.
The next total solar eclipse in Iraq is in 22779 days on Wednesday, 09/03/2081.
The next annular solar eclipse in Iraq is in 27120 days on Thursday, 07/23/2093.
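The "in N days" countdowns above are plain date arithmetic. As a minimal sketch, the following Python snippet reproduces those figures, assuming the page's reference date was 2019-04-23 (an inference back-solved from "425 days before 06/21/2020", not stated in the text):

```python
from datetime import date

# Assumed reference date, back-solved from the "in 425 days" figure.
reference = date(2019, 4, 23)

# Eclipse dates taken from the listing above.
eclipses = {
    "partial": date(2020, 6, 21),
    "total": date(2081, 9, 3),
    "annular": date(2093, 7, 23),
}

for kind, when in eclipses.items():
    days = (when - reference).days
    print(f"Next {kind} solar eclipse in Iraq: in {days} days ({when:%m/%d/%Y})")
```

Run against that assumed reference date, this prints 425, 22779 and 27120 days, matching the three figures above.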
|
0.918346 |
Is it harder to get to the HN front page now than it used to be? This is a very difficult question to answer. Are points on HN worth less today than they used to be? That question is easier to answer: yes.
Here's a graph showing the median score of HN front page submissions over time. The dots represent the median score of a front-page submission in a given month, with the orange line (6-month rolling average) showing the general trend. The grey area represents the upper and lower quartiles of front-page scores.
The median today, in 2018, is around 150 points, double what it was when I joined the site in 2011. With a bit of hand-waving, we might be able to claim that "HN points are worth half as much in 2018 as they were in 2011".
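For anyone who wants to reproduce this kind of chart, here is a minimal sketch of the computation in plain Python; the per-submission scores below are made-up stand-ins, not real HN data:

```python
import statistics
from collections import defaultdict

# Synthetic stand-in data: (month, final score) per front-page submission.
submissions = [
    ("2018-01", 120), ("2018-01", 180), ("2018-01", 95),
    ("2018-02", 150), ("2018-02", 210),
    ("2018-03", 140), ("2018-03", 160), ("2018-03", 300),
]

# Median score per month (the dots in the chart).
by_month = defaultdict(list)
for month, score in submissions:
    by_month[month].append(score)
medians = {m: statistics.median(s) for m, s in sorted(by_month.items())}

# Rolling average of the monthly medians (the orange trend line;
# the real chart uses a 6-month window).
def rolling_mean(values, window=6):
    return [statistics.mean(values[max(0, i - window + 1): i + 1])
            for i in range(len(values))]

trend = rolling_mean(list(medians.values()))
print(medians)  # {'2018-01': 120, '2018-02': 180.0, '2018-03': 160}
```

The upper and lower quartiles (the grey band) would be computed the same way per month, e.g. with `statistics.quantiles(scores, n=4)`.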
I'm not sure what's caused these patterns; it's hard to find good, easily accessible proxies for Hacker News visitor counts and user counts, and Google Trends doesn't shed much light on these things.
|
0.999102 |
A month ago the iPhone 5 went on sale in the first wave of countries, and a week later Apple's new smartphone saw the light in other territories such as Spain. However, today it is still almost impossible to get one of the terminals, as hardly any units are available for sale.
Initially Sharp, the manufacturer of the phone's screen, admitted that it had trouble meeting the deadlines imposed by Apple, but two days later it confirmed that it would meet them.
So, what could be the problem behind the lack of stock of the iPhone 5 in countries like Spain? Foxconn takes responsibility. According to an executive at the factory, "among all the devices that have passed through Foxconn, the iPhone 5 has been, without doubt, the most difficult to assemble." Why? Because the phone is so small and light that it makes manufacturing tasks difficult.
This same executive has said that as time passes, employees are becoming more skilled at making the terminal and, therefore, should begin to meet demand soon.
Article The lack of stock of iPhone 5 is due to its complex manufacturing process was originally published in News iPhone .
|
0.957446 |
When you have a cycloalkane with more than one substituent, how do you number the ring when all the corners of the molecule have a substituent? Is it alphabetical order?
It should be alphabetical, combined with the lowest possible numbers.
Do the numbers need to be lowest according to alphabetical order or is the alphabetical order only relevant when arranging the name of the molecule?
If there are two substituents on the cycloalkane then the substituent with the earlier name alphabetically is given the first number.
If there are more than two substituents, then alphabetical naming is irrelevant to the numbering (only to the name) and the numbers are arranged so that the possible name with the lowest given numbers is chosen. The "in-between" numbers should be the lowest possible.
It is alphabetical, and whichever combination of numbers allows for the lowest numbers is the correct name.
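The "lowest numbers at the first point of difference" rule lends itself to a short sketch. The helper below is an illustration written for this answer (not standard IUPAC software), and it ignores the alphabetical tie-breaking discussed above; it just enumerates every possible numbering of a ring and picks the lowest locant set:

```python
def candidate_numberings(positions, ring_size):
    """Every way to number a ring: each corner taken as position 1,
    going clockwise or counterclockwise (input positions are 0-indexed)."""
    for start in range(ring_size):
        for step in (1, -1):
            yield sorted((p - start) * step % ring_size + 1 for p in positions)

def lowest_locants(positions, ring_size):
    # Sorted lists compare lexicographically, i.e. at the first point
    # of difference, so min() implements the lowest-locant rule directly.
    return min(candidate_numberings(positions, ring_size))

# Substituents on corners 0, 1 and 3 of cyclohexane:
print(lowest_locants([0, 1, 3], 6))  # [1, 2, 4], which beats e.g. [1, 3, 4]
```

This matches the point above: 1,2,4 is preferred over 1,3,4 because they first differ at the second locant (2 < 3).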
|
0.933535 |
Romer Zane Grey (October 1, 1909 – March 8, 1976) was the eldest son of novelist Zane Grey. Romer was born October 1, 1909 at Lackawaxen, Penn. Zane and Dolly Grey had three children: Romer, Betty, and Loren. Romer was named after an uncle, Romer Carl Grey, known as Reddy Grey. In his youth Romer was very much "a chip off the old block." He went on a number of his father's expeditions into the wild areas and also on many of his fishing trips. Romer was very much into hunting, shooting, and fishing. See, for example, Zane Grey's "Book of Camps and Trails" and Romer's own two fishing books listed below.
It was Romer who suggested to his father the idea for the novel Western Union, and it was Romer who did much of the research that went into the book. In addition, it was Romer who wrote the so-called "Big Little Books," although they bore his father's name. He also wrote stories for comic strip King of the Royal Mounted and the film serial King of the Mounties. He also developed the comic strip "Tex Thorne".
The WWA dedicated the January, 1972 issue of The Roundup as a "Zane Grey Centennial Issue" and printed an article by Romer Grey about his dad's methods of research and writing.
Many years later, a discovery in a basement yielded the remnants of the Grey animation studio; one that hired many "A-list" animators at the time from the studios of Disney and Looney Tunes. Despite many sketches, cels, and drawings in the basement find, there was not a foot of film that was intact. Records indicated that Arabian Nightmare and Hot-Toe Mollie were, however, ready to be filmed before the studio shut down.
In 2013, a 35mm print of Hot-Toe Mollie surfaced elsewhere, showing that at least one short made it to film.
Romer married his first wife, Dorothy Olson of Glendale, CA, when she was aged nineteen years, in 1930. They had one child but divorced a few years later. He had three subsequent marriages. Romer Zane Grey was a pilot in the Pacific during World War II, and was president of Zane Grey, Inc., a business which he operated out of his home on El Nido St. in Pasadena, Calif. He was the author of two books on fishing (his father's favourite sport), and he wrote numerous short stories and articles. His article, "From Purple Ink to Purple Sage," was a highlight of a special "Zane Grey Centennial Issue" of The Roundup in January, 1972. An article in the September, 1976 issue of The Roundup relates the findings of a story in the Pasadena News that Romer Grey was a virtual recluse and that he always found it hard to live in the shadow of his famous father. Romer Zane Grey suffered a stroke on February 23 and was admitted to the Huntington Hospital in Pasadena, Calif., where he died on March 8, 1976, at the age of 65, leaving behind his third and last wife, Octave B. Grey, his son Romer Grey Jr. and five grandchildren. He was also survived by a younger brother, Loren Grey, a professor of psychology at Valley State College in California, and by a sister, Betty Zane Grosso.
Romer wrote a number of western novels and in some cases re-used the characters created by his father.
"The Cruise of 'The Fisherman'." "The Fisherman" was a large boat owned by Zane Grey in which he cruised to many parts of the world for fishing.
^ Hutchison, Don (1998). The Scarlet Riders: Action-Packed Mountie Stories from the Fabulous Pulps. Mosaic Press. p. xiii. ISBN 9780889626478.
^ Hayes, R.M. (2000). The Republic Chapterplays: A Complete Filmography of the Serials Released by Republic Pictures Corporation, 1934-1955. McFarland. p. 66. ISBN 9780786409341.
^ F. Lowery, Lawrence (2007). The Golden Age of Big Little Books. Educational Research and Applications LLC. pp. 130–131, 234. ISBN 9780976272489.
Lina Elise, Roth; Grey, Zane (2011). Dolly and Zane Grey: Letters from a Marriage. University of Nevada Press. ISBN 978-0874178623.
|
0.999993 |
What is clean eating? There are a variety of clean-eating experts out there, and there are different opinions. But one thing that we can all agree on is avoiding processed food.
I see processed food this way: you take a single food and it falls on a continuum. On the far left, you have an apple you’ve plucked from a tree. There’s no processing going on there. But then the farther down you go, maybe the skin is removed, maybe it’s juiced, maybe the fiber is removed, and that food travels farther down the processed food continuum into maybe an applesauce or a fruit leather.
Typically, the less processed an item, the more naturally occurring nutrients it contains, whether that be fiber or Vitamin C. Processing means you can end up with less nutritious foods.
You can take this a step farther, which I do, which is looking for food free of artificial colorings, artificial sweeteners, and preservatives. There is mounting evidence that some of these additives may be detrimental to our health, though more research needs to be done.
The second part of clean eating is choosing local, seasonal foods when possible. For example, when it’s June, this is the time of year you want to opt for strawberries and stone fruit because they are domestically grown. Choose produce that is grown here in the United States, and ideally choose produce closer to home. Compare that to eating butternut squash right now during the summer, or asparagus in January. It’s typically traveling a long way from South America.
Finally, we want to choose a colorful array of foods. Eating a rainbow of foods means choosing a lot of fruits and vegetables. That really should be the foundation of your diet, supplemented with whole grains, dairy or dairy alternatives and protein. Animal protein can come from sources such as chicken, pork or beef. Plant based protein can come from legumes, which I am certainly an advocate of, as there are many benefits to going meatless.
|
0.995126 |
Climate change is neither a new concept nor one without evidentiary basis. Thankfully the aforementioned statement is receiving less critique and speculation than in the past. Nevertheless climate change is still being referred to as ‘the faceless villain’, due in large part to the predominant concern with respect to the environmental foundation of the phenomenon and the unfounded belief that there are no true victims. And so I pose the question: if faceless, whom do the many victims of the disasters fuelled by climate change hold responsible?
When referring to climate change crises there are two broadly discussed categories: natural and conflict-related. It is readily recognised that natural disasters have increased in incidence with the ever-worsening effects of global warming1, 2. As a result of these disasters, there has been a substantial increase in the number of people displaced from their homes3. Despite this fact, it is only recently that such people have begun to be recognised as refugees. Environmental refugees have been described as those whose environment has been transformed into one unsuitable for habitation4. Estimates suggest that there will be over 200 million environmental refugees by 2050 as a result of the impacts of climate change5. To put that into context, currently more than 550 million people experience chronic shortages of water6 and 135 million people are endangered by desertification7. Both a lack of water and the degradation of ecosystems pose serious threats to human health and create unsuitable living conditions. Despite the improved recognition of climate change as a whole, the United Nations has yet to recognise those forced to migrate as a result of environmental change as having official refugee status8. As in many situations, the line becomes blurred when climate change may be a precipitating factor in conflict, such as in the case of the war in Darfur9, and it is now being speculated as a causative agent in the Syrian Civil War10.
The health impacts of climate change have been categorised as primary, secondary and tertiary effects. Socioeconomic disruptions and the subsequent physical and mental health issues that result from climate change are referred to as tertiary health impacts of climate change11. The Intergovernmental Panel on Climate Change (IPCC) has proposed several links between climate change and conflict. A critical explanation is founded on resource depletion, which furthers economic instability and may result in large-scale violence12. Another major theory, explored by the Stern Review, concerns forced migration as a result of climate change13. Although evidence supporting these theories is somewhat limited at this stage, there is data demonstrating that drought-related conflicts in low-income regions, which can be exacerbated by climate change, have been increasing in incidence14.
As an Australian citizen but foremost as an immigrant, I am proud of Australia as the country I grew up in and am grateful for the many opportunities with respect to education, living situations and health. However, I am shocked and even horrified by Australia’s current position on critical issues, including climate change and refugee policy. The Migration Act deems that, without appropriate visas and documentation and regardless of the reasoning, asylum seekers must be held in detention until either deportation or the granting of a visa, with no set timeframe limitation15. In two of Australia’s foremost immigration detention centres, Nauru and Manus Island, the latter of which has recently been set for closure, living conditions have been described as inhumane15-20, with the United Nations High Commissioner for Refugees calling for the immediate transfer of detainees21. Furthermore, Australia is one of the world’s highest per-capita emitters22, and its climate change policies are often criticised not only as lacking ambition but also as disastrous23. The Suva Declaration24 outlines the Pacific Islands’ concern regarding their future as a result of the impacts of climate change and their call for greater action on the international scale, a major feature of which is a call for no new coal mines. A critical situation that drove the formation of this declaration is that of Kiribati which, much like other low-lying island nations, faces rising sea levels that threaten to submerge it25. In a statement made at the 70th Session of the United Nations General Assembly by the Prime Minister of Fiji, Commodore Bainimarama outlined plans already being employed to relocate low-lying villages already experiencing some of the most severe consequences of climate change.
Whilst imploring further action against climate change in the spirit of peace and humanity, Bainimarama likened the prospective resettlement of the South Pacific Island States to that of “the fleeing conflict in Syria and Iraq”26. Despite being a neighbour to many of the signing countries, including the Cook Islands, Kiribati and Nauru, Australia has rebuffed adoption of the historic declaration.
The current situation regarding climate change is tenuous with political disagreement on action rampant. The 21st Conference of Parties on Climate Change aimed to provide a thorough international agreement on climate change mitigation and adaptation in order to keep global warming below 2oC above pre-industrial levels. Australia not only has an intended nationally determined contribution (INDC) that has been deemed inadequate in reaching the intended target27 but also was ranked third-last in the Climate Change Performance Index28. Moreover independent analysis by the Climate Action Tracker has found that current implemented Australian policy projections would see emissions rising far beyond the 2030 INDC target27. As future health professionals that will be treating those experiencing the effects of climate change, it is our responsibility to be the voice of its seemingly faceless victims.
Sauerborn R, Ebi K. Climate change and natural disasters: integrating science and practice to protect health. Glob. Health Action. 2012;5:1–7.
Moore T. Global warming The good, the bad, the ugly and the efficient. EMBO Rep. 2008; 9(1):41-45.
Hunter L. Migration and Environmental Hazards. Popul Environ. 2011; 26(4):273-302.
El-Hinnawi E. Environmental Refugees. Nairobi: United Nations Environment Programme; 1985.
Myers N. Environmental refugees: a growing phenomenon of the 21st century. Philos Trans R Soc Lond B Biol Sci. 2002;357(1420):609–613.
Gleick P. The world’s water 2000–2001: biennial report on freshwater resources. Washington, D.C., Island Press; 2000.
UNDP. Human Development Report 2000. United Nations Development Program. New York: Oxford University Press; 2000.
UN General Assembly. Convention and Protocol Relating to the Status of Refugees. Geneva: UNHCR; 1951.
Burke M, Miguel E, Satyanath S, Dykema J, Lobell D. Warming increases the risk of civil war in Africa. Proc Natl Acad Sci USA. 2009; 106(49):20670-4.
Gleick P. Water, Drought, Climate Change, and Conflict in Syria. Am Metorol Soc. 2014;6:331-340.
Butler C, Harley D. Primary, secondary and tertiary effects of eco-climatic change: the medical response. Postgrad Med J. 2010; 86:230-234.
McCarthy J, Canziani O, Leary N, Dokken D, White K, editors. Climate Change 2001: Impacts, Adaptation, and Vulnerability. Contribution of Working Group II to the Third Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, UK: Cambridge Univ Press; 2001.
Stern N. The economics of climate change. Stern review. London: HM Treasury; 2006. Available at: http://www.hm-treasury.gov.uk/independent_reviews/stern_review_economics_climate_change/stern_review_report.cfm.
Nyong A, Fiki C, McLeman R. Drought-related conflicts, management and resolution in the West African Sahel: considerations for climate change research. Erde. 2006;137(3):223-248.
Sanggaran J, Ferguson G, Haire B. Ethical challenges for doctors working in immigration detention. Med J Aust. 2014; 201(7): 377–378.
Suva Declaration on climate change. Suva: Pacific Island Development Forum Secretariat, 2015. Available from: http://pacificidf.org/wp-content/uploads/2013/06/PACIFIC-ISLAND-DEVELOPMENT-FORUM-SUVA-DECLARATION-ON-CLIMATE-CHANGE.v2.pdf [cited 2016 Jun 13].
Schmidt C. Keeping Afloat: A Strategy for Small Island Nations. Environ Health Perspect. 2005; 113(9):606-609.
|
0.999999 |
How long would it take someone to die of hypothermia if they were trapped in a walk-in freezer?
Hypothermia is a condition where your body core temperature is too low to sustain your health. Hypothermia is a process of body heat loss, and sometimes a rapid one, that progressively debilitates your physical and mental abilities. Any temperature less than 98.6 degrees could cause it, but usually it occurs at temperatures that we consider "cold" (50 degrees F and less).
Most cases of hypothermia tend to occur at temperatures from 30 degrees F to 50 degrees F. However, environmental and physical conditions such as wet, wind, and exhaustion can cause hypothermia at higher temperatures and aggravate the severity of the condition at lower temperatures. Hypothermia can range from a level of discomfort to death and should never be ignored.
Prevention is doubly important since hypothermia can quickly start to affect your brain's capacity to think straight. When heat loss threatens your body temperature's equilibrium, your body's demand to protect its vital core can result in as much as a 99% decrease in blood flow to the toes and fingers. Ultimately, your body will decide to shut your brain down to a state of unconsciousness in order to keep vital heart functions.
|
0.999984 |
How to be a boss and a friend?
We all know one way of motivating staff is building friendships. So how does one be a boss and friend? First up, treat everyone as equally as possible. Next, be honest and direct as you would with a friend. Ensure that all voices are heard, remembering it’s a manager’s job to bring out the best in every team member. Lastly, try to avoid connecting with staff on social media, it allows them some privacy and will keep your relationship on a friendly, professional level.
|
0.999972 |
If you are planning on swimming for exercise, you should warm up before you begin your workout.
– Increases blood flow to the muscles in your body.
– Increases your heart rate to prepare your body for exercise.
– Decreases stiffness in your joints.
– Increases range of motion of your shoulders and legs.
After your swimming workout, you should perform stretching exercises to prevent muscles soreness.
|
0.999514 |
Travelers who are lucky to enjoy a visit to the best beaches in southeast Asia should make sure not to miss a vacation highlight: a luxurious beach brunch. Enjoying a glamorous brunch while taking in beautiful oceanside views is one of the most popular things to do in southeast Asia.
Potato Head Beach Club is one of the best beach restaurants in Bali. The restaurant has one of the chicest locations on the island, with fantastic views of the water as well as overlooking the elegant pool area. The brunch offers an irresistible array of Asian and international options. Potato Head is best known for delicacies that span both sea and land, and guests with all manner of tastes will find an appealing selection. A variety of cocktails is also available for those wanting to celebrate a special occasion or simply live the high life!
Beach Republic is one of the best beach clubs Koh Samui has to offer, featuring modern Mediterranean design throughout its Ocean Club and Restaurant, Beach Republic Asian Fusion Spa, and sophisticated guest accommodations such as suites and pool villas. The famed Sunday Sessions brunch at Beach Republic has gained renown across Koh Samui as a vibrant weekly institution for both locals and travelers. Many call this the best brunch in Koh Samui thanks to its wide range of tantalizing dishes, but best of all there is always a great DJ on hand to provide the perfect beach brunch soundtrack.
Brunch at the Intercontinental Hotel Danang Sun Peninsula is the height of luxury. The resort is situated within the Son Tra Peninsula Nature Reserve and offers a stunning location atop a private bay. The Citron restaurant brings fine dining to this natural paradise. Citron takes brunch to new heights by offering a pairing with luxury Billecart-Salmon champagne. The resort was designed by architect Bill Bensley and the otherworldly architecture will provide the perfect forum for an unforgettable meal. Citron offers an extravagant buffet featuring the freshest seafood and of course, decadent desserts.
Martini Beach restaurant, in a charming little cottage on Occheteuil Beach, offers the perfect change of pace for those looking for a brunch with Italian inflections. Guests will have the pleasure of lying on beach chairs while nibbling at perfectly prepared Italian treats such as cured meats and pastas. Occheteuil Beach is a great backdrop to do a little swimming and enjoy the sun while lazing away the afternoon. One of Sihanoukville’s most popular beaches, Occheteuil is always buzzing.
Located on the best stretch of beach in Sentosa, Tanjong Beach Club is an idyllic retreat from Singapore though it is located only minutes from the city center. The Tanjong Beach Club combines mid-century modernism with classic colonial architecture. The club offers both leisurely and hearty weekly brunch and more specialized theme brunch parties. Known for attracting excellent DJ talent, the Tanjong Beach Club is the ideal blend of lively and laid back.
Those looking to recover from a night of partying along the beach in Boracay would be well served to select Star Lounge at The District for a restorative meal. The friendly environment caters to fun loving travelers and The District offers a wide array of brunch delights in a beautiful setting with both outdoor and indoor seating. The stunning views will only enhance the international selection of dishes focusing on fresh and local ingredients.
|
0.979448 |
CAR-T Cell Therapy: Will First To Market Mean First To Disappoint?
Gilead and Novartis have launched the first two autologous CAR-T cell therapies. Combined, their first year revenues were less than $250 million.
Significant manufacturing expansion plans have been announced despite a ramp constrained by pricing and reimbursement.
Despite increased risk, treatment is being shifted to out-patient settings to limit losses by containing cost.
So far, the oncology community has greeted CAR T cell therapy with extraordinary enthusiasm. We've had few effective treatment options for difficult-to-treat blood cancers like diffuse large B-cell lymphoma. The newest data from ASCO continues to suggest that CAR T cell therapy may represent an important advance for some patients.
Autologous CAR-T, short for chimeric antigen receptor T-cell, therapy is being pursued by dozens of gene therapy companies. Novartis (NVS) and Kite, a Gilead (GILD) company, were the first to market in late 2017 launching Kymriah and Yescarta, respectively. Now, a year later, the question to ask is will the first to market become the first to disappoint? My first article entitled Genomic Medicine: Catch the Gene Therapy Wave is a primer providing context for this article.
Both companies have led off their commercialization efforts with a focus on treatment center coverage and manufacturing capacity. As of September 30, 2018, there were 63 cancer centers for Kymriah and 64 for Yescarta.
Yescarta for refractory large B-cell lymphoma has a manufacturing turnaround of 17 days. They estimated their El Segundo, CA factory would have annual capacity of 4,000 to 5,000. This year, Gilead announced plans to build an EU CAR-T manufacturing facility along with expansion plans in the US.
Kymriah for acute lymphoblastic leukemia has a manufacturing turnaround of 22 days. Novartis has announced plans to build EU capacity in Germany and Switzerland. They have also signed a deal to manufacture in China. Novartis reported the first manufacturing batch control mistake which led to a fatal relapse.
The research, clinical and commercialization expenses were substantial. Add in the investment required to build manufacturing and treatment capacity, and it appears obvious that costs are dwarfing the minimal revenues achieved since launch. That said, the Novartis pipeline indicates they are "all-in" on CAR-T.
These medicines are not pills being manufactured by the millions with increasing economies of scale. These are expensive medicines that have been called "living drugs". The autologous process requires harvesting a patient's T cells in a process called apheresis. These T cells are engineered to produce CARs on their surface, becoming CAR-T cells. In two to three weeks, the cells have been multiplied in a lab and are ready to be infused back into the patient to seek out, recognize and kill cancer cells. This process can be seen in the above graphic.
How much is financial loss stopping centers from adopting Yescarta?
CAR-T has been commercially launched in an era of escalating price concerns. Price gouging has been added to the lexicon of the pharmaceuticals industry from the EpiPen pricing from Mylan (NASDAQ:MYL) to the antics of Martin Shkreli. Politicians are successfully using price control sound bites to generate support. Despite the desperate need of patients and the clinical evidence of efficacy, these treatments are too expensive to be priced low enough to escape public attention.
Gene and cell therapies also present an insurance coverage dilemma. One of the cornerstones of reimbursement is a determination of cost effectiveness. These are cutting-edge medicines that cannot today prove durability, which is an essential input variable in this equation. The absence of a US single-payer system creates insurer concerns that future treatment savings could accrue elsewhere if a patient changes to another provider.
Inpatient Payment: The current FY 2019 national Inpatient PPS payment rate for MS-DRG 016, into which CAR-T cases are grouped, is approximately $39,000. For providers subject to IPPS, the payment may be augmented by the full New Technology Add-on Payment or NTAP, and they may potentially also receive an outlier payment. The total payment providers are likely to receive will still leave the vast majority of inpatient CAR-T cases as substantially under-paid, given the high product acquisition cost.
New Technology Add-on Payment: While helpful to supplement a relatively low base payment rate, the NTAP mechanism is problematic in several ways for drugs acquired at a high cost, such as CAR-T.
In the case of CAR-T, the maximum amount a center could receive for an NTAP payment is $186,500. This amount is a significant improvement over the MS-DRG 016 base payment of $39,000, but it is still $186,500 short of the acquisition cost that each provider is currently paying the manufacturers in order to deliver the intervention to a patient in need.
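As a back-of-the-envelope illustration of the payment gap described above, the arithmetic can be sketched as follows. The base MS-DRG rate and maximum NTAP are the figures stated in the text; the acquisition cost is an assumed value inferred from the stated $186,500 shortfall, not a number given directly by the source.

```python
# Illustrative sketch of the inpatient CAR-T payment gap described above.
# BASE_DRG_016 and MAX_NTAP are figures stated in the text; ACQUISITION_COST
# is an assumed value implied by the stated "$186,500 short" shortfall.
BASE_DRG_016 = 39_000       # approximate FY 2019 national IPPS rate, MS-DRG 016
MAX_NTAP = 186_500          # maximum New Technology Add-on Payment
ACQUISITION_COST = MAX_NTAP + 186_500  # implied: NTAP still leaves $186,500 uncovered

shortfall_vs_ntap = ACQUISITION_COST - MAX_NTAP
total_with_base = BASE_DRG_016 + MAX_NTAP

print(f"Implied acquisition cost: ${ACQUISITION_COST:,}")     # $373,000
print(f"Shortfall after NTAP alone: ${shortfall_vs_ntap:,}")  # $186,500
print(f"Base DRG payment + full NTAP: ${total_with_base:,}")  # $225,500
```

Even combining the base DRG payment with the full NTAP, the total remains well below the implied product acquisition cost, which is the access-to-care problem the section describes.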
Access to Care Implications and Analysis: The financial losses associated with providing CAR-T treatment to Medicare beneficiaries are impacting access to care and will continue to do so unless the payment challenges are resolved.
Members have shared that their teams have felt compelled to consider one or more of the following treatment pathway modifications, due to the current payment systems: 1) Shifting some CAR-T therapy to the outpatient setting to recover product acquisition costs. 2) Choosing not to participate in the clinical studies associated with a Coverage with Evidence Development. 3) Electing not to provide commercial CAR-T products to any portion of their patient population.
A major concern of this therapy is its cost, which some estimates predict could go over $1 million per patient. Please talk about the cost vs value of CAR T-cell therapy.
We have reported on the very high response rates and durable remission rates in patients with ALL. We have treated more than 300 patients with tisagenlecleucel in clinical trials, and a lot of that information was used for FDA approval of the therapy.
CAR-T cell medicines are breaking commercial ground in a new era of genomic medicine. Autologous CAR-T is extending the lives of patients suffering from incurable diseases. Yet the access to these potential curative medicines is being constrained by their cost and an insurance system trying to come to grips with a new pricing paradigm. So far, this does not appear to be slowing development. For example, bluebird bio (BLUE) exercised its option to co-develop and co-promote bb2121 in the US, as part of their collaboration with Celgene (CELG). Bluebird disclosed they would likely also opt-in for the second product candidate bb21217. These disclosures speak well to program progress. They also beg the question: Would bluebird be better off avoiding the cost of commercializing until Novartis and Gilead have made more progress on pricing and reimbursement? First movers in any market are usually embraced by investors, but it remains unclear if first movers in autologous CAR-T will grow fast enough to avoid being labeled first to disappoint.
Disclosure: I am/we are long GILD. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
|
0.964277 |
Adrien Chopin, Pascal Mamassian, Randolph Blake; Transition between stereopsis and binocular rivalry is based on perceived, rather than physical, orientation. Journal of Vision 2011;11(11):301. doi: 10.1167/11.11.301.
When dichoptically viewed gratings differ slightly in orientation, they can still combine binocularly to yield perception of a surface slanted in depth. With larger differences in orientation disparity, fusion gives way to binocular rivalry characterized by perceptual alternations between the left and right eye gratings, with no depth. Can this transition point between stereo-fusion and rivalry be shifted by induction of illusory shifts in perceived orientation? We addressed this question using a variant of the Zöllner illusion: When parallel short lines (inducers) are added to a near-vertical grating, repulsion appears between inducer and grating orientations. If stereopsis uses the perceived illusory orientations, vertical inducers should increase the perceived orientation disparity of the gratings and horizontal inducers should decrease it. In contrast, if physical orientations are used, inducers should have no effect on the orientation disparity. Observers were asked to judge the slant of a grating composed of near-vertical contours. Orientation disparity was varied adaptively to estimate the transition in orientation disparity between stereo-fusion and rivalry. If this transition depends on perceived, rather than physical, orientation, gratings should become more often rivalrous with vertical inducers and more often fused with horizontal inducers. Seven of eight observers (six naïve) exhibited reliable differences in this depth/rivalry transition point between vertical and horizontal inducer conditions, indicating that rivalry and stereopsis can be generated from illusory orientations. A second experiment in which observers reported their subjective experiences of rivalry corroborated this finding. The magnitude of that difference was approximately twice the classical Zöllner illusion: this suggests that shifts in illusory orientation arise at a monocular level, before the resolution of rivalry and stereopsis, and add up between the eyes.
We are currently investigating whether comparable interactions occur when interocular differences are induced in motion direction, which is believed to be represented at binocular levels of processing.
Supported by NIH EY13358, grants from ED261 and Université Paris Descartes.
|
0.999962 |
On Saturday, June 9th, I finally made good on a conservation donation I made almost a year ago. Back in September I donated a day with me photographing nature to the Glacier-Two Medicine Alliance. The purpose of the donation was to help raise money for the organization and help it reach its ultimate vision: "A child of future generations will recognize and can experience the same cultural and ecological richness that we find in the wild-lands of the Badger-Two Medicine today." Fitting! Based on the request of the person who purchased the auction item - me for a day - we headed into Glacier National Park. Our first stop on that mostly cloudy and rainy day, as it turned out, was Saint Mary Lake and Wild Goose Island. That location is best photographed in the morning. As I was learning to photograph Glacier National Park, I once drove back and forth from Browning, MT, where I was living at the time, to Saint Mary Lake 25 times over the course of 25 days just to photograph it. On this outing, however, we had only one day to get it right!
|
0.999999 |
I'm taking care of a friend with stage 2 Alzheimer's. I was wondering if there is any easy way to give them their meds without them fighting me every time.
There are several strategies you can try when giving medication to someone who is resistant to taking it. One strategy is to crush pills and put them into food, such as applesauce, jam, smoothies or shakes, fruit cocktails, and other flavorful foods that can disguise the taste of the medicine. Another possibility is to ask the physician to prescribe the medication in liquid form if available, and put it in the person's morning juice or milk. Be sure to check whether the medication states that it should not be crushed or taken with food.
Another strategy is to use distraction while lifting the medication cup to the person's lips. For example, try distracting the person by commenting on their hair-do or a new hat or something on the television. This approach is more likely to work in the mid to late stages of Alzheimer's.
It is important not to fight with the person or be too insistent. Try using a persuasive tone and approach. Sit down facing the person, smile, and address her by name. Then say something like, "It's time for your vitamins," or, "(Person's name), would you like to go for a walk outside? (Pause for response.) Here are your vitamins before we go." If the person still refuses to take the medications, let it go and try again in an hour or so.
Another option might be to use medication patches if available. Discuss medication options with the person's physician.
|
0.952721 |
Solve similar problems in the same way.
Software design comprises many similar tasks. There are plenty of design decisions that are similar to ones taken before. UP holds that a design is good when similar design problems are solved in the same way. UP can be applied to a large variety of problems: naming identifiers, ordering parameters, deciding upon framework or library usage, etc.
Striving for consistency and always using the same solutions also means that it can be a good idea to apply a “bad” or less-well suited solution for the sake of consistency. If for example a bad naming scheme is used throughout the whole project, it is advisable not to break it as an inconsistency in the naming scheme would be worse than applying the bad naming scheme everywhere.
For documentation UP means to have a consistent documentation structure such that a certain piece of information can be found easily. Furthermore uniformity in naming schemes is especially important for documentation. When referring to the same concept the same word has to be used. Synonyms are a source of misunderstanding.
Following UP reduces the number of different solutions. There are fewer concepts to learn, fewer problems to solve and fewer kinds of defects that can occur. So the developers, whether the original ones or the maintainers, have an easier task in creating, understanding, and maintaining the software. By reducing variety in the design, the software becomes simpler (see KISS).
Documentation which follows a fixed structure helps you find a certain piece of information faster because as soon as you have understood the structure you know where to look.
UP demands solving similar problems in the same way and not just in a similar way. This is crucial as subtle differences can be dangerous. These small differences are created easily. Sometimes it is impossible to do two things exactly the same way. And also over time two modules may slowly diverge. So it is sometimes better to have two modules work completely differently than to allow for these subtle differences as they easily lead to misconceptions and mistakes (see ML).
This principle is newly proposed here. Nevertheless the idea is not new and should be pretty intuitive to every developer.
Murphy's Law (ML): A typical source of mistakes are differences. If similar things work similarly, they are more understandable. But if there are subtle differences in how things work, it is likely that someone will make the mistake to mix this up.
Note that UP can be contrary to virtually every other principle as it demands neglecting other principles in favor of uniformity.
Keep It Simple Stupid (KISS): Although UP normally reduces complexity, sometimes UP demands more complex solutions because they are already applied elsewhere and for the sake of uniformity shall also be applied in simpler contexts where they would not be necessary.
More Is More Complex (MIMC): Documenting something because of UP may result in unnecessary documentation. There may be more concise ways of documentation.
Model Principle (MP): UP may demand adhering to a certain naming scheme, which may not be best with respect to MP. See example 1: naming schemes.
Principle of Least Surprise (PLS): When applying UP, PLS should also be considered for naming modules. See example 1: naming schemes.
A typical example of the application of UP is the naming of method identifiers for common container classes like stacks or queues. This also shows that there are several ways to apply this principle.
Stacks typically have the methods push, pop and peek (sometimes also called top). push puts an item onto the stack, pop removes the top most item and peek retrieves the value of the top most item without removing it from the stack. This is how the common stack model describes this data structure (see MP). Applying UP to this naming decision means that the methods should be named precisely as they are named everywhere else also. So a developer knowing the model or other implementations of the model will immediately know how to use this module as well. In this case MP and UP demand the same thing. PLS is satisfied here as well as a developer knowing stacks will expect exactly that.
Queues, on the other hand, typically have the methods enqueue, dequeue, and peek (or front/first or the like). MP would demand naming the operations of a Queue module exactly that way. But there are several ways UP can be applied here. One way is to apply the principle just like above, resulting in the methods enqueue and dequeue. This is how it is done in .NET1). The other way is to consider the method identifiers of the Stack module. A possible application of UP could be to demand naming the queue methods just like the stack methods, meaning also push, pop and peek. This is the naming scheme chosen in the Delphi RTL2). Here MP and UP are contrary. A further downside of this approach is that push and pop methods might be surprising for a queue class. So PLS would oppose this solution.
A third possibility is to find a common abstraction and to apply a very general naming scheme to all descendant classes (stack classes, queue classes and others). This is the way it is done in Eiffel3). Here the method names are put, remove and item regardless of the concrete data structure. This is contrary to MP but creates a uniform naming scheme throughout the API. So there is less uniformity across APIs but stronger uniformity within the API. MP and UP are contrary here too. For PLS this means that a developer who is used to this philosophy is never surprised by having these methods. But developers new to it might be nevertheless.
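The three naming conventions discussed above can be sketched as follows. This is an illustrative Python sketch; the class and method names mirror the conventions described, not any real API.

```python
class Stack:
    """Model-faithful stack naming (MP and UP agree): push/pop/peek."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()          # remove and return the top item
    def peek(self):
        return self._items[-1]            # inspect the top item without removal

class Queue:
    """Model-faithful queue naming, as in .NET: enqueue/dequeue/peek."""
    def __init__(self):
        self._items = []
    def enqueue(self, x):
        self._items.append(x)
    def dequeue(self):
        return self._items.pop(0)         # remove and return the front item
    def peek(self):
        return self._items[0]             # inspect the front item

class Dispenser:
    """Eiffel-style uniform naming across containers: put/remove/item.
    One vocabulary for both FIFO (queue-like) and LIFO (stack-like) behavior."""
    def __init__(self, fifo=True):
        self._items = []
        self._fifo = fifo
    def put(self, x):
        self._items.append(x)
    def remove(self):
        return self._items.pop(0 if self._fifo else -1)
    def item(self):
        return self._items[0 if self._fifo else -1]
```

The trade-off is visible in the code: Stack and Queue follow their respective models (MP) at the cost of uniformity across containers, while the Dispenser is uniform across containers (UP within the API) at the cost of model-faithful, unsurprising names.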
This wiki has a certain structure which is uniform across all principles. Each principle description has the same sections with the same kind of information. This makes looking up principles much easier because one can directly jump to those sections containing the needed information. To mitigate the problem of unnecessary documentation (i.e. MIMC violations) sections without additional information are left blank instead of describing something obvious.
|
0.999989 |
Those who participate in a covenant group are more likely to create a culture of involvement within their congregations. What's that mean? It means pastors were more likely to involve their people in leadership and ministry. There was more participation by laypeople in each of these areas: 1) New member's classes 2) Communion 3) Worship leadership 4) Church ministries and 5) Rotation through leadership roles.
Pastors involved in a covenant group have churches with an organized presence and involvement of youth. This included the greater likelihood of a youth minister on staff. Additionally there were higher incidences of the following: 1) A youth program including conferences and camps 2) Congregational events planned and led by youth 3) Youth serving on congregational committees and boards.
There was more intentional involvement in the community including a vision of the congregation as a community change agent. Pastors engaged in a group led their churches with a strong emphasis on community service. There was an expectation within the church that the pastor would be out in the community representing the congregation.
Furthermore, pastors who were involved in a group enjoyed more congregational support for continuing education. Their churches committed more dollars to finance the ongoing retooling of the pastor. There were congregational expectations and requirements for the pastor to do continuing education.
These factors alone are enough to build a solid case for life-long engagement in a clergy learning group. But, there's even more reason for pastors to be a part of a LLC. The study also investigated whether there was any correlation between congregational growth and peer group involvement. The resounding answer was "yes." Participation in a group correlated with congregational growth.
There were two caveats, however. First, longevity in group involvement was a factor. The longer the pastoral leader participated in a group the more likely it was that his or her congregation would experience growth. The most productive years seemed to be in the fourth year of participation and beyond.
Second caveat, the peer group had to have structure, usually marked by a trained facilitator and an established curriculum.
The researchers discovered a strong relationship between congregational health as marked by growth and peer group involvement. Much stronger than they anticipated. This factor was as important as other, more obvious predictors of church growth. For the record, the other predictors of church growth, according to their research, included a youthful congregation, broad hands-on participation in ministry by the laity, little or no congregational conflict, spiritual vibrancy and clear mission.
The researchers noted a consistent thread in their findings: Pastors involved in structured peer groups tend to be missional leaders and are personally involved in their communities. Growth is one predictable result.
A group that is cohesive, "like a family"
A group whose practices focus on ministry improvement through exploring innovative ideas & resources as well as sharing/getting feedback about personal and ministry problems.
They summarized these characteristics with this comment: "peer groups that renew their members' ministries provide a stimulating mix of the practical, the intellectual, and the spiritual along with a certain amount of 'holding each others feet to the fire' in terms of accountability."
The results of this study impressed upon me that ABCNW is on the right track in providing Leadership Learning Communities for our pastors. LLCs are a great resource and they are working for us. I encourage every pastor to be an active participant in one. The next step for us in this journey with leadership clusters is the extension of their benefits to lay leaders. Concepts for doing this are being considered. So, stay tuned.
Thanks to Joe Kutter with ABC Ministers Council for directing me to this information. If you are interested in digging deeper into these findings go to Austin Presbyterian Seminary College of Pastoral Leaders.
|
0.967644 |
How to make wooden toy wheels using a circle cutter.
Most of the time it is a lot easier to buy ready made wooden wheels, but there may be a reason that you will want to make your own.
Who would have thought making a simple wooden disc would be such a challenge? As with most things in life, it is easy to do, but not so easy to do well.
It's best to use a drill press and set the speed to no faster than 500 RPM.
My experience with this tool leads me to believe it works best with soft woods like pine.
I have tried it on other types of wood with mixed results. If this is the first time you are using this tool, I would suggest practicing on a piece of scrap.
A circle cutter is designed to cut holes, but we want to keep the leftover piece, that is the plug, to use as wheels. The cutters are shaped in such a way that they will leave a bevelled edge on the wheel.
What I did was to grind the bevel of the circle cutter on the opposite side so that the wheel came out nice and smooth.
It is important to remember safety first! I used a bench grinder while holding the cutter part of the tool firmly in a vise grip.
All I need to do now is to bolt the wheel into the drill and sand the edges smooth.
The photo shows a circle cutter in action. Note how the workpiece is clamped onto a piece of scrap.
Keep well clear of those spinning arms, and be sure to use a slow speed setting. Most circle cutters and wheel cutters recommend a maximum speed of 500 rpm.
With a nut and bolt to clamp the wheel in the drill, use a wood file to remove the burr and round off the edges.
Use a small sanding belt for a smoother finish.
For the final touch, plug the hole with a short length of dowel. Using a simple jig as shown clamped in place, drill a shallow hole with a 16mm spade bit.
A few things I have learned along the way.
To fit the axle to the wheel I prefer a snug fit. To relieve the pressure on the wheel, cut a groove about 6mm (1/4") deep in each end of the axle. Make sure to cut along the grain rather than across it. Use a junior hacksaw.
Fit the axle to the wheel, with the cut in the same direction as the grain. This tip serves two purposes, it minimises the tendency to split the wheel, and provides a tiny reservoir for the glue.
Make washers from plastic milk bottles using a hollow punch. This will help to prevent the wheel rubbing against the body of the toy.
Apply candle wax to the area of the axle that fits inside the body of the toy.
How to make large wooden toy wheels using this very simple but effective woodworking jig.
Making wooden toy wheels - from woodwork forums, how to make wagon type wheels.
Spoke wheel jig for model makers, also from woodwork forums.
How other toymakers make wooden wheels.
|
0.9194 |
Muhtar Cem Karaca (5 April 1945 – 8 February 2004) was a prominent Turkish rock musician and one of the most important figures in the Anatolian rock movement. He was a graduate of Robert College. He worked with various Turkish rock bands such as Apaşlar, Kardaşlar, Moğollar and Dervişan. With these bands, he brought a new understanding and interpretation to Turkish rock.
He was the only child of Mamos İbrahim Karaca, a theatre actor of Azerbaijani origin, and İrma Felekyan (Toto Karaca), a popular opera, theatre and movie actress of Armenian origin. His first group was called Dynamites and was a classic rock cover band. Later he joined Jaguars, an Elvis Presley cover band. In 1967, he started to write his own music, joining the band Apaşlar (The Rowdies), his first Turkish language group. The same year, he participated in the Golden Microphone (Turkish: Altın Mikrofon) contest, a popular music contest in which he won second place with his song Emrah. In 1969, Karaca and bass-player Serhan Karabay left Apaşlar and started an original Anatolian group called Kardaşlar (The Brothers).
In 1972, Karaca joined the group Moğollar (The Mongols) and wrote one of his best-known songs, "Namus Belası". However, Cahit Berkay, the leader of Moğollar, wanted an international reputation for his band, and he left for France to take the group to the next level. Karaca, who wanted to continue his Anatolian beat sound, left Moğollar and started his own band Dervişan (Dervishes) in 1974. Karaca and Dervişan sang poetic and progressive songs.
In the 1970s, Turkey was dealing with political violence between supporters of the left and the right, separatist movements and the rise of Islamism. As the country fell into chaos, the government suspected Cem Karaca of involvement in rebel organisations. He was accused of treason for being a separatist thinker and a Marxist-Leninist. The Turkish government tried to portray Karaca as a man who was unknowingly writing songs to start a revolution. One politician was quoted as saying, "Karaca is simply calling citizens to a bloody war against the state." Dervişan was ultimately dissolved at the end of 1977. In 1978, he founded Edirdahan, an acronym for "from Edirne to Ardahan"; the westernmost and the easternmost provinces of Turkey. He recorded one LP with Edirdahan.
In early 1979, he left for West Germany for business reasons, and there he also began singing in German. From autumn 1980 his first German-language performance was a setting of a Nazim Hikmet lyric, Kız Çocuğu (in English: "Little Girl"): Cem sang the German verses in alternation with his friend, manager, arranger and bandleader Ralf Mähnhöfer, who accompanied him solo on grand piano, or with the band Anatology, who sang the song in Turkish.
Turkey continued to spin out of control with military curfews and the 1980 Turkish coup d'état on September 12, 1980. General Kenan Evren took over the government and temporarily banned all the nation's political parties. After the coup, many intellectuals, including writers, artists and journalists, were arrested. A warrant was issued for the arrest of Karaca by the government of Turkey.
The state invited Karaca back several times, but Karaca, not knowing what would happen upon his return, decided not to come back.
While Karaca was in Germany his father died, but he could not return to attend the funeral. After some time, the Turkish government decided to strip Cem Karaca of his Turkish citizenship, keeping the arrest warrant active.
Several years later, in 1987, the prime minister and leader of the Turkish Motherland Party, Turgut Özal, issued an amnesty for Karaca. Shortly afterwards, he returned to Turkey. His return also brought a new album with it, Merhaba Gençler ve Her zaman Genç Kalanlar ("Hello, The Young and The Young at Heart"), one of his most influential works. His return home was received cheerfully by his fans, but during his absence Karaca had lost the young audience and acquired only few new listeners. He died on February 8, 2004 and was interred at Karacaahmet Cemetery in the Üsküdar district of Istanbul.
|
0.99998 |
Recommend greenMulti Comfort from Green to your friends.
The email addresses will not be used for other purposes.
Hello,
While surfing I found the following broadband product that might interest you: greenMulti Comfort from Green.
Some details about the product:
Download speed: 15000 Kbit/s
Upload speed: 1500 Kbit/s
Setup fee: None
Monthly price: CHF 69.00
You can find all the details here: http://www.tempobox.ch/details.php?lang=en&p=655&c=10064
Wish you happy and fast surfing!
Service offered by http://www.tempobox.ch No responsibility for the accuracy of this information.
|
0.960595 |
Rana Mitter, Professor of the History and Politics of Modern China, has been awarded a Leverhulme Major Research Fellowship for 2019–22.
Congratulations to Susan Divald who has received the Association for the Study of Nationalities award for best doctoral paper on Central Europe.
Is Myanmar's 'Buddhist Nationalist' movement a force of reform?
Dr Matthew Walton argues that the 'Buddhist Nationalist' movement, formerly known as the Ma Ba Tha, can be seen as a vehicle for challenging Myanmar's formal religious hierarchy.
Sudhir Hazareesingh has been quoted in a book review in the New York Times (3 February) that discusses the popularity in France of books about the decline of the West in general, and France in particular.
Congratulations to Dr Matthew Walton, who has received an award from the Economic and Social Research Council (ESRC) for a project entitled ‘Understanding Buddhist Nationalism in Myanmar: Religion, Identity, and Conflict in a Political Transition’.
Alexander Betts gives TED talk on "Why Brexit happened - and what to do next"
Alexander Betts has given a TED talk on the UK's Brexit vote, why it happened, and what to do next.
Nic Cheeseman has written an article for Kenyan newspaper the Daily Nation (9 July) that reflects on the rise in racist attacks around the 'Brexit' referendum and considers the impact of the UK's colonial legacy on contemporary developments, as well as the changing contours of race and racism.
Sudhir Hazareesingh has been interviewed by French magazine Télérama (30 June) about what the possiblity of Britain leaving the EU means to him, personally and professionally.
Margaret MacMillan was interviewed for the 'World Update' programme on the BBC World Service (28 June) in an episode entitled 'Brexit: European Reaction', in which she was specifically asked about reports of an increase in racist incidents in the wake of the EU referendum result.
|
0.966168 |
This is a longer version of an article that appeared in Sunday Times, May 6, 2018.
HOW much alcohol is in ‘alcohol free’ beer? You might think that, even after a few pints, this would be an easy one to answer. But thanks to complicated labelling laws and a disagreement between the UK and Europe over definitions of ‘non-alcoholic’ an increasing number of beers on the market are labelled ‘alcohol free’ when in fact they contain 0.5% ABV (alcohol by volume).
An ill-tempered row between different beer makers and charities has frothed over, ahead of a deadline this week for submissions to a Government consultation, which hopes to make alcohol labelling clearer.
The need for clarification has been prompted by a huge surge in non-alcoholic and low-alcohol drinks. Last month, Department of Health documents showed that in just 12 months to July 2017 sales of this category rose by 20.5 per cent, as Millennials increasingly embrace so-called ‘mindful drinking’ by cutting back on booze.
A large number of small, independent brewers catering to this market have set up over the last year, such as Infinite Session, Nirvana and Fitbeer. They make beers that contain 0.5% or 0.3% ABV – compared with traditional beer with an ABV of between 4% and 5%. Though they clearly state their ABV level, they also describe themselves as “alcohol free” on their labels. They claim at this level the alcohol is negligible and drinkers would find it impossible to become inebriated. Also, this is labelling practice followed in Germany, Europe’s biggest low-alcohol beer market.
However, others say to call a 0.5% beer ‘alcohol free’ is misleading or even dangerous.
His rivals say he is driven by commerce — St Peter’s launched a range of 0% beer in 2016.
The current labelling laws state that any beer between 0.5% and 1.2% should be labelled 'low alcohol'. Below 0.05% it can be called 'alcohol-free'. The grey area is between 0.5% and 0.05%. The current laws say beer at this level should be called 'de-alcoholised', but this refers to a method of making beer or wine that involves stripping out the alcohol through a filtration process. The new generation of 0.5% beers mostly use brewing techniques that stop the fermentation process very early on; they have not been 'de-alcoholised'.
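The labelling bands described above can be sketched as a small classifier. The band names and thresholds come from the article; the function name and the exact handling of the boundary values are illustrative assumptions.

```python
def uk_beer_label(abv_percent):
    """Return the UK labelling band for a given ABV, per the bands above."""
    if abv_percent >= 1.2:
        return "standard labelling"   # outside the low-alcohol regime
    if abv_percent >= 0.5:
        return "low alcohol"
    if abv_percent > 0.05:
        return "de-alcoholised"       # the contested grey area
    return "alcohol-free"

print(uk_beer_label(0.5))    # → low alcohol
print(uk_beer_label(0.3))    # → de-alcoholised
print(uk_beer_label(0.04))   # → alcohol-free
```

Under these bands, a 0.5% beer would fall under 'low alcohol' rather than 'alcohol free', which is precisely the labelling dispute the consultation is meant to settle.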
Dawid (David) Wdowiński (1895–1970) was a psychiatrist and doctor of neurology in the Second Polish Republic. After the 1939 invasion of Poland by Nazi Germany, he became a political leader of the Jewish resistance organization called Żydowski Związek Wojskowy (Jewish Military Union, ŻZW) active before and during the Warsaw Ghetto uprising.
Dawid Wdowiński was born in 1895 in Będzin. He studied at universities in Vienna, Brno and Warsaw, and became a psychiatrist. Wdowiński was a member of the right-wing organization Hatzohar, which was founded in Paris in 1925.
Before World War II, Wdowiński gave up psychiatry through the influence of Ze'ev Jabotinsky, who urged him to devote himself fully to the cause of Revisionist Zionism. Wdowiński became a chairman of the Revisionist Zionist party called Polska Partia Syjonistyczna.
In the summer of 1942, during the occupation of Poland, Wdowiński founded, along with many Jews from the Polish Army and Polish Jewish political leaders, the clandestine Jewish Military Union (ŻZW) in the Warsaw Ghetto. Some of the members of this group included Dawid Apfelbaum, Józef Celmajster, Henryk Lifszyc, Kałmen Mendelson, Paweł Frenkiel and Leon Rodl. Wdowiński was never a military commander, serving instead as political head of the ŻZW.
After the Warsaw Ghetto Uprising, Wdowiński was sent to various Nazi concentration camps, which he survived.
After the war, Wdowiński settled in the United States. Meanwhile, eyewitness accounts of the Warsaw Ghetto uprising were filtered through testimonies of former members of the left-leaning ŻOB. These accounts (also adopted by the postwar Polish Communist state) diminished both the roles and the importance of the ŻZW and Wdowiński. One such writer, Israel Gutman, was an activist in Hashomer Hatzair. Gutman's perspective continued in authoritative citations of Barbara Engelking and the Polish Center for Holocaust Research, who described Wdowiński as a senior activist in the Polish branch of Jabotinsky's New Zionist Organization; i.e. the "revisionist leader in the ghetto [who, in his memoir] attributes himself in command of the fighting organisation of this political movement." Another ŻOB fighter (Icchak Cukierman) wrote, "The Revisionists had seceded from the World Zionist Organization; and before the war, all socialist movements, including the Zionists, saw them as the Jewish embodiment of Fascism." Wdowiński candidly noted the pro-Soviet political orientation of the leftist Jews following the Soviet invasion of Poland: "The second, the confused political orientation, was largely because many Jewish leaders were reared in the spirit of the Russian Revolution, and they thought they could translate the ideas of the class struggle into Zionist terms."
In 1961, Wdowiński served as a witness at the trial of Adolf Eichmann in Jerusalem.
The situation with the historical record led Wdowiński, in 1963, to publish his own memoir, And We Are Not Saved, in which he writes about his involvement with the ŻZW and the Warsaw Ghetto uprising. Wdowiński was fiercely opposed to Jewish collaboration with Germany inside the ghettos, or any post-war reconciliation with them. This theme pervades his memoirs as well as his correspondence.
Dawid Wdowiński died in 1970 after suffering a heart attack at a commemoration ceremony for the Warsaw Ghetto Uprising. He was buried in the Mount of Olives Jewish Cemetery.
^ Israel Gutman Walka bez cienia nadziei (Struggle Without a Ray of Hope), 166, 224.
^ Warsaw Ghetto: Details of Chosen Records. Warszawa.Getto.pl.
^ Yitzhak Zuckerman (1993). A Surplus of Memory: Chronicle of the Warsaw Ghetto Uprising. Los Angeles: University of California Press. ISBN 0520078411, pp. 226–27, n.
^ Wdowiński 1963, p. 5.
^ David Wdowiński (1963). And We Are Not Saved. New York: Philosophical Library. ISBN 0802224865. Note: Chariton and Lazar were never co-authors of Wdowiński's memoir. Wdowiński is considered the single author.
Fade in data bound items?
I've spent a lot of time trying to get items that are added to a ListView or "source" bound element (i.e. <div data-bind="source: myData"></div>) to fade in using FX Fade (http://docs.kendoui.com/api/framework/fx/fade) instead of just appearing abruptly. Unfortunately, I have been unsuccessful. Can you explain how to do it in *both* cases (i.e. when using a ListView and when using source binding)?
kendo.fx($('div[data-uid="' + e.items[i].uid + '"]')).fade('in').play(); // THIS DOES NOT WORK.
myData.push('Test data item'); // SHOWS UP IN THE DIV, BUT DOES NOT FADE IN.
I am afraid that what you would like to achieve is not supported out of the box because when the ListView DataSource changes all the items will be re-rendered.
One possible workaround for this case is to listen for the change event of the DataSource (or Observable Array) and, if an item was added, fade in only the last item.
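A framework-free sketch of that workaround pattern follows; the object shape and helper names are illustrative, and the `faded` flag stands in for calling `kendo.fx(element).fade('in').play()` on a real element.

```javascript
// Sketch of the workaround: when the data source changes, every item is
// re-rendered (as the ListView does), then only the newly added item is
// given the fade effect.
function createListView(render) {
  const data = [];
  return {
    add(item) {
      data.push(item);
      const elements = data.map(render); // full re-render, like the ListView
      // "change" handler: animate only the last element
      elements[elements.length - 1].faded = true;
      return elements;
    },
  };
}

const view = createListView(value => ({ value, faded: false }));
view.add('first');
const elements = view.add('second');
console.log(elements.map(e => e.faded)); // → [ false, true ]
```

Note how the re-render resets every item's state, which is why the fade must be re-applied in the change handler rather than at render time.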
Your method seems to work in Kendo UI Web, so thanks for that. I'm going to mark this as answered.
However, my project runs on both Web and Mobile (via Icenium/Kendo UI Mobile). Just FYI, your method does not work with Kendo UI Mobile because of an apparent bug with KendoMobileListView. For the KendoMobileListView, the DataBound event does not fire. I am unable to report this bug on the Kendo UI Mobile Premium Forums because I do not have a license for Kendo UI Mobile (since I use Icenium, there was no reason for me to purchase a separate Kendo UI Mobile license. It might make sense if you guys give Icenium subscribers access to the Kendo UI Mobile Premium Forums). I did, however, report this to Icenium, though it really seems to be a Kendo UI Mobile bug. The linked bug report also mentions how you can view a working example of this bug in Icenium.
Though he was described by architectural historians as "humorless," Walter Gropius "was in fact a charismatic figure," according to The Guardian's Fiona MacCarthy. His life and career are shrouded in myths of solemnity and passionlessness, though the fact remains that he imparted a significant and long-lasting passion towards founding the Bauhaus as well as his own career as an architect.
Gropius was poetic in his writing (as MacCarthy highlights a passage from his Bauhaus Manifesto: “Together let us desire, conceive, and create the new structure of the future, which will one day rise toward heaven from the hands of a million workers like the crystal symbol of a new faith”), expressionist in his early work, and an advocate of healthy debate in his fabled school of art and design. And while Tom Wolfe famously criticized the architect as a champion of the modern high rise, MacCarthy points out that Gropius was not only one of many architects that believed in the potential of building vertically, but that he had also openly espoused humanist design in daily practice.
Time has a habit of mythologizing the misunderstood; the legacy of Walter Gropius is worth a reassessment, not least for the fact that the design principles of the school he founded have so positively affected our own today.
A funny memory of WG told by Ise, his wife to us when few of us visited her in 1980.
"the design principles of the school he founded have so positively affected our own today."
"The standardization of structural elements will have the wholesome result of lending a common character to new residential buildings and neighborhoods. Monotony is not to be feared as long as the basic demand is met that only structural elements are standardised while the contours of the building so built will vary. Well-manufactured materials and clear, simple design of these mass-produced elements will guarantee the unified beauty of the resulting buildings."
From a modern, scientific point of view, did the results of Bauhaus design principles produce the unified beauty he claimed?
Tell me this: What’s the most significant reason why so many musicians don’t succeed with a career in music? Is it that they have poor musical skills? No musical connections? Not enough money? The answer: none of the above. Sure, there are countless things that prevent musicians from becoming successful in this industry, but the biggest, most fundamental factor is... FEAR.
Most ruin their own music careers by letting their fears take over every aspect of what they do (or neglect doing). Some fears occur on a conscious level, while other fears are below the surface and are only observable to those looking for them. Unfortunately, whether you are aware of them or not, your fears can be very devastating to your music career. As one who trains musicians on how to build a successful career in music, I’ve seen this endless times.
However, these ideas are totally false (I talk about this a lot in my other articles about breaking into music). Fact is, it’s really not so hard to make a VERY good living in the music industry if you know what to do to earn great money as a musician (and actually DO it). With this in mind, it’s exactly because the above false beliefs about the music industry are so widespread that they cause many musicians to fear not being able to make money. They then do things that lead to the exact OPPOSITE of what is needed to earn a good living.
-You never really try to make good money in your music career. The worst thing possible is to expect to struggle making money in the music industry. When you do this, you live into your worst expectations.
-You go in the exact opposite direction of your music career goals. By expecting failure in terms of making good money, many musicians start thinking they’ll be better off going to college to get a degree in a non-musical field, working at a “secure” job and THEN going after their music career dreams in their spare time. Find out why this is a bad approach by reading this article about making a music career backup plan.
-You eat the goose that lays golden eggs. Note: What is written below could seem like “self-promotion”, since I mention how I mentor musicians as an illustration of a critical point. Of course, there is a very important lesson for you to learn here, and my words are true regardless of whether I am selling something or not. The lesson for you here illustrates how merely being AFRAID of becoming broke causes you to forever remain broke as a musician, until you make a significant change.
I sometimes hear from the musicians who have reservations about joining my music career success training program or going to my music business money making event (where I help musicians learn how to make tons of money in music). Even when I show them the proof of how all my programs have completely transformed the lives of musicians I’ve coached, they are STILL uncertain and full of fear. This doubt is rooted in the exact same mindset I discussed above - that it’s pointless to even “attempt” to become financially well off, since all musicians struggle to make ends meet. Ironically, by passing over training (that is PROVEN to work) for the sake of saving money, you guarantee that you will never make a lot of money with music. This is referred to as “eating the goose that lays golden eggs” because you decide to eat the goose now rather than wait for golden eggs to appear later. Rather than learning how to earn money in your music career and building toward the future, you give in to your fear... guaranteeing that you will never make progress to move your career to a higher level.
1. Know that the belief that all musicians struggle to make money isn’t true and it certainly does not have to be your reality. This understanding will motivate you to move closer to the things you WANT in your career, instead of closer to the outcomes you “fear”.
2. Instead of stressing about how “not to struggle with making money”, dedicate yourself to learning how to make tons of money as a musician. There is a clear (and rudimentary) difference between these 2 mindsets and the ends that each one leads to are complete opposites.
1. Be aware that all the negative things you say to yourself about why you can’t have a successful music career are all (false) excuses. You have a GIGANTIC amount of potential for success as a musician, regardless of your current age, what your musical background is like or the location of the town you live in. Read my other instructional music career articles to discover more about building a successful career in music.
2. Use the same mindset as successful musicians. As I explained already, there is a very distinct difference between “anticipating success” (in your music career) vs. “anticipating failure”. Successful musicians expect to succeed and do not focus on the possibility that they might fail - they focus on “achieving success” - and you should do the same.
3. Stack the deck of cards in your favor. You will greatly improve your odds for success in your music career (and lose your fear of failure), by navigating the music industry with your eyes open. Speed up your progress by getting training from a music career trainer who has already helped tons of musicians grow successful music careers.
1. The thoughts that dominate your mind, become your reality. If you adamantly believe you can’t become a successful musician (because of any of the things mentioned above), you will rationalize this and use it as a way to NOT move forward in your music career. After doing this, you are GUARANTEED to not succeed in the music business. The opposite of this is true as well: once you strongly believe that you are destined to achieve success as a pro musician, you will naturally do whatever it takes to reach your music career goals. Obviously, the latter mindset will have a MASSIVELY higher rate of success (both in the music industry and in life).
2. If you don’t even attempt to grow a successful music career - you have failed. Even worse than this guarantee of 100% failure, is you are going to regret not taking action to do what you dreamed of with music when you look back at all the opportunities you missed.
The music industry is filled with tales of woe from (failed) musicians who say that someone in the music industry has forced them into an unfair contract, refused to pay them the money the deserved or “screwed” them in some other way – causing their music career to go south. Tales like this make many musicians scared of getting involved with any business deals in the music industry and sometimes prevents them from even attempting to pursue a music career.
...and tons of other factors (test yourself to learn what the music business is looking for in you).
*Learn PRECISELY what the music business is looking for in you (this goes way beyond your musical skills).
*Get the pieces of value you are lacking to transform yourself into the #1 choice when music companies are looking for someone to invest in/work with.
After you have done these things, music companies will knock at YOUR door to give you the chance to do things most musicians only dream of.
To understand exactly what you must do to achieve your goals in the music business, work with a music industry success mentor.
The anti-unification problem of two terms t<sub>1</sub> and t<sub>2</sub> is concerned with finding a term t which generalizes both t<sub>1</sub> and t<sub>2</sub>. That is, the input terms should be substitution instances of the generalization term. Interesting generalizations are the least general ones. The purpose of anti-unification algorithms is to compute such least general generalizations.
Research on anti-unification has been initiated more than four decades ago, with the pioneering works by Gordon~D.~Plotkin and John~C.~Reynolds. Since then, a number of algorithms and their modifications have been developed, addressing the problem in first-order or higher-order languages, for syntactic or equational theories, over ranked or unranked alphabets, with or without sorts/types, etc. Anti-unification has found applications in machine learning, inductive logic programming, case-based reasoning, analogy making, symbolic mathematical computing, software maintenance, program analysis, synthesis, transformation, and verification. Some of these algorithms and applications will be reviewed in the talk. We will also consider recent developments in unranked and higher-order generalization computation.
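As a concrete illustration of the problem statement above, here is a minimal sketch of first-order syntactic anti-unification in the spirit of Plotkin's least general generalization. The term representation (tuples for applications, strings for constants) and the variable-naming scheme are assumptions of this sketch, not part of any particular published algorithm's interface.

```python
# Least general generalization (lgg) of two first-order terms.
# Terms: ('f', arg1, ..., argn) for applications, plain strings for constants.
import itertools

def lgg(t1, t2, table=None, counter=None):
    if table is None:
        table, counter = {}, itertools.count()
    # Applications with the same head and arity generalize componentwise.
    if isinstance(t1, tuple) and isinstance(t2, tuple) \
            and t1[0] == t2[0] and len(t1) == len(t2):
        return (t1[0],) + tuple(lgg(a, b, table, counter)
                                for a, b in zip(t1[1:], t2[1:]))
    if t1 == t2:
        return t1
    # Disagreeing subterms map to a fresh variable; reusing the same
    # variable for the same pair of subterms keeps the result *least* general.
    key = (t1, t2)
    if key not in table:
        table[key] = f"X{next(counter)}"
    return table[key]

# f(a, a) and f(b, b) generalize to f(X0, X0), not f(X0, X1):
print(lgg(('f', 'a', 'a'), ('f', 'b', 'b')))  # → ('f', 'X0', 'X0')
```

The shared `table` is the essential ingredient: without it the result would still be a generalization, but not a least general one.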
What is Mixed Martial Arts or MMA?
Mixed martial arts (MMA), is a full contact combat sport that allows the use of both striking and grappling techniques, both standing and on the ground, from a variety of other combat sports. The roots of modern mixed martial arts can be traced back to the ancient Olympics where one of the earliest documented systems of codified full range unarmed combat was in the sport of pankration. Various mixed style contests took place throughout Europe, Japan and the Pacific Rim during the early 1900s. The combat sport of vale tudo that had developed in Brazil from the 1920s was brought to the United States by the Gracie family in 1993 with the founding of the Ultimate Fighting Championship (UFC), which is the largest MMA promotion company worldwide.
The more dangerous vale-tudo-style bouts of the early UFCs were made safer with the implementation of additional rules, leading to the popular regulated form of MMA seen today. Originally promoted as a competition with the intention of finding the most effective martial arts for real unarmed combat situations, competitors were pitted against one another with minimal rules. Later, fighters employed multiple martial arts into their style while promoters adopted additional rules aimed at increasing safety for competitors and to promote mainstream acceptance of the sport. The name mixed martial arts was coined by Rick Blume, president and CEO of Battlecade, in 1995. Following these changes, the sport has seen increased popularity with a pay per view business that surpasses boxing and professional wrestling.
Stand-up: Various forms of Boxing, Kickboxing, Muay Thai, Taekwondo, and Karate are trained to improve footwork, elbowing, kicking, kneeing and punching.
Clinch: Freestyle, Greco-Roman wrestling, Sambo and Judo are trained to improve clinching, takedowns and throws, while Muay Thai is trained to improve the striking aspect of the clinch.
Ground: Brazilian Jiu-Jitsu, Submission Wrestling, shoot wrestling, catch wrestling, Judo and Sambo are trained to improve ground control and position, as well as to achieve submission holds, and defend against them.
Some styles have been adapted from their traditional form, such as boxing stances which lack effective counters to leg kicks and the muay thai stance which is poor for defending against takedowns due to the static nature, or Judo and Brazilian Jiu-Jitsu, techniques which must be adapted for No Gi competition. It is common for a fighter to train with multiple coaches of different styles or an organized fight team to improve various aspects of their game at once. Cardiovascular conditioning, speed drills, strength training and flexibility are also important aspects of a fighter’s training. Some schools advertise their styles as simply “mixed martial arts”, which has become a style in itself, but the training will still often be split into different sections.
While mixed martial arts was initially practised almost exclusively by competitive fighters, this is no longer the case. As the sport has become more mainstream and more widely taught, it has become accessible to a wider range of practitioners of all ages. Proponents of this sort of training argue that it is safe for anyone, of any age, with varying levels of competitiveness and fitness.
Brazilian Jiu-Jitsu came to international prominence in the martial arts community in the early 1990s, when Brazilian Jiu-Jitsu expert Royce Gracie won the first, second and fourth Ultimate Fighting Championships, which at the time were single elimination martial arts tournaments. Royce fought against often much-larger opponents who were practicing other styles, including boxing,Wrestling, Amateur Wrestling (including Freestyle, Greco-Roman, and American Folkstyle), shoot-fighting, karate, judo and tae kwon do. It has since become a staple art for many MMA fighters and is largely credited for bringing widespread attention to the importance of ground fighting. Sport BJJ tournaments continue to grow in popularity worldwide and have given rise to no-gi submission grappling tournaments, such as the ADCC Submission Wrestling World Championship. It is primarily considered a ground-based fighting style, with emphasis on positioning, chokes and joint locks.
Amateur Wrestling (including Freestyle, Greco-Roman, and American Folkstyle) gained tremendous respect due to its effectiveness in mixed martial arts competitions. Wrestling is widely studied by mixed martial artists. Wrestling is also credited for conferring an emphasis on conditioning for explosive movement and stamina, both of which are critical in competitive mixed martial arts. It is known for excellent takedowns, particularly against the legs. Notable fighters include Chael Sonnen, Randy Couture, Dan Henderson, Jon Fitch, Cain Velasquez and Brock Lesnar.
Karl Gotch was a catch wrestler and a student of Billy Riley‘s Snake Pit in Whelley, Wigan. In the film Catch: the hold not taken, some of those who trained with Gotch in Wigan talk of his fascination with the traditional Lancashire style of wrestling and how he was inspired to stay and train at Billy Riley’s after experiencing its effects first hand during a professional show in Manchester, England. After leaving Wigan, he later went on to teach catch wrestling to Japanese professional wrestlers in the 1970s, to students including Antonio Inoki, Tatsumi Fujinami, Hiro Matsuda, Osamu Kido, Satoru Sayama (Tiger Mask) and Yoshiaki Fujiwara. Starting from 1976, one of these professional wrestlers, Inoki, hosted a series of mixed martial arts bouts against the champions of other disciplines. This resulted in unprecedented popularity of the clash-of-styles bouts in Japan. His matches showcased catch wrestling moves like the sleeper hold, cross arm breaker, seated armbar, Indian deathlock and keylock.
Karl Gotch’s students formed the original Universal Wrestling Federation (Japan) in 1984 which gave rise to shoot-style matches. The UWF movement was led by catch wrestlers and gave rise to the mixed martial arts boom in Japan. Wigan stand-out Billy Robinson soon thereafter began training MMA legend Kazushi Sakuraba. Catch wrestling forms the base of Japan’s martial art of shoot wrestling. Japanese professional wrestling and a majority of the Japanese fighters from Pancrase, Shooto and the now defunct RINGS bear links to catch wrestling.
The term no holds barred was used originally to describe the wrestling method prevalent in catch wrestling tournaments during the late 19th century wherein no wrestling holds were banned from the competition, regardless of how dangerous they might be. The term was applied to mixed martial arts matches, especially at the advent of the Ultimate Fighting Championship.
Using their knowledge of ne-waza/ground grappling and tachi-waza/standing-grappling, several Judo practitioners have also competed in mixed martial arts matches. Anderson Silva, who is the top ranked fighter in the world maintains a black belt in judo, former Russian national Judo championship Bronze medallist Fedor Emelianenko, famous UFC fighter Karo Parisyan, Olympic medallists Hidehiko Yoshida (Gold, 1992), rising contender Dong Hyun Kim is a 4th degree judo black belt, and Ronda Rousey (Bronze, 2008) now Strikeforce Women’s Bantamweight Champion.
Paulo Filho, a former WEC middleweight champion has even credited judo for his success during an interview.
Karate has proved to be effective in the sport as it is one of the core foundations of kickboxing, and specializes in striking techniques. Various styles of karate are practiced by some MMA fighters, notably Chuck Liddell, Lyoto Machida, Stephen Thompson, John Makdessi, Ryan Jimmo and Georges St-Pierre. Liddell is known to have an extensive striking background in Kenpō and Koei-Kan whereas Lyoto Machida practices Shotokan Ryu, and St-Pierre practices Kyokushin.
Muay Thai, like boxing and various forms of kickboxing, is recognised as a foundation for striking in mixed martial arts, and is very widely trained among MMA fighters. Countless mixed martial artists have trained in Muay Thai, and it is often taught at MMA gyms as is BJJ and Wrestling.
Muay Thai is the style which is used predominantly for the stand-up game in MMA. It originated in Thailand, and is known as the “art of eight limbs” which refers to the use of the legs, knees, elbows and fists. It is a very aggressive and straight forward style.
A very popular Korean martial art, Taekwondo has had mixed success. While many practitioners have a background or have trained in Taekwondo in the past, due to the sparring rules Taekwondo is traditionally sparred in, it often requires cross-training with kickboxing for full contact strikes. Nonetheless, the excessive kicking is recognized as a good way to keep the opponent at a distance, score points and even effectively knock someone out. Fighters such as Anderson Silva, Cung Le, Benson Henderson and Anthony Pettis are notable fighters who have successfully used Taekwondo techniques in mixed martial arts competition.
Text on this page taken from Wikipedia under the Creative Commons License.
Context: Blogged this the day that I took part in The Amazing Race (hence the gear) – we came 3rd out of 20 teams.
On arrival in our current country of residence, Papua New Guinea, we endured 14 weeks of cross-cultural orientation. Some of it was horrible. Most of it was fun. All of it was challenging. One of the challenging things turned out to be a video series we watched. It followed the life of one man – Joe Leahy (surname pronounced Lay). Joe was spawned by the exploits of the Leahy brothers, the first white men to ever set foot in the New Guinea highlands. They did so in the early 1930s and where they thought they’d find nothing but gold, they found a million people. Black Harvest was one in this series of films that we watched.
Joe Leahy is something of an anomaly. A man caught between two cultures. Sired by a white man who abandoned him to his local mother, he was always different and, though sheer hard work and a couple of opportunities that came his way through his white connections, he became a wealthy highlands coffee farmer. He then entered into partnership (of sorts) with local landowners firstly to work his own plantation and then to work in partnership on a jointly owned venture.
The documentary film Black Harvest tells the story of Joe’s joint venture with the Ganiga tribe, of walking cultural tightropes, of the eventual decline of the local status quo into anarchy and a 9 year tribal war, and the destruction of millions of dollars worth of coffee beans as they turned black on the trees. The video is fascinating, full of drama and portrays great insights into Papua New Guinean culture.
A few weeks ago, a friend of mine asked me if I’d read the book about the making of the film. I had no idea there even was one. But it seemed that after the wife of the couple who filmed it had died, her husband went through her diaries and discovered that she’d captured virtually everything they had encountered while filming it. As a form of grief, he decided to write their account.
Bob Connolly writes well. Not as well as he films, but well nonetheless. There are many gripping passages in the book as the couple find themselves caught up in events that they hadn’t even imagined might take place. As the Ganiga get involved in tribal warfare, they find themselves faced with friends on each side becoming enemies of each other. Some of the stories are tragically heart-rending. Many die.
Having seen the film, it was wonderful to get this behind the scenes detail, to understand what it took to make the film, how the film materialised out of nowhere, completely against their expectations. And it was also great to get more of an insight into the character of Joe Leahy and what it was like to work with him.
The documentary series really is remarkable and I wish it was more widely available to people the world over. If you’re in Australia, you might be able to get hold of it. But wherever you are you’ll be able to find Making Black Harvest. If you want to get a glimpse into the world we now live in, a world that very few people know about, I’d recommend getting a copy of it.
Black Harvest became a viable proposition when I flew to London in early 1989 to meet an overworked man in a cramped office on the fifth floor of 60 Charlotte Street, W1.
What is the peak voltage at the tips of a dipole antenna?
What is the peak voltage present at the very end tips of a half-wave dipole antenna in free space, and how might this peak voltage relate to transmitter type, transmitter power, RF frequency vs. antenna half-wave frequency mismatch, feed line, SWR, wire diameter, etc.?
It's really hard to say, because it depends on so many things. Analyzing the antenna in free space simplifies some things, but we'd still have to consider the exact geometry of the antenna (How thick are the wires? Are they bent at all?) and the material from which they are made (any resistance will decrease the Q factor of the antenna, reducing the peak voltage).
However, if you look to analysis of end-fed dipoles, you can find that it's been determined empirically and by modeling that the impedance at the end of a real half-wave dipole is somewhere between 1800 to 5000 ohms.
This is the power delivered to the antenna. Transmitter type and the match to the antenna (SWR) are relevant only to the extent that they change the power delivered to the antenna.
This is all assuming that the antenna is operated at resonance. As the frequency moves away from resonance, the peak voltage decreases. The reason why is simple: this high voltage is attainable because each cycle reinforces the previous. If you take a limiting case where the antenna is operated at DC, the voltage at the ends is equal to the voltage at the feedpoint, because there is no resonance to reinforce the voltage at the ends.
I'm going to approach this a little differently starting from roughly the same place. Here I am going to use a resonant $\lambda$/2 20m dipole driven by 100 W as the model.
EDIT: Most of the above equations come from the section on Circuital Design in the reference listed above. The book is more math heavy than typical amateur radio references, but not as bad as some of the more modern engineering texts. It's slow going, but a worthwhile read.
For those of you with EZNEC or some other antenna modeling software, there is a way to answer this question using EZNEC. Model a 1/4WL stub in free space taking care not to violate any geometry checks. Put a 100w source at one end of the stub and a 10 megohm load at the other end of the stub. Adjust the stub length and user defined wire loss to obtain the resistive feedpoint impedance of a dipole looking into the stub. Then display the load data. It will tell you the voltage at the end of a dipole. I set my 4 MHz stub for 70 ohms and got 879 volts across the 10 megohm load resistor. The additional user defined wire loss is equivalent to the power lost from the dipole through radiation.
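Putting numbers to the empirical end-impedance range quoted above: a short sketch that estimates tip voltage from transmitter power, under the simplifying assumption that the tip impedance is purely resistive at resonance (so P = V_rms²/R applies).

```python
import math

def tip_voltage(power_w, end_impedance_ohms):
    """RMS and peak voltage at a dipole tip, treating the end
    impedance as purely resistive at resonance (P = V_rms**2 / R)."""
    v_rms = math.sqrt(power_w * end_impedance_ohms)
    return v_rms, v_rms * math.sqrt(2)

# 100 W into the empirical 1800-5000 ohm end-impedance range
for r in (1800, 5000):
    v_rms, v_peak = tip_voltage(100, r)
    print(f"R = {r} ohms: {v_rms:.0f} V rms, {v_peak:.0f} V peak")
```

For 100 W this gives roughly 420-710 V rms (600-1000 V peak) across the quoted impedance range, which is consistent with the 879 V figure from the EZNEC stub model above.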
Not the answer you're looking for? Browse other questions tagged antenna antenna-theory wire-antenna or ask your own question.
Can the tip of an antenna burn insulation/drywall?
Why do folded dipoles have greater bandwidth than ordinary resonant dipoles?
What sort of radiation efficiency can one expect from a folded dipole?
|
0.999443 |
Escape with a beach getaway!
Fall beach getaways are the perfect way to spice up the dreary days of autumn. What's more, with kids back in school, fall is one of the best times to escape to the sun and surf without having to deal with massive crowds and high price tags.
Who says summer is the best time to hit the beach? Fall beach vacations present the ideal opportunity to experience a change of scenery as the temperatures dip. A beach getaway can help you relax and rejuvenate in a serene setting. This is especially true in autumn, which is considered the "off-season" at many popular beach resorts. The cool weather brings with it minimal crowds, fabulous savings on vacation packages and a chance to enjoy the peace and tranquility afforded with an oceanfront view.
Hilton Head Island also offers unlimited activities for history buffs. The resort community has a storied past dating from Colonial times and visitors are welcome to explore all of its historical attractions.
Fall is one of the best times to save on a Hawaiian beach vacation. While September signals the end of the summer season for the continental United States, Hawaii is not affected by major temperature changes. The weather in the 50th state is outstanding year-round, which makes Hawaii a top beach destination, especially during the autumn months when the summer crowds have faded. The slight dip in visitors in the fall causes prices to ease up a bit in the Aloha State. September, October and November traditionally bring with them reduced airfare to the islands, as well as discounted hotel and car rental rates.
If you are looking to take a vacation after the school year starts, then head to Amelia Island. That's where you will experience a fall beach getaway you'll remember for years to come. Nestled in the northeastern-most corner of Florida, just off the coast of Jacksonville, Amelia Island is bursting with charm. Balmy temperatures, breathtaking sunsets and miles and miles of pristine beaches dictate the island's laid-back lifestyle. Life on Amelia Island revolves around the water. Popular pastimes include scenic horseback rides along the beach, kayak adventures, river cruises, and ferry tours. Amelia Island also boasts a charming historic downtown section, which features quaint little shops and waterfront restaurants. Amelia Island is also home to a variety of hotels that cater to travelers with varying price points. Frugal visitors can stay at the affordable Residence Inn while those looking for more luxurious accommodations can book a room at the Ritz-Carlton, which boasts high-end suites and its own spa.
Bald Head Island, North Carolina, is a hidden gem when it comes to beach resort communities. The island is home to 14 miles of awe-inspiring white sand beaches, rich wildlife, quaint maritime shops, and the state's oldest lighthouse, Old Baldy. The historical attraction provides breathtaking views of the island from its top. What's more, during the fall, Bald Head Island becomes a retreat for overstressed travelers. Cars are not permitted on the island, so the chaos of everyday life seems to melt away upon arrival. If you are looking to de-stress in a hassle-free environment, then head to Bald Head to enjoy a fall beach vacation you won't soon forget.
|
0.936576 |
Constructive learning algorithms have been proved to be powerful methods for training feedforward neural networks. In this paper, we present an adaptive network topology with constructive learning algorithm. It consists of SOM and RBF networks as a basic network and a cluster network respectively. The SOM network performs unsupervised learning to locate SOM output cells at suitable position in the input space. And also the weight vectors belonging to its output cells are transmitted to the hidden cells in the RBF network as the centers of RBF activation functions. As a result, the one to one correspondence relationship is produced between the output cells of SOM and the hidden cells of RBF network. The RBF network performs supervised training using delta rule. The output errors of the RBF network are used to determine where to insert a new SOM cell according to a rule. This also makes it possible to let the RBF cells grow while the SOM output cells increasing, until a performance criterion is fulfilled or until a desired network size is obtained. The simulation results for the two-spirals benchmark are shown that the proposed adaptive network structure can get good performance and generalization results.
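A minimal numpy sketch of the mechanism the abstract describes: SOM weight vectors serve as RBF centers, the output layer is trained with the delta rule, and a new cell is inserted where the output error is largest. The function names and the Gaussian width handling are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rbf_forward(x, centers, widths, w_out):
    # Hidden activations: Gaussian RBFs centered on the SOM weight vectors
    d2 = ((x[None, :] - centers) ** 2).sum(axis=1)
    h = np.exp(-d2 / (2.0 * widths ** 2))
    return h @ w_out, h

def delta_update(w_out, h, y_pred, y_true, lr=0.1):
    # Delta rule on the output weights (supervised phase)
    return w_out + lr * np.outer(h, y_true - y_pred)

def insert_cell(centers, widths, w_out, x_new):
    # Constructive step: grow a new SOM cell (and its one-to-one RBF
    # hidden cell) at the input responsible for the largest error
    centers = np.vstack([centers, x_new])
    widths = np.append(widths, 1.0)
    w_out = np.vstack([w_out, np.zeros((1, w_out.shape[1]))])
    return centers, widths, w_out
```

One delta-rule step provably shrinks the output error for the presented input, and `insert_cell` keeps the one-to-one correspondence between SOM output cells and RBF hidden cells as the network grows.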
|
0.999996 |
Why does eating a protein meal late at night prevent morning sickness?
When you are very early in the pregnancy, your body has not yet adjusted to "eating for two" and may not be regulating your blood sugars as well as usual. While you are sleeping, and not consuming any foods, the fetus is still "eating" your stored calories. This makes your blood sugar dip low by morning, which typically causes nausea (even when not in pregnancy).
Proteins are metabolized slower in the body than are carbohydrates, meaning that the calories last longer when you eat proteins and keep your blood sugars more stable when fasting at night. That is why if you eat a protein snack at night (cheese, meat, peanut butter, etc.), it can help your blood sugar stay more stabilized until breakfast.
Often, eating one or two plain soda crackers (aka saltines) the very first thing upon waking will help prevent morning sickness, because they are a rapidly absorbed source of carbohydrates that can quickly raise blood sugar levels. Once that has relieved any nausea from the low blood sugar, you then eat a nutritious breakfast (one that includes proteins to steady the blood sugars) to help you start the day with fewer morning sickness symptoms.
Not all women experience morning sickness, which is another sign that it is likely associated with how a woman's body metabolizes calories and stabilizes blood sugar levels. That is why, in some women whose bodies do that less well than others (even when not pregnant), "morning sickness" can happen any time of the day and not just in the morning. It also can happen either only at the beginning of pregnancy or continue longer, even up until delivery in some unfortunate women.
The best all around plan for avoiding morning sickness symptoms (or nausea due to hypoglycemia at any time of the day), is to eat small portions but more frequently. These frequent mini-meals should include all types of foods, i.e, proteins, complex carbohydrates (like fruits and vegetables), fats, whole grains, etc., and avoid as much as possible the simple carbohydrates (like saltines, most breads that are not whole grain, sweets, and other "starches" like corn, white rice and white potatoes). Five or more smaller meals is much better than only three big ones. Some women switch to eating every two or three hours all day with protein at night to keep the blood sugars even.
Ask your obstetrician to suggest the most appropriate diet for you while your baby develops.
|
0.999921 |
1. Develop a compliance inspection checklist for safety and health based on a company's processes.
2. Conduct an evaluation of your organisation using the developed checklist. (Your evaluation should include pictures showing the findings, descriptions of findings, status of conformance, and corrective/preventive actions (if applicable), action officers, and due dates.)
|
0.978305 |
People rushing to jump in the helicopter.
Two journalists have reported from the mountain where thousands of ethnic Iraqi Yazidis are at risk of starving as they flee extremist ISIS militants.
CNN senior international correspondent Ivan Watson accompanied an Iraqi Air Force helicopter for "an emergency aid delivery turned rescue mission."
Right as the helicopter landed on Mt. Sinjar and began unloading supplies, people tried to jump in to be saved. About 20 civilians were rescued, but thousands remain as others die around them.
"I've been doing this job for more than 10 years - I have never seen a situation as desperate as this, as emotionally charged as this, and I've never seen a rescue mission as ad hoc and improvised as this," Watson said.
|
0.999999 |
I’ve heard about a new trend that involves drinking “raw water.” What is it, and is it good for me?
“Raw” or “live” water is not treated to remove or reduce minerals, ions, particulate, or, importantly, potential pathogenic bacteria and parasites. Raw water is found in rivers and natural springs, and is being sold at premium prices by some companies, according to published reports.
According to those recent published reports, selling raw water is part of a natural foods or health trend. The idea is that because this water still retains its natural mineral concentration, comes directly from earth springs, is unfiltered, and is untreated with chemicals such as chlorine and fluoride, it is a healthy alternative.
However, the Centers for Disease Control and Prevention says that while water flowing in streams and rivers of the backcountry might look pure, it can still be contaminated with bacteria, viruses, parasites, and chemical contaminants. The agency warns that drinking contaminated water can increase the risk of developing certain infectious diseases caused by pathogens such as Cryptosporidium, Giardia, Shigella, and norovirus, in addition to others.
In fact, there were some 42 waterborne disease outbreaks associated with drinking water in the United States from 2013 to 2014, resulting in at least 1,006 cases of illness, 124 hospitalizations, and 13 deaths, according to the CDC. Ohio was among the impacted states.
The biggest culprit was Legionella, which was associated with 57 percent of these outbreaks and all of the deaths, the CDC said.
One way to deter such waterborne disease outbreaks is through effective water treatment and regulations, which can protect public drinking water supplies in the United States, the CDC said.
And those consumers who want to take additional precautions when camping, hiking, or traveling to regions without strict water treatment programs can find additional information on filtration, boiling, and other practices from the CDC website.
It’s important to note that just because something is labeled natural, unprocessed, or raw, doesn’t automatically mean that it is healthy or better for you. And while there are some foods and drinks that are safe to consume raw, water is not one of them.
|
0.999861 |
TL;DR: You can now tweak all parameters of effect settings on the BandLab web Mix Editor. Adjust delay times, reverb amount, EQ, gain and much more.
On every BandLab platform there are 37 different preset effects for guitar, bass and vocals. We’ve designed each of these individual presets with a carefully balanced combination of predetermined settings, chosen by our in-house sound and audio engineers.
What’s that? You want more!! SURE. More control, more possibilities, more TONE. Now, with our web Mix Editor you can tweak each individual effect parameter on any of the presets. If you love Hi-Vox effect for vocals but want even more reverb? Head to the web platform and dial it up. You now have the flexibility to 100% customise your sound.
Once you get comfortable with the presets and want to branch out, you can use this new feature to create and build your own effects chain from scratch. Select your effects, arrange them in any order and turn any on, or off. The possibilities are huge.
Every effect configuration or setting that you create on BandLab web is instantly accessible from any device – web, iOS and Android, making these effects truly cross platform. Create your desired sound on the web first and have them ready to use on your mobile phone.
BandLab now designs and manufactures our own hardware. Turn your phone into an amp by using the Link in combination with all of our great preset effects on BandLab for iOS and Android. Get the best sound in mobile recording with the BandLab Link Analog.
Making life simpler and putting everything you need in one place is our M.O. at BandLab. Now in your BandLab Android main menu, or by using Force Touch on iOS you can instantly access our integrated QR code scanner. Scan away!
Stay in tune, no matter your genre or style. BandLab for iOS and Android comes with a completely free and easy to use built in Tuner. Figure out what note you are singing, or flip between multiple pre-loaded tunings, from open G to ukulele mode.
|
0.999978 |
No, I don't think you can say syntactic sugar doesn't give any business value.
Can you write the shorter form more quickly and maintain it easier? There's your business value.
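A concrete illustration of that point (Python chosen arbitrarily): the sugared list comprehension says exactly the same thing as the explicit loop, but it is quicker to write and leaves less surface area for maintenance mistakes.

```python
# Desugared form: four lines, a mutable accumulator, room for typos
squares = []
for n in range(10):
    if n % 2 == 0:
        squares.append(n * n)

# Sugared form: one line, intent stated directly
squares_sugar = [n * n for n in range(10) if n % 2 == 0]

assert squares == squares_sugar  # identical result, cheaper to maintain
```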
|
0.95901 |
Beyond what limits does the medium of light lead you in your artistic work?
The medium of light is extremely important in my artistic practice; it facilitates the transposition of my drawings across different sites enabling me to interact, modify and reconfigure them in new ways. Using over-head projectors, I project acetate scans of my drawings into spaces and work back into them site-specifically, obscuring sections of the projection bed and using various media to alter the overall effect of light. I am interested in the way in which the drawings are transformed through their magnification and transition from paper to room, in particular how these changes affect the development of the expanded drawings. I think of this process as a cybernetic system whereby the work develops through a process of human-material engagement and feedback. Liberating the drawings from the constraints of their original media, i.e. paper, ink, pro-marker and graphite, by translating them into the medium of light, affords them a new kind of agency. No longer limited by material constraints, the projected image can affect any object it encounters. I am interested in the productive effects of difference created through these material encounters, specifically diffraction phenomena resulting from the layering of drawings on the projection bed, and the interference of objects within the projected image. Over-head projectors afford immediacy in alterations of these images. By covering sections of the projection bed, or layering acetate prints, the projected image is immediately changed, with digital projectors the alteration of the image is always mediated by a computer.
|
0.999995 |
Work with various monitoring devices in the surveillance control application with tools to set up a camera's field of view, lens focal length, manage the CCTV data storage and calculate bandwidth. Importing plans of the premises from AutoCAD, Google Earth or Visio is possible.
IP Video System Design Tool is a software program that offers the tools to efficiently design complex video surveillance systems.
It is quite easy to install, though the installation does take a few minutes to complete.
It has a simple interface; it lacks theme customization options, but it impresses with the way the menu and buttons are laid out in the main window.
The program responds quickly to user actions, but at the same time it uses a lot of memory, especially at startup.
The features include tools that can calculate the focal length of a camera, viewing angles and pixel density.
Also, it helps you lower the costs of your security system and improve efficiency by finding the right places to place the cameras.
In addition to those, IP Video System Design Tool offers the possibility to import AutoCAD drawings and load floor plans from JPEG, PNG or TIFF files. Export options to formats such as PDFs, MS Word, Excel are also available.
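For a sense of the geometry such a tool computes, here is a minimal pinhole-camera sketch of the field-of-view and pixel-density calculation. It illustrates the underlying math only, not the program's actual code, and the example sensor and lens values are arbitrary.

```python
def fov_width(sensor_width_mm, focal_length_mm, distance_m):
    # Pinhole model: horizontal scene width covered at a given distance
    return sensor_width_mm * distance_m / focal_length_mm

def pixel_density(h_resolution_px, scene_width_m):
    # Pixels per metre across the scene at that distance
    return h_resolution_px / scene_width_m

# e.g. a 4.8 mm wide sensor, 4 mm lens, target 10 m away, 1920 px frame
w = fov_width(4.8, 4.0, 10.0)    # 12.0 m of scene width
d = pixel_density(1920, w)       # 160 px/m at the target
```

Designers typically compare the resulting pixel density against a threshold (identification needs far more px/m than mere detection) when choosing camera placement and lenses.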
To sum it up, IP Video System Design Tool is a useful tool for individuals who seek to design surveillance systems in a professional manner, but it's a tool which requires some previous experience in this area, and which may be a bit too expensive.
|
0.985549 |
I currently have a 10 month old mixed dog who we suspect has some Retriever, Shepherd and Chow Chow in him.
We got him from a friend of a family friend when he was 7 months old, because they just didn't have enough time to care for him.
We don't know much of anything before that, but it is likely that he was from a puppy mill/backyard breeder. He was already neutered before we got him and the past owners never mentioned any problems, though we never asked.
We had a few problems with him: leash-pulling and mouthing, but we have already fixed that.
And, in the beginning, he was fine with other dogs, except for the small dogs. He would and still does bully smaller dogs.
But now, he seems to also like to bully and attack dogs his size, not just the small ones, and only calms down around dogs that are bigger and stronger than him. We have to be extremely careful when we see another dog when we are walking him and keep a tight leash just in case.
There was one time when I stood to the side, well out of the path of a little dog and its owner, and my dog's leash suddenly snapped; he raced for the little dog, pushed him over, and bit him, going for the throat. I pulled him back, apologized profusely to the owner, and carried my dog home, since the leash had completely snapped.
We used to go to dog parks too just so he can meet dogs bigger than him that can calm him down, but we just simply don't really trust him around other dogs anymore.
I guess I'm looking for advice.
We did go to a dog trainer for his leash training, mouthing and his food aggression, but the dog trainer can't seem to fix or give us any working ideas for his aggression.
He is perfectly fine with humans but he just snaps when he sees other dogs his size or smaller.
|
0.944757 |
Hearing the news that one of my favorite gaming studios was closing was disheartening. As a gamer from the era when adventure games truly gained ground, with Sierra On-Line releasing its series of ground-breaking titles and more, the Telltale approach to game design was a joy to see. They captured the essence of the early "choose-your-own-adventure" gamebook concept, where readers made different choices as they read to progress through the story with different results.
These books came around the time of Dungeons & Dragons (D & D) and also other text-based adventure games such as Hitchhiker's Guide to the Galaxy found on the Commodore 128 as well, with written narrative leading to choice-driven gameplay. In D & D, the games focused on choice that drove the role-playing game (RPG) forward for players with advancing their player and evolving throughout the game, whereas the text-based adventure focused on the narrative and solving puzzles to progress to the end.
Adventure gaming became very popular when it came to the first graphical titles from Sierra and had a run for quite some time but ended up losing some traction when other new genres and consoles gained popularity. Starting on PC, the adventure genre did not move to the console until quite some time in the future as the interface for consoles was not as translatable to the way adventure games played. They originally started as text typing on the keyboard and then moved to the use of the mouse, both lacking from consoles.
Telltale Games came to the rescue. With their start in 2004, ex-LucasArts members created the studio and focused on dedication to the Adventure Game genre and focused on "games using intellectual properties with small but dedicated fan bases" according to Wikipedia. This, along with the use of the Telltale Tool, established as a company focusing on "adventure games with a novel episodic release schedule over digital distribution".
My first exposure to Telltale games was with Jurassic Park and Back to the Future, but I did not start paying real attention until a friend recommended I play Wolf Among Us. This is the game that grabbed my attention with its unique art style and creative twist on fairytale lore. Everything about the game was uniquely different from what I had seen previously in the adventure-gaming genre.
Episodic gaming was novel and I found it to be quite enjoyable. In my younger days I had plenty of time to play lengthy game titles and long adventures, but eventually life became busier and time more scarce. Episodes with decent amount of content for a low-price point were attractive. I could enjoy about 4+ hours of gameplay on an initial playthrough, keeping track of my choices, and play through again to get another few hours of enjoyment.
Each episode ended on a cliffhanger, which kept me going. However, this is the one area where I have a complaint: many of the titles were left with cliffhangers and no closure. It felt as if I was left hanging as a consumer while the company kept moving on to other titles. It made sense to have a number of titles, but finishing some would have been good for the fans. This was seen with the fan requests for a sequel to Wolf Among Us, which never got made despite the large demand.
From a game designer's perspective, I found the episodic concept influential on my own design work on our adventure title, Wry Reveries, currently in production at our studio, Live in the Game, LLC. The game is not split into episodes; rather, each title in the Reveries series will feature a different theme and a different protagonist from history. Telltale did something similar across their intellectual properties (IPs): each title tells a completely different story, yet all share design elements characteristic of Telltale's style and the tool they use. Our series will likewise share elements across its titles.
Having Telltale pick up the torch and take Adventure games to another level was truly something to see and I always told my friends that if they release anything I drop what I am doing and go play it. I hope that the concepts and new innovations they introduced will not go away but that these will continue to evolve as we and others look to pick up now and continue the work.
I am definitely saddened though that we will not see what happens to Bigby from Wolf Among Us or continue the tale of Batman from his series. There will also not be any closure on what happens to Marty McFly from Back to the Future with its major cliffhanger and lastly we will not continue Tales from the Borderlands (in the top three for me from their games and it was so good it pushed me to get into the original games).
Adventure games will go on as we continue to develop them and push for their continued popularity as a genre that has existed for some time now and has so much more to go.
Best wishes to my friend Tommy Leeds, a talented artist who graduated from the University of Advancing Technology and worked at Telltale Games. Also, best wishes from Live in the Game, LLC to all affected and the industry to get back on their feet and keep doing epic work!
|
0.999966 |
Bangkok is considered to be one of the world's top tourist hotspots.
Bangkok, known in Thai as Krung Thep Maha Nakhon or Krung Thep for short, is the capital, largest urban area, and primate city of Thailand. It was a small trading post at the mouth of the Chao Phraya River during the Ayutthaya Kingdom and came to the forefront of Thailand when it was made the capital city in 1768 after the burning of Ayutthaya. Bangkok has been the political, social, and economic center not only of Thailand but of much of South East Asia and Indochina as well. Its influence in the arts, politics, fashion, education, and entertainment, as well as its role as a business, financial, and cultural center of Asia, has given Bangkok the status of a global city. The city's mix of Thai, Chinese, Indian, Buddhist, Muslim, and Western cultures, combined with the driving force of the Thai economy, makes it increasingly attractive to foreigners for both business and pleasure, and has made the city one of the world's top tourist destinations.

The Bangkok special administrative area covers 1,568.7 km² (606 sq mi), making it the 68th largest province in Thailand. Much of the area is considered the city of Bangkok, making it one of the largest cities in the world. Bangkok is known for its large green sections within the city centre, including the large forest park between Yannawa and Samut Prakan, which covers an area of over 50 km².

Bangkok is considered to be one of the world's top tourist hotspots. According to Travel and Leisure magazine, it was Asia's best tourist destination and the third best in the world in 2006, and the overall best city in the world in 2008. Bangkok is Thailand's major tourist gateway, which means that the majority of foreign tourists arrive in Bangkok. The city boasts some of the country's most visited historical venues, such as the Grand Palace, Wat Pho, and Wat Arun.
|
0.961347 |
airship-armada - An orchestrator for managing a collection of Kubernetes Helm charts.
Diff summary (test pod handling):

- When a chart is re-tested, the new test pod should supersede the previous chart's test pods, and test pods should not be included in wait operations.
- The `test.enabled` key is the preferred way to control a chart's tests; the boolean `test` key is deprecated but still honored.
- Tests enabled globally via the API/CLI take precedence over a chart's own `test` or `test.enabled` value and interact with the chart group's `test_enabled` key; the global cleanup flag takes precedence over a chart's `test.cleanup` value.
- At least one release must be either installed or updated, and a TODO notes that the chart object should be validated up front in the `armada validate` flow.
- From the documentation: "The preferred way to achieve test cleanup is to add a pre-upgrade delete action on the test pod. If cleanup is ``true`` this prevents being able to debug a test in the event of failure." The pre-upgrade delete approach ought to work (https://github.com/helm/helm/issues/3279).
0.999927 |
We've all heard the old saying, "An apple a day keeps the doctor away", but what other natural remedies can also keep the doctor away? Keep reading to find out my favorites! Personally, I trust these much more than medications or traditional medicine. I've tried just about all of the items on this list, and I've done my best to include information from both alternative health and wellness sites and more traditional sources such as WebMD. Always consult your doctor; nothing in this post constitutes medical information or advice. Remember, before we had medicine, we had food!
Turmeric: an anti-inflammatory for the colon and for menstrual cramps; helps reduce blood sugar, helps prevent Alzheimer's, helps heal wounds, and is a cancer fighter. Tip: try it in your eggs!
"Most Americans are magnesium deficient, which helps to account for high rates of heart disease, stroke, osteoporosis, arthritis and joint pain, digestive maladies, stress-related illnesses, chronic fatigue, etc."
Tips: Buy these in the cleanest, most raw and natural form. Stay away from extra processing and added ingredients like salt and sugar; check the ingredients list and look for the word "raw". Use the app Fooducate on your smartphone to make sure you're making the best choice when buying food. For fruits, try buying them frozen for optimal taste and freshness at a cheaper price. For oils and supplements, check the country of origin, go to a health food store you trust, and develop a rapport with the owner and employees; they are an excellent source of information. The good thing is that a lot of these products are probably already in your cabinets, and they're really not that hard to start incorporating more frequently into your daily life.

Sources:
Dr. Oz
Prevention Magazine
My Mother & Aunt
http://www.mindbodygreen.com/0-6997/10-Reasons-to-Eat-Tahini.html
http://www.whfoods.com/genpage.php?tname=foodspice&dbid=72
http://www.webmd.com/balance/bee-pollen-benefits-and-side-effects
http://www.doctoroz.com/blog/lindsey-duncan-nd-cn/honey-s-unknown-benefits
http://blog.doctoroz.com/oz-experts/herb-of-the-month-neem
http://foodforbreastcancer.com/foods/lima-beans
http://www.livestrong.com/article/249915-what-are-the-benefits-of-eating-whole-mint-leaves/
http://blog.doctoroz.com/oz-experts/turmeric-golden-spice-better-healt
http://www.livestrong.com/article/307518-psyllium-husk-health-benefits/
http://www.epsomsaltcouncil.org/health/
http://health.howstuffworks.com/skin-care/beauty/skin-treatments/care2-health-benefits-magnesium.htm

Always consult your doctor. Nothing in this post constitutes medical information or advice.
Have a natural remedy I didn't include? Leave me a comment and let me know!
How many of you have friends or family members who have type 1, type 2, or gestational diabetes? In the US, 23.6 million children and adults, or 7.8% of the population, have diabetes.
Diabetes was first described by Sushruta, an ancient Indian physician, in the text known as the Sushruta Samhita. Around 600 B.C., Sushruta connected obesity and a sedentary lifestyle with diabetes. In the modern day, diabetes is still associated with excess weight, but it can also be hereditary. For instance, type 1 diabetes, also called insulin-dependent diabetes, is not gender, race, or even age specific.
About 0.22% of people under 20 years of age have diabetes; roughly one in every 400 to 600 children and adolescents has type 1 diabetes.
Among people 20 years and older, about 10.7%, or 23.5 million people nationwide, have diabetes.
Many may or may not know what this disease does and what effect it has, or that many women also get it during pregnancy. Diabetes is a disorder of the metabolism. Glucose is the main source of fuel for the body; we get glucose from the food we eat as it is broken down. After we digest that food, glucose passes into the bloodstream and is used by cells for energy. Another part of this process is that insulin must be present so the glucose can enter the cells. The part of the body that produces insulin is the pancreas; it produces the right amount of insulin so the glucose can move from the blood into the cells. The problem that people with diabetes have is that their pancreas produces too little insulin or none at all. Having diabetes is not just having complications in the pancreas; many other diseases can develop alongside diabetes, such as heart disease, kidney failure, and nerve damage.
Which is the best Australian bank for startups to work with?
Which is the best Australian bank for startups to work with? Citibank is good for handling foreign currency and has a good interest rate, BUT, far out, they are horrendous to deal with. Simple things like getting a new CC or changing address are almost impossible. Any advice? Or are the big banks largely the same?
If you are a startup, Commbank is pretty good and has a lot of good features. However I would highly recommend setting up a Hong Kong company and getting an HSBC multi-currency account set up with them. It makes life a lot easier when it comes to international payments.
While most people think having a Hong Kong company is just a tax dodge, reality is for anyone doing business in multiple countries it simply makes sense. It only costs about ~$2,000 to have setup (both company and account) and can make life easier.
Word of warning though, if you just setup a Hong Kong company and live & work from Australia only you will be questioned by the tax office.
If good old fashioned fast and personalised banking service is important then you can't go past Bendigo Bank. In all my dealings you feel less like a number and stuff is just sorted very promptly. Very approachable and efficient operation compared to the majors.
Three World Trade Center (also known as 175 Greenwich Street) is a skyscraper under construction as part of the rebuilding of the World Trade Center site in Lower Manhattan, New York City. The project lies on the east side of Greenwich Street, across the street from the previous location of the Twin Towers, which were destroyed during the September 11 attacks in 2001. Pritzker Prize-winning architect Richard Rogers, of Rogers Stirk Harbour + Partners, was awarded the contract to design the building, which will have a height of 1,079 ft (329 m) tall with 80 stories. As of October 2013[update], its below-grade foundations are complete, and several floors have been built above street level. The building is slated to be completed in 2018.
The Marriott World Trade Center was a 22-story steel-framed hotel building with 825 rooms. It had a roof height of 73.7 m (242 ft) and was designed by Skidmore, Owings & Merrill. Its structural engineer was Leslie E. Robertson Associates with Tishman Construction serving as the main contractor. Construction began in 1979. It opened in July 1981 as the Vista International Hotel and was located at 3 World Trade Center in New York City.
The Vista International Hotel was the first hotel to open in Lower Manhattan since 1836. The hotel was originally owned by the Port Authority of New York and New Jersey and KUO Hotels of Korea with Hilton International acting as management agent. It was sold in 1995 to Host Marriott Corporation.
The hotel was connected to the North and South Towers, and many went through the hotel to get to the Twin Towers. The hotel had a few establishments including The American Harvest Restaurant, The Greenhouse Cafe, Tall Ships Bar & Grill, a store called Times Square Gifts, The Russia House Restaurant and a Grayline New York Tours Bus ticket counter, a gym that was the largest of any hotel in New York at the time, and a hair salon named Olga's. The hotel also had 26,000 square feet (2,400 m2) of meeting space on the entire 3rd floor along with The New Amsterdam Ballroom on the main floor, and was considered a four-diamond hotel by AAA.
On February 26, 1993, the hotel was seriously damaged as a result of the World Trade Center bombing. Terrorists took a Ryder truck loaded with 1,500 pounds (682 kilograms) of explosives and parked it in the One World Trade Center parking garage, below the hotel's ballroom. At 12:18pm (Eastern Time), an explosion destroyed or seriously damaged the lower and sub levels of the World Trade Center complex. After extensive repairs, the hotel reopened in November 1994.
On September 11, 2001, the hotel was at full capacity, and had over 1,000 registered guests. In addition, the National Association for Business Economics (NABE) was holding its yearly conference at the hotel.
When American Airlines Flight 11 crashed into the North Tower (1 WTC), the landing gear fell into the roof of the Marriott hotel. There were many eyewitness accounts from firefighters who went up the stairs in the Marriott hotel to the second floor. Firefighters used the lobby as the staging area, and were also in the hotel to evacuate rooms with guests who were believed to be still inside the hotel. Firefighters also reported bodies on the roof from the people who had jumped or fallen from the burning towers.
The collapse of the South Tower (2 WTC) destroyed the center of the hotel, and the collapse of the North Tower destroyed the rest of the hotel aside from a small section that was farthest from the North Tower. Fourteen people who had been trying to evacuate the partially destroyed hotel after the first collapse managed to survive the second collapse in this small section. The section of the hotel that had managed to survive the collapse of the Twin Towers had been upgraded after the 1993 bombing.
As a result of the collapse of the Twin Towers, the hotel was destroyed. Only the south part of three stories of the building were still standing, all of which were gutted. In the remnants of the lobby, picture frames with the pictures were still hanging on the walls. Approximately 40 people died in the hotel, including two hotel employees and many firefighters who were using the hotel as a staging ground. In January 2002, the remnants of the hotel were completely dismantled.
The building and its survivors were featured in the television special documentary film Hotel Ground Zero, which premiered on September 11, 2009 on the History Channel.
3 World Trade Center was originally planned for a podium of seven stories for trading floors, with a 73-floor office tower rising from it. The diamond braces initially planned for the front and rear faces of the building have been dropped from the design and the tower is to be built without them. However, the diagonal bracing on the sides will remain. The four spires in the original design gave the tower a pinnacle height of 1,240 feet (378 m), meaning it would have become the third-tallest building in New York City by pinnacle height, but the spires were later removed from the design, thus reducing the height by about 88 feet (approximately 26.8 m). The total floor space of the building is planned to include 2,000,000 sq ft (190,000 m2) of office and retail space. The building's groundbreaking took place in January 2008, and at that time it was scheduled to be completed by 2014. The structural engineer for the building is WSP. In November 2010, three PureCell fuel cells were delivered at the World Trade Center site which together will provide about 30% of the tower’s power.
On May 11, 2009, it was announced that the Port Authority of New York and New Jersey was seeking to reduce 175 Greenwich Street to a "stump" building of approximately four stories. The overall plan, which also called for a similar reduction in height for 2 World Trade Center and the cancellation of 5 World Trade Center, would halve the amount of office space available in the fully reconstructed World Trade Center to 5,000,000 sq ft (460,000 m2). The agency cited the recession and disagreements with developer Larry Silverstein as reasons for the proposed reduction. Silverstein opposed the plan, filing a notice of dispute on July 7, 2009. By doing so, the development firm began a two-week period during which renegotiated settlements and a binding arbitration regarding the construction of the four World Trade Center towers could be made. Silverstein Properties, which has paid the Port Authority over US$2.75 billion in financing, noted the organization’s inability to meet construction obligations in its official complaint.
On October 2, 2012, the large advertising and media company GroupM was confirmed by several sources to be in the preliminary negotiations to anchor 3 World Trade Center in a deal that would allow construction to begin on the planned 80-story office tower. The lease would be about 550,000 square feet in size, a large enough commitment to qualify the project for up to $600 million in public benefits in the form of a mix of equity and loan guarantees from the city, state and Port Authority.
By the beginning of 2012, Silverstein Properties and the Port Authority of New York and New Jersey reached an agreement to only build 3 World Trade Center to seven stories, unless tenants can be found to fund the building. According to a March 2010 agreement between Silverstein Properties and the Port Authority, Silverstein Properties must find tenants to lease 400,000 square feet of the building and it must raise US$300 million in private financing in order to receive additional funding. If Silverstein Properties meets those triggers, then the Port Authority, City of New York, and New York State will provide an additional US$390 million towards the tower's completion. Silverstein Properties also needs to provide financing for the remaining cost of the tower before it can be completed. The existing foundation of the tower was built entirely with insurance proceeds, and construction cannot proceed further until Silverstein Properties meets the requirements. The agreement also implemented a "cash trap" to make sure that public investments are paid off before Silverstein Properties makes any profits from the tower.
The tower portion of 3 World Trade Center will be fully built after meeting the requirements. Silverstein Properties is optimistic that leases will be signed. A spokesperson speaking on the issue of rebuilding the site commented: "Three WTC should be up by 2015; although; we do have one milestone to hit: We need to get a 400,000-square-foot tenant in order to get a financing backstop that makes sure we will complete that building. So, that’s a question mark, and it’s a major priority of Silverstein Properties". The Port Authority believes that the 2010 agreement will allow market demand to drive the construction of the towers and help to limit public investment since the Port Authority has other projects that need attention in the region. In late June 2012, David Zalesne, president of Owen Steel, confirmed that construction of the tower will continue and that Owen Steel has been selected to provide the structural steel for the building. In July 2013 it was reported that GroupM had signed on with Silverstein Properties as the building's anchor tenant, which would allow the tower to resume construction in late 2013.
A subsidy, which would have doubled the loan given to the construction of 3 World Trade Center, was postponed by the Port Authority of New York and New Jersey in April 2014 until June 2014.
By February 2012, the ground floor concrete was almost done and the lower podium had reached the 5th floor. On May 18, 2012 a construction update was released which stated that the superstructure work was continuing, and that forms, rebar, and concrete placement work was also continuing. Additionally, utilities for the site were being installed. The construction agency expected the lower podium to reach a capped height of 7 stories by September 2012. As of August 2013, construction on the podium is nearly complete, and work on the rest of the podium and the tower will continue in January 2014, when the tower crane is returned to the site.
On June 25, 2014, the Port Authority of New York and New Jersey reached an agreement with Larry Silverstein to finance the completion of 3 World Trade Center, and construction of the tower resumed. The tower crane has been returned and the new anticipated completion is late 2017.
During 2015 the design was modified and the height reduced slightly from 1,168 feet (356 m) to its current height of 1,079 feet. As of June 2015[update], 3 World Trade Center's core has risen to the 19th floor and steel to the 14th floor. On June 28, 2015, one more tower crane has been built on the site. Another crane arrived in July, bringing the total to four cranes. On May 20, 2016, the tower's concrete core reached the symbolic height of 1000 feet, thus officially reaching supertall status and exceeding the roof height of neighboring 4 World Trade Center.
Construction of Three World Trade Center as of May 2012. A portion of the National 9/11 Memorial's South Pool can be seen in the foreground.
Construction of Three World Trade Center as of April 2014.
↑ 3 World Trade Center at Structurae.
This page was last modified on 30 May 2016, at 20:40.
I started my period almost a year ago. I am getting stretch marks on my thighs and back. My stomach is getting flatter. What's happening?
Stress and stretching of connective tissue (the supporting tissue of the skin) cause damage to collagen and elastin, which leads to scarring and hence causes stretch marks. The stress and stretching in this case are due to the pubertal growth spurts you are experiencing.
Attempting to make a cranberry galette with some leftover puff pastry. Do I need to worry about the bottom getting too soggy?
My plan was to cook the cranberries down until they get jammy, cool, fold puff pastry around and then bake - then I started to wonder if something the consistency of a thick cranberry sauce would run through the bottom of a puff pastry.
Don't worry about over-thinking it! Always best to plan out a baking project a bit before you dive in! Cranberries can hold a lot of juice, especially if you're using frozen - but cooking them should take care of that no problem! The main thing that will prevent it from getting soggy is baking it at a high enough temperature - at least 400 and even 425 degrees Fahrenheit would be great - you'll end up with a crispy, golden base for sure! Galettes are pretty easy to get the bottom browned because they have direct contact with the base of the pan. Putting it on the bottom rack of your oven helps too!
Recent experimental evidence has suggested that an increase in cardiovascular activity resulting from physical exercise can improve cognitive function. We have demonstrated that a short duration of cardiovascular activity can improve executive function. In order to examine the underlying neurophysiological mechanisms related to this improvement in cognitive function, we employed optical imaging of hemodynamic activity as a measure of oxygen consumption and oxygen demand in the prefrontal cortex (PFC), so that we can identify whether the improved cognitive function is associated with increased oxygen perfusion to the brain. It is well known that the PFC is involved in the executive function of decision-making that resolves conflicts. We use the conventional Stroop Test to identify the improvement of executive function in relation to increased cardiovascular activity. We hypothesize that the improvement in cognitive performance is contributed to in part by increased oxygen perfusion to the PFC. In order to verify the neurobiological mechanisms underlying the improvement in cognitive performance, we use near-infrared spectroscopy (NIRS) to measure not just the neural activation patterns, but also oxygen delivery to the cortical tissue versus oxygen consumption by the tissue. Metabolic activities of neurons (such as neural firing and synaptic activation) are correlated with oxygen consumption (oxygen demand) by the neural tissue. On the other hand, oxygen delivery to the tissue is correlated with oxygen perfusion to the brain (such as vasodilation), which may or may not be related to neural activity or cognitive processing. Thus, it is important to identify whether the improved cognitive functions are related to oxygen delivery and/or oxygen demand. We have demonstrated that optical imaging using NIRS can detect neurohemodynamic responses as well as neural activation and deactivation patterns in the motor cortex [3, 4].
We have found that under highly demanding neural processing conditions, oxygen delivery may not keep up with oxygen demand. This results in a transient reduction of oxygen supply when oxygen extraction by the neural tissue exceeds the oxygen available from hemoglobin molecules, as revealed by the hemodynamic response using NIRS. This reduction of oxy-hemoglobin supply to the neural tissue could be misinterpreted as neural deactivation by fMRI (functional magnetic resonance imaging), which only detects the deoxy-hemoglobin level, whereas fNIRS (functional NIRS) can detect both oxy- and deoxy-hemoglobin levels. The ability of fNIRS to detect both oxy- and deoxy-hemoglobin allows us to distinguish between neural deactivation (revealed by a decrease in oxygen demand, i.e. the metabolic rate of the neural tissue) and a reduction of oxygen supply (revealed by the oxy-hemoglobin level) caused by an increase in oxygen extraction (revealed by the deoxy-hemoglobin level). Based on this additional information provided by fNIRS, we recorded similar hemodynamic responses in the PFC in this study. Human subjects performed stationary bicycle exercise to increase cardiovascular activity. Optical imaging of the prefrontal cortex was used to measure the hemodynamic response before and after exercise using NIRS recordings. The performance of executive function was measured by the Stroop Test before and after exercise. Comparing cognitive performance before and after exercise, the results showed that oxy-hemoglobin delivery in the prefrontal cortex increases with the improvement of executive function. The processing speed of resolving conflicts in the Stroop Test also improved by at least 10% after 30 min of exercise, which is correlated with the hemodynamic response in the PFC. This shows that the increase in oxygen delivery is associated with the improvement in executive function in the PFC.
This suggests that an increase in cardiovascular activity can increase oxygen perfusion to the brain, which can improve cognitive processing speed in a highly demanding cognitive task (such as the Stroop Test, which requires conflict-resolution processing) whose transient oxygen demand is not met by the oxygen supply under resting conditions (without the extra oxygen perfusion and vasodilation caused by exercise).
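Since the argument hinges on separating oxygen delivery (oxy-hemoglobin, HbO) from oxygen extraction (deoxy-hemoglobin, HbR), a minimal sketch of how dual-wavelength NIRS measurements are converted into those two signals may help. The modified Beer-Lambert law below is a standard approach in the fNIRS literature, not a method quoted from this study; the extinction coefficients, path length, and differential pathlength factor are illustrative assumptions.

```python
# Sketch: converting dual-wavelength NIRS optical-density changes into
# oxy- (HbO) and deoxy-hemoglobin (HbR) concentration changes via the
# modified Beer-Lambert law:  dOD(lambda) = (e_HbO*dHbO + e_HbR*dHbR) * L * DPF.
# All numeric constants below are illustrative assumptions.

def hemoglobin_changes(d_od_760, d_od_850,
                       L=3.0,      # source-detector separation, cm (assumed)
                       dpf=6.0):   # differential pathlength factor (assumed)
    # Illustrative extinction coefficients: rows are wavelengths (760, 850 nm),
    # columns are (HbO, HbR). HbR dominates at 760 nm, HbO at 850 nm.
    e = [[0.39, 1.67],
         [1.06, 0.78]]
    a, b = e[0][0] * L * dpf, e[0][1] * L * dpf
    c, d = e[1][0] * L * dpf, e[1][1] * L * dpf
    # Invert the 2x2 system for (dHbO, dHbR).
    det = a * d - b * c
    d_hbo = (d * d_od_760 - b * d_od_850) / det
    d_hbr = (a * d_od_850 - c * d_od_760) / det
    return d_hbo, d_hbr

# A rise in HbO together with a smaller fall in HbR is the classic signature
# of oxygen delivery outpacing oxygen extraction.
d_hbo, d_hbr = hemoglobin_changes(d_od_760=-0.01, d_od_850=0.02)
```

With the sign convention above, a positive `d_hbo` and negative `d_hbr` would correspond to the increased perfusion pattern the passage describes.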
This is a post that I have been struggling to write recently. I am not sure how to put my thoughts down without sounding condescending or rude. I think what I am about to say is really important, and it is one of the steps that we should be taking to help build our children up, especially our little girls, to be strong and powerful people.
We are always talking about how we want the best for our children, and sometimes it's the smallest details that can affect them the most. Right now there are many campaigns that have been designed for us -- women and men -- to look at how we treat our young girls and the expectations we have of them as a society -- check out Always "Like a Girl" campaign.
So here is what has been on my mind lately: It is how we and society are teaching our children -- especially girls -- how a princess behaves, when, in reality, the behaviours that we are teaching them are far from the truth.
Over the last few years, I have noticed an increase in clothing directed towards girls that talks about being a princess -- like a t-shirt hat says: "Only a true princess could get away with what I do." A lot of the clothing that is available for young girls does not have the most positive attitude, and makes it seem as if it is okay to do things that would usually be qualified as mean or rude. Sure, one might argue that it's just a t-shirt, but something that small can also have a great impact on a person.
I think that what we teach our children -- through books, television, and the way we talk -- about princesses makes them think that they only wear beautiful ball gowns, live in castles, and get to do whatever they want; but there are many qualities in real princesses that we should instill in our children above all else.
If you were to ask any princess out in the real world, she would tell you how much work goes into being a princess. Not only are you supposed to be well groomed and well dressed -- there's nothing wrong with being clean -- but you must smile at everyone and be happy and cheerful towards any one you meet. These interactions could happen at any time, and even if a princess is upset about something she must still put a smile on her face and converse with other people.
A princess is also always willing to help others and takes on several philanthropic endeavors to show the people that she cares about them and the society that they live in. A lot of photographs that we see of princesses are in these kinds of situations, and they always have a smile on their face and they seem happy with the support and care that they are giving, especially if it is something that is close to their hearts.
Yes, princesses wear nice clothes. Yes, princesses live in castles and lead extravagant lives, taking vacations in warm places and shopping in expensive stores; but this is not all that we should be teaching our children. Princesses are kind to others, slow to anger, show compassion and are, therefore, eager to help people. These, and many others, are the values that I'd rather instill in my children.
**Don't get me started on the slogans and phrases on clothing for boys. Some of it actually disgusts me and I refuse to buy anything of the sort for my son. He will not be taught to have any of the negative thoughts printed on those t-shirts, especially the ones that are geared towards women -- like "Chicks dig me".
How to say "France is to the south of England" in French?
1. La France est au sud de l'Angleterre.
What's the purpose of your visit to France?
I'd like to know how to send money to France.
Can I get Pregnant with Gastritis?
The best way to improve your chances of conceiving with this condition is to make sure that you are being properly treated for it before you begin trying to conceive. You should also review any medications that you are taking with your doctor in order to ensure that they are safe for a pregnancy.
To help relieve the symptoms of gastritis, one of the best things that you can do is to lower the amount of stress that you are feeling. You should also look at the food that you are eating and make sure that you are making healthy choices. Living a healthy lifestyle is also beneficial for women with gastritis.
There are several treatments that you can choose to conduct at home, but it is always best to talk to your doctor before you begin any of them, especially if you are trying to become pregnant. Your doctor will be able to determine the best treatment option for you and your needs.
The first practical form of random-access memory was the Williams tube starting in 1947. It stored data as electrically charged spots on the face of a cathode ray tube. Since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. The capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller, faster, and more power-efficient than using individual vacuum tube latches. Developed at the University of Manchester in England, the Williams tube provided the medium on which the first electronically stored-memory program was implemented in the Manchester Small-Scale Experimental Machine (SSEM) computer, which first successfully ran a program on 21 June 1948. In fact, rather than the Williams tube memory being designed for the SSEM, the SSEM was a testbed to demonstrate the reliability of the memory.
Magnetic core memory was the standard form of memory system until displaced by solid-state memory in integrated circuits, starting in the early 1970s. Robert H. Dennard invented dynamic random-access memory (DRAM) in 1968; this allowed replacement of a 4 or 6-transistor latch circuit by a single transistor for each memory bit, greatly increasing memory density at the cost of volatility. Data was stored in the tiny capacitance of each transistor, and had to be periodically refreshed every few milliseconds before the charge could leak away.
The two widely used forms of modern RAM are static RAM (SRAM) and dynamic RAM (DRAM). In SRAM, a bit of data is stored using the state of a six transistor memory cell. This form of RAM is more expensive to produce, but is generally faster and requires less dynamic power than DRAM. In modern computers, SRAM is often used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair, which together comprise a DRAM memory cell. The capacitor holds a high or low charge (1 or 0, respectively), and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers.
Both static and dynamic RAM are considered volatile, as their state is lost or reset when power is removed from the system. By contrast, read-only memory (ROM) stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. Writeable variants of ROM (such as EEPROM and flash memory) share properties of both ROM and RAM, enabling data to persist without power and to be updated without requiring special equipment. These persistent forms of semiconductor ROM include USB flash drives, memory cards for cameras and portable devices, etc. ECC memory (which can be either SRAM or DRAM) includes special circuitry to detect and/or correct random faults (memory errors) in the stored data, using parity bits or error correction code.
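The "parity bits or error correction code" in ECC memory mentioned above work by storing redundant check bits whose recomputed values pinpoint a flipped bit. As a hedged illustration of the idea only, here is a toy Hamming(7,4) encoder and single-error corrector; real ECC DIMMs use wider SECDED codes (commonly 72 bits protecting 64), but the mechanism is the same.

```python
# Sketch: single-error correction with a Hamming(7,4) code.
# Code-word layout (1-based positions): [p1, p2, d1, p3, d2, d3, d4].

def hamming74_encode(d):
    """d: list of 4 data bits -> list of 7 code bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Correct at most one flipped bit and return the 4 data bits."""
    c = list(c)
    # Recompute the three parity checks; together they spell out the
    # (1-based) position of any single-bit error.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4
    if syndrome:                  # non-zero -> flip the offending bit back
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                      # simulate a random memory fault
assert hamming74_correct(code) == [1, 0, 1, 1]
```

Flipping any single one of the seven stored bits, data or parity, yields a unique non-zero syndrome, which is why one flipped bit is always recoverable.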
A second type, DRAM, is based around a capacitor. Charging and discharging this capacitor can store a '1' or a '0' in the cell. However, this capacitor will slowly leak away, and must be refreshed periodically. Because of this refresh process, DRAM uses more power, but it can achieve greater storage densities and lower unit costs compared to SRAM.
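The refresh requirement described above can be sketched with a toy charge-leak model. The decay constant and threshold below are illustrative assumptions rather than real device parameters, though 64 ms is a commonly cited DRAM refresh period.

```python
import math

# Sketch: why DRAM needs periodic refresh. A stored '1' is charge on a
# tiny capacitor that leaks away; the sense circuitry reads '1' only
# while the voltage stays above a threshold.

def charge_after(t_ms, v0=1.0, tau_ms=120.0):
    """Exponential charge leak: V(t) = V0 * exp(-t / tau). tau is illustrative."""
    return v0 * math.exp(-t_ms / tau_ms)

def read_bit(v, threshold=0.5):
    return 1 if v >= threshold else 0

# Without refresh, a '1' written at t=0 has leaked below threshold by 200 ms...
assert read_bit(charge_after(200.0)) == 0

# ...but refreshing every 64 ms rewrites full charge before the bit
# can decay past the threshold, so the value survives indefinitely.
v = 1.0
for _ in range(10):            # ten refresh cycles
    v = charge_after(64.0)     # decay during one refresh interval
    assert read_bit(v) == 1
    v = 1.0                    # refresh: sense and rewrite full charge
```

This also shows where DRAM's refresh power cost comes from: every row must be read and rewritten on a fixed schedule whether or not the processor ever touches it.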
As a common example, the BIOS in typical personal computers often has an option called “use shadow BIOS” or similar. When enabled, functions relying on data from the BIOS’s ROM will instead use DRAM locations (most can also toggle shadowing of video card ROM or other ROM sections). Depending on the system, this may not result in increased performance, and may cause incompatibilities. For example, some hardware may be inaccessible to the operating system if shadow RAM is used. On some systems the benefit may be hypothetical because the BIOS is not used after booting in favor of direct hardware access. Free memory is reduced by the size of the shadowed ROMs.
Several new types of non-volatile RAM, which will preserve data while powered down, are under development. The technologies used include carbon nanotubes and approaches utilizing Tunnel magnetoresistance. Amongst the 1st generation MRAM, a 128 KiB (128 × 210 bytes) chip was manufactured with 0.18 µm technology in the summer of 2003. In June 2004, Infineon Technologies unveiled a 16 MiB (16 × 220 bytes) prototype again based on 0.18 µm technology. There are two 2nd generation techniques currently in development: thermal-assisted switching (TAS) which is being developed by Crocus Technology, and spin-transfer torque (STT) on which Crocus, Hynix, IBM, and several other companies are working. Nantero built a functioning carbon nanotube memory prototype 10 GiB (10 × 230 bytes) array in 2004. Whether some of these technologies will be able to eventually take a significant market share from either DRAM, SRAM, or flash-memory technology, however, remains to be seen.
The RC delays in signal transmission were also noted in Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures which projects a maximum of 12.5% average annual CPU performance improvement between 2000 and 2014.
A different concept is the processor-memory performance gap, which can be addressed by 3D computer chips that reduce the distance between the logic and memory aspects that are further apart in a 2D chip. Memory subsystem design requires a focus on the gap, which is widening over time. The main method of bridging the gap is the use of caches: small amounts of high-speed memory that house recent operations and instructions near the processor, speeding up the execution of those operations or instructions in cases where they are called upon frequently. Multiple levels of caching have been developed to deal with the widening of the gap, and the performance of high-speed modern computers is reliant on evolving caching techniques. These can prevent the loss of processor performance, since less time is spent waiting for main memory. There can be up to a 53% difference between the growth in speed of processor speeds and the lagging speed of main memory access.
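The benefit of caching described above depends heavily on access patterns. The toy direct-mapped cache model below (sizes are illustrative, not real hardware values) counts hits for a sequential traversal versus a large power-of-two stride over the same amount of data:

```python
# Sketch: a toy direct-mapped cache model illustrating why access
# patterns determine how well a cache bridges the processor-memory gap.

class DirectMappedCache:
    def __init__(self, num_lines=64, words_per_line=8):
        self.num_lines = num_lines
        self.words_per_line = words_per_line
        self.tags = [None] * num_lines   # one stored tag per cache line
        self.hits = 0
        self.accesses = 0

    def access(self, addr):
        self.accesses += 1
        line_no = addr // self.words_per_line
        index = line_no % self.num_lines  # which cache line this maps to
        tag = line_no // self.num_lines   # identifies the memory block
        if self.tags[index] == tag:
            self.hits += 1                # data already cached
        else:
            self.tags[index] = tag        # miss: fetch the whole line

    def hit_rate(self):
        return self.hits / self.accesses

# Sequential traversal reuses each fetched line for 7 of its 8 words...
seq = DirectMappedCache()
for addr in range(4096):
    seq.access(addr)

# ...while a large power-of-two stride maps every access to the same
# cache index with a different tag, so nothing is ever reused.
strided = DirectMappedCache()
for i in range(4096):
    strided.access((i * 512) % (1 << 20))
```

With these parameters the sequential pattern hits on 87.5% of accesses and the strided pattern on none, which is the gap that multi-level caching and cache-aware data layout try to close.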
↑ Gallagher, Sean. "Memory that never forgets: non-volatile DIMMs hit the market". Ars Technica.
↑ "IBM Archives -- FAQ's for Products and Services". ibm.com.
↑ Williams, F.C.; Kilburn, T.; Tootill, G.C. (Feb 1951), "Universal High-Speed Digital Computers: A Small-Scale Experimental Machine", Proc. IEE, 98 (61): 13–28, doi:10.1049/pi-2.1951.0004.
↑ "Shadow Ram". Retrieved 2007-07-24.
↑ "Tower invests in Crocus, tips MRAM foundry deal". EETimes.
↑ The term was coined in .
↑ "Platform 2015: Intel® Processor and Platform Evolution for the Next Decade" (PDF). March 2, 2005.
↑ Rainer Waser (2012). Nanoelectronics and Information Technology. John Wiley & Sons. p. 790. Retrieved March 31, 2014.
↑ Chris Jesshope and Colin Egan (2006). Advances in Computer Systems Architecture: 11th Asia-Pacific Conference, ACSAC 2006, Shanghai, China, September 6-8, 2006, Proceedings. Springer. p. 109. Retrieved March 31, 2014.
↑ Ahmed Amine Jerraya and Wayne Wolf (2005). Multiprocessor Systems-on-chips. Morgan Kaufmann. pp. 90–91. Retrieved March 31, 2014.
↑ Impact of Advances in Computing and Communications Technologies on Chemical Science and Technology. National Academy Press. 1999. p. 110. Retrieved March 31, 2014.
↑ Celso C. Ribeiro and Simone L. Martins (2004). Experimental and Efficient Algorithms: Third International Workshop, WEA 2004, Angra Dos Reis, Brazil, May 25-28, 2004, Proceedings, Volume 3. Springer. p. 529. Retrieved March 31, 2014.
Buying or selling a company or division? We draft and review documents to protect your interests in the deal.
An acquisition, also known as a takeover, buyout, or "merger", is the purchase of one company (the 'target') by another. An acquisition or merger may be private or public, depending on whether the acquiree or merging company is listed on public markets. An acquisition may be friendly or hostile. Hostile acquisitions can, and often do, turn friendly in the end, as the acquirer secures the endorsement of the transaction from the board of the acquiree company.
The acquisition process can be very complex, with many aspects and elements influencing the outcome. There are also a variety of structures used in securing control over the assets of a company, which have different tax and regulatory implications. Proper due diligence and legal documentation is critical to the process.
There are four laws of thermodynamics. But instead of having a first, second, third and fourth law we have a zeroth law to start with!
define fundamental physical quantities (temperature, energy, and entropy) that characterise thermodynamic systems at thermal equilibrium.
forbid certain phenomena (such as perpetual motion).
If two systems are in thermal equilibrium with a third system, they are in thermal equilibrium with each other.
This law helps define the notion of temperature. When two systems are in thermal equilibrium there is no net heat transfer; since heat flows from hotter to colder regions, they must be at the same temperature.
When energy passes, as work, as heat, or with matter, into or out from a system, the system's internal energy changes in accord with the law of conservation of energy.
This means that perpetual motion machines (of the first kind) are impossible.
In a natural thermodynamic process, the sum of the entropies of the interacting thermodynamic systems increases.
Equivalently, perpetual motion machines (of the second kind) are impossible.
The second law of thermodynamics indicates that natural processes lead towards spatial homogeneity of matter and energy, and especially of temperature. This movement implies that processes are irreversible.
The existence of a quantity called the entropy of a thermodynamic system is implied: a measure of chaos or complexity, the 'mixing up' of materials, a movement away from order, away from separation into distinct kinds, and away from regimentation.
When two isolated systems in separate but nearby regions of space, each in thermodynamic equilibrium with itself but not with each other, are then allowed to interact, they will eventually reach a mutual thermodynamic equilibrium.
The sum of the entropies of the initially isolated systems is always less than or equal to the total entropy of the final combination. It is rather like having the contents of two drawers merged into one big drawer: more confusion, and more difficulty locating things... less order!
Equality can only occur if the two original systems have all their respective intensive variables (temperature, pressure etc.) equal; then the final system also has the same values.
This statement of the second law is founded on the assumption, that in classical thermodynamics, the entropy of a system is defined only when it has reached internal thermodynamic equilibrium (thermodynamic equilibrium with itself).
The second law is applicable to a wide variety of processes, reversible and irreversible. All natural processes are irreversible. Reversible processes do not occur in nature. A good example of irreversibility is in the transfer of heat by conduction or radiation. It was known long before the discovery of the notion of entropy that when two bodies initially of different temperatures come into thermal connection, the heat always flows from the hotter body to the colder one.
Entropy is given the symbol 'S'.
With the exception of non-crystalline solids (glasses), the entropy of a system at absolute zero is typically close to zero, and is equal to Boltzmann's constant times the logarithm of the multiplicity (the number) of the quantum ground states.
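The laws sketched above can be written compactly. These are standard textbook formulations rather than equations taken from the text itself:

```latex
% Zeroth law: thermal equilibrium is transitive, which lets us define an
% empirical temperature T such that equilibrium between A and B <=> T_A = T_B.

% First law: energy conservation for a closed system
\Delta U = Q - W
% (U internal energy, Q heat added to the system, W work done by the system)

% Second law (Clausius inequality): for any process,
dS \ge \frac{\delta Q}{T},
% with equality only for reversible processes; hence for an isolated system
\Delta S \ge 0

% Third law (Boltzmann form): entropy counts quantum ground states,
S = k_B \ln \Omega,
% so S \to 0 as T \to 0 for a perfect crystal, whose ground state is unique
% (\Omega = 1, and \ln 1 = 0).
```

The Boltzmann form makes the glass exception above concrete: a glass retains many frozen-in configurations at absolute zero, so Ω > 1 and its entropy stays above zero.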
Almost nothing is known about the life of Dionysius before his election as Bishop of Milan, which took place in 349. Dionysius was probably of Greek origin. He was a friend of the Roman Emperor Constantius II before being elected Bishop of Milan.
The historical period in which Dionysius lived was marked by clashes between Arians and the Orthodox supporters of the faith of the Synod of Nicaea. Even Emperor Constantius II favored Semi-Arian doctrines. In 355 Pope Liberius requested the Emperor to convene a synod in Milan, which was held in the newly erected Basilica Nova (or Basilica Maior or St. Tecla). The synod, however, did not fulfil the hopes of the Pope, due to the overwhelming number of Arian bishops present and the enforced absence of the champion of the Nicaean faith, Eusebius of Vercelli; it was thus deemed a Robber Synod. Initially Dionysius seemed ready to follow the Arians in condemning the Archbishop Athanasius of Alexandria, who was accused not of heresy but of lese-majesty against the Emperor. With the arrival of Eusebius the situation changed. Eusebius requested an immediate subscription to the Nicaean faith by the bishops. Eusebius, the Papal legate Lucifer of Cagliari, and Dionysius signed, but the Arian bishop Valens of Mursia violently shredded the act of faith.
Constantius, unaccustomed to independence on the part of the bishops, moved the synod to his palace and grievously maltreated Eusebius, Lucifer and Dionysius, who were all three exiled (Pope Liberius was also exiled shortly afterwards). Dionysius was exiled to Caesarea in Cappadocia and replaced as bishop of Milan by the Arian Auxentius, appointed by the Emperor.
Dionysius died in exile between 360 and 362. According to a late tradition (a 12th-century manuscript of Epistle 197 by Basil the Great that may contain forged elements about Dionysius), Ambrose sent a delegation in 375 or 376 to recover the relics of Dionysius, which were kept by Saint Basil the Great, and to translate them to Milan. Even if it is historically difficult to determine exactly when the relics of Dionysius were translated to Milan, a primary source states that they were already in Milan in 744. A shrine dedicated to Dionysius was erected in Milan near Porta Venezia, but it was destroyed in 1549, rebuilt nearby, and finally demolished in 1783 to leave space for the new gardens. The relics of Dionysius were translated to the Cathedral of Milan in 1532, where they remain today.
You will agree with me about the best qualities of HR Management Software in lahore-karachi-islamabad-pakistan when I say: being a human resources leader is one of the most difficult and rewarding positions in the world. Whether you are just starting out or have long experience in a particular field, knowing what qualities effective software should have is an integral part of your success. The best HR Management Software in lahore-karachi-islamabad-pakistan has the ability to influence and motivate, and allows HR managers to contribute to the effectiveness and success of the organization or group of which they are members. Good-quality software is about acquiring good skills: it allows you to be a "role model" for a team in any environment. HR Software in lahore-karachi-islamabad-pakistan has the leadership qualities needed to manage HR roles.
HR Management Software in lahore-karachi-islamabad-pakistan can help your HR department to achieve its goals and objectives by streamlining administrative processes and taking advantage of their tasks, such as hiring, training and maintaining their workforce. In general, the solution helps improve the productivity and efficiency of human resources.
Trust is the main basis on which HR leadership grows. HR Management Software in lahore-karachi-islamabad-pakistan brings responsibility that requires precision, and supports making decisions in an appropriate manner. To build trust, you need to surround yourself with the right people.
HR Solutions in lahore-karachi-islamabad-pakistan involve making critical decisions that require courage. One of the ways to build courage is to act and be willing to start. A strong human resources leader does not wait for someone else to do something; leaders are action-oriented. To develop courage, we have to take on certain difficult tasks, try new things, and solve some of the difficult problems that other employees leave unresolved.
Trust is required in the workplace every day so that each employee can feel they are there for the right reasons, working toward common goals with Performance Management Software in lahore-karachi-islamabad-pakistan. To create a trusting work environment for employees, a leader mainly needs three elements of trust: give trust first, communicate effectively, and show authenticity. Depending on the skill level of a team, the leader involves team members in decision making.
To develop self-discipline, a great challenge is to eliminate any tendency to make excuses. Leadership discipline is less about punishing and rewarding others and more about self-control, inner calm, and outward resolve. HR Management Software in lahore-karachi-islamabad-pakistan plays an important role in a leader's ability to be self-disciplined.
Time management is the process of planning and organizing how to divide your time between specific activities. With Attendance Software in lahore-karachi-islamabad-pakistan and good organizational skills, leaders can model powerful techniques for their team members, making everyone more productive. The key factors in sound time management are awareness of the vision, the establishment of specific and realistic objectives, the establishment and communication of priorities, and the discipline to follow the plan.
HR System in lahore-karachi-islamabad-pakistan handles confidential information, which is not an easy task. You must put the interests of your followers before your own. Honesty plays an important role and is one of the best leadership qualities that every leader must have.
Becoming a good leader does not mean that you always have to be in the spotlight. An important trait of an HRIS Software in lahore-karachi-islamabad-pakistan leader is listening to ideas, suggestions, and comments from other people and building on them. Use HR software to inform and challenge your employees. By creating a trusting work environment with great opportunities, you will immediately increase the initiative and self-esteem of your employees. HR Management Software in lahore-karachi-islamabad-pakistan is one of its kind, offering a "Performance Management" module where you can measure the performance, goals, and objectives of your employees.
Pathfinder is often conflated with D&D, 4e I think, but it isn't actually named "Dungeons & Dragons." What is Pathfinder's relationship to D&D, and how does it fit in with the various D&D editions?
Is Pathfinder D&D? No, but kind of.
Pathfinder is published by Paizo, who does not own the rights to Dungeons & Dragons. Those rights are owned by Wizards of the Coast, who currently publish D&D 5e. But Pathfinder is a spin-off of Dungeons & Dragons, specifically D&D 3.5e, and is extremely similar to that game in many ways. Playing a game of “3.PF,” using material from both rulesets, is quite possible and popular.
How and why this came to be, however, requires a history lesson.
Wizards of the Coast released the foundation of the D&D 3.5e ruleset under the Open Game License, which was very, well, open about how much of it could be re-used. This led to a huge explosion of third-party content for 3.5e, and 3.5e lived a rather long life as these things go. There was a ton of material for it, and the people playing it had gotten very used to dealing with, or even attached to, its myriad problems.
At this time, Paizo published the Dragon and Dungeon magazines under license from Wizards of the Coast. They also published a fair amount of their own adventures for the 3.5e ruleset, under the Open Game License.
Then Wizards of the Coast released D&D 4e. The fourth edition of the game was a massive departure from previous editions of D&D, and was extremely controversial. Many players had no desire to switch to 4e, and continued playing 3.5e. Some even decided they didn’t like Wizards of the Coast’s D&D altogether, and went back to older editions of D&D. And many did play 4e, and there are some hints that 4e did relatively well in bringing new players into the game.
So D&D had fractured its fanbase, and there were a lot of people playing D&D but not playing the edition of D&D that Wizards of the Coast was actually publishing.
At the same time, Wizards of the Coast got a lot more possessive with its property. They did not renew their Dragon and Dungeon licenses with Paizo, again publishing those in-house, and they did not release 4e under the OGL. Instead, they created and used the Game System License, which is vastly more restrictive than the OGL was. This made it nearly impossible to develop 3rd-party content for D&D 4e.
This put Paizo into a very tight spot: with their magazine revenue taken away, the latest edition of D&D hostile to third-party content, and the edition of D&D that was their bread and butter, 3.5e, aging and slowly dying, they had a serious problem.
Pathfinder was Paizo’s answer. It was based on the open game content from D&D 3.5e, and pushed hard to capture the market of people who refused to play 4e and were sticking with 3.5e. By promising 3.5e-but-better, and by delivering fresh content, Paizo could keep 3.5e alive, and therefore continue to make adventure material and maintain that revenue stream.
This worked. Through a phenomenal hype machine, Paizo could offer a game system that amounted to a few houserules applied to 3.5e, call it “better,” and capture a pretty large market share. It cost them relatively little to do it, and it allowed them to continue to publish their adventure modules, which were their real focus and interest. Extensions to the Pathfinder system (classes, feats, and so on) were enabled through low-paid freelancers with minimum editorial oversight, and allowed Paizo to keep Pathfinder “alive” through a blistering release schedule, again at relatively low cost. And so they could focus on selling adventures.
Paizo wasn’t the only company to notice the fractured D&D fanbase. Numerous other games, labeled “OSR,” came out to try to capture those players who ditched not only 4e, but 3.5e as well. So in addition to Pathfinder being a spin-off of 3.5e, there are other games that are spin-offs of or inspired by older editions, from before Wizards of the Coast bought the rights to D&D.
The story of Pathfinder suggests that D&D 4e was a complete disaster; that’s certainly how many Pathfinder fans view it, and probably also Paizo themselves. However, it’s not really accurate: D&D 4e did well enough, and again did particularly well with new players, relative to Pathfinder mostly focusing on old players who didn’t like 4e.
As a game designer, I will also say that D&D 4e is easily one of the most tightly-designed RPGs in existence. Other editions of D&D aren’t even playing in the same league. That’s not everything, not by a long shot, but a lot of the criticisms leveled against it were based on perception from quick reads of the book, and not from actual play.
But there were also a number of large problems. Some of it was poor planning, some of it was pressure from Hasbro (who owns Wizards of the Coast) to cut costs on D&D, at least part of it was a murder-suicide (!) by one of the lead developers of a 4e virtual tabletop, killing not only himself and his wife, but also that project and a lot of Wizards’ plans for 4e.
In the end, 4e ended up losing support from Wizards of the Coast, and even if you liked it sooner or later the fact that there was new Pathfinder content and no new D&D content meant a lot more people switched to Pathfinder.
D&D 5e was an attempt to recapture the player base that had been lost to Pathfinder and the OSR. It undid a whole lot of changes made by D&D 4e. It embraced an “old-school” playstyle to a large extent. It also put a huge emphasis on being simple, easy to learn and play, and welcoming to new players, which is not something any of the other games mentioned here can say.
And it has been extremely successful.
No numbers are known here, but recent years have been some of D&D’s best—and that goes all the way back to the original editions in the 70s and 80s that became an international phenomenon.
Paizo is currently testing their second edition of Pathfinder. It is surprisingly 4e-like in a number of ways, which is somewhat ironic considering that Pathfinder was written as a response to 4e in the first place. (It also has a number of 5e-like features, and of course a whole lot of it is unique.) Perhaps most notably, it’s a large departure from PF 1e, greatly changing the game in a large number of ways.
This has been controversial. They’re still in playtesting (and uncharacteristically open to feedback, from what I can tell), but there is a risk here for Paizo that they will follow in 4e’s footsteps—clearly not their goal. Time will have to tell on that, though.
Technically: No, Pathfinder is not D&D.
Colloquially: Yes, many would consider Pathfinder a form of D&D, as Pathfinder (1e) is a direct descendant of D&D 3.5, so interconnected that many refer to it as "3.75" or "3.P". Be aware that purists on both sides may disagree with this answer, as it is a bit of a contentious issue, and a lot of people feel a need to point out the technical perspective that no, PF is not D&D.
The reason many would consider it D&D and label it as "3.75" or similar is fairly simple. Upon the creation of 4e, many considered the diverging nature of the game to be very far from what "felt" like "D&D", and at the same time, Wizards of the Coast dropped a lot of work that they had done with Paizo (such as the at the time age-old Dragon Magazine), "tightening up" their own hold on the IP and the constituent parts of the overall franchise.
This led Paizo, spurred on by a lot of people disappointed in the development of 4e, to create Pathfinder, which many fans came to consider "more D&D than D&D".
But again, no, not technically D&D. Different company, different name, different owners.
Pathfinder is a continuation or offshoot of the 2003 D&D 3.5 ruleset, by the former publishers of Dragon magazine.
Wizards of the Coast (WotC) released Dungeons & Dragons third edition (3e) in 2000, having bought out D&D's original publishers, TSR, in 1997. Wizards continued to publish TSR's official Dragon magazine and Dungeon magazine until September 2002, when they leased the magazine rights to a third party named Paizo. Rumour at the time was that WotC's corporate bosses insisted magazines were no longer profitable, something Paizo would go on to disprove.
In March 2003, Paizo's Dungeon magazine (Issue #97) published the first module in the Shackled City Adventure Path, a new concept where the magazine released a continuing series of adventure modules, one per issue, which took the characters from levels 1 to 20.
Shortly after this, in July 2003, WotC released a revised D&D third edition (3.5). Paizo would go on to release two more Adventure Paths: Age of Worms, and Savage Tide.
In April 2007, WotC announced that it was discontinuing its lease on the magazines. The public officially learned why in August 2007, with WotC's announcement of D&D 4th edition. The final print issues of both Dragon magazine and Dungeon magazine appeared in September 2007.
Many people still wanted to play D&D 3.5 adventures, but the terms of Paizo's lease forbade them from simply continuing to publish their own magazines under another name. However, it did not forbid them from publishing a continuing series of adventure modules compatible with D&D 3.5. Paizo also held significant goodwill with former Dungeon magazine freelance writers and artists.
The result was the Pathfinder adventure path series, beginning in August 2007 with Rise of the Runelords Adventure Path. Initially, this series used the D&D 3.5 ruleset, which remained popular even after WotC's release of D&D 4th edition in June 2008.
However, now that D&D 3.5 was out of print, Paizo decided to publish their own variant of D&D 3.5, known as the Pathfinder Roleplaying Game or Pathfinder RPG. This was possible thanks to the Open Game License, which allows third parties to republish much of the D&D 3.5 rules.
This ruleset, Pathfinder RPG, is now usually what people are talking about when they say Pathfinder. It was released in August 2009, along with the first part of the fifth Pathfinder Adventure Path, Council of Thieves, which was the first to use the Pathfinder RPG instead of D&D 3.5.
D&D 3.5 and Pathfinder are overall quite similar, except that Pathfinder has numerous changes or improvements. It has been nicknamed "D&D 3.75" as a result. It's sufficiently compatible that you can use D&D 3.5 material with Pathfinder RPG with little or no conversion. Many fans see it as the rightful successor to D&D 3.5, and at some points prior to the release of D&D 5th edition in 2014, Pathfinder actually outsold D&D.
Paizo currently plans to release a second edition of Pathfinder RPG in 2019.
Which is to say, it is effectively the same game in most important regards, but it's not the "name brand" version. It has its own world, its own pantheon of gods, and many other divergences from D&D 3.5. It is often referred to as D&D 3.75 because it improved on the previous edition of D&D at around the same time that 4th edition came out.
All in all it's much closer to D&D than say Dungeon World or Tunnels & Trolls.