diff --git "a/README.md" "b/README.md"
--- "a/README.md"
+++ "b/README.md"
@@ -189,7 +189,7 @@ model-index:
split: test
metrics:
- type: accuracy
- value: 0.009170806266717615
+ value: 0.6908674054260604
name: Accuracy
---
@@ -221,58 +221,58 @@ The model has been trained using an efficient few-shot learning technique that i
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
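
The few-shot technique referenced above (SetFit) works by turning a handful of labeled texts into many contrastive sentence pairs before fine-tuning the embedding model. A minimal, library-free sketch of that pair-generation step is below; the function name and the toy examples are illustrative only and are not part of the `setfit` API.

```python
from itertools import combinations

def generate_contrastive_pairs(examples):
    """Given (text, label) tuples, build the positive/negative sentence
    pairs that SetFit-style contrastive fine-tuning trains on: pairs
    sharing a label get target 1.0, pairs with different labels get 0.0."""
    pairs = []
    for (text_a, label_a), (text_b, label_b) in combinations(examples, 2):
        target = 1.0 if label_a == label_b else 0.0
        pairs.append((text_a, text_b, target))
    return pairs

# Toy inputs echoing two of the card's label ids (17 and 0).
examples = [
    ("ice core drilling on the mer de glace", 17),
    ("glacier stratigraphy from borehole cores", 17),
    ("acoustic attenuation in a lossy medium", 0),
]
pairs = generate_contrastive_pairs(examples)
# 3 texts -> C(3,2) = 3 pairs: one positive (both label 17), two negative.
```

In the real library this pair set is what the sentence-transformer body is fine-tuned on before a lightweight classification head is fit on the resulting embeddings.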
### Model Labels
-| Label | Examples |
-|:------|:---------|
-| 5 |
- 'its civilizations before the species is able to develop the technology to communicate with other intelligent species intelligent alien species have not developed advanced technologies it may be that while alien species with intelligence exist they are primitive or have not reached the level of technological advancement necessary to communicate along with nonintelligent life such civilizations would also be very difficult to detect a trip using conventional rockets would take hundreds of thousands of years to reach the nearest starsto skeptics the fact that in the history of life on the earth only one species has developed a civilization to the point of being capable of spaceflight and radio technology lends more credence to the idea that technologically advanced civilizations are rare in the universeanother hypothesis in this category is the water world hypothesis according to author and scientist david brin it turns out that our earth skates the very inner edge of our suns continuously habitable — or goldilocks — zone and earth may be anomalous it may be that because we are so close to our sun we have an anomalously oxygenrich atmosphere and we have anomalously little ocean for a water world in other words 32 percent continental mass may be high among water worlds brin continues in which case the evolution of creatures like us with hands and fire and all that sort of thing may be rare in the galaxy in which case when we do build starships and head out there perhaps well find lots and lots of life worlds but theyre all like polynesia well find lots and lots of intelligent lifeforms out there but theyre all dolphins whales squids who could never build their own starships what a perfect universe for us to be in because nobody would be able to boss us around and wed get to be the voyagers the star trek people the starship builders the policemen and so on it is the nature of intelligent life to destroy itself this is the argument that technological civilizations may 
usually or invariably destroy themselves before or shortly after developing radio or spaceflight technology the astrophysicist sebastian von hoerner stated that the progress of science and technology on earth was driven by two factors — the struggle for domination and the desire for an easy life the former potentially leads to complete destruction while the latter may lead to biological or mental degeneration possible means of annihilation via major global issues where global interconnectedness actually makes humanity more vulnerable than resilient are many including war accidental environmental contamination or damage the development of biotechnology synthetic life like mirror life resource depletion climate change or poorlydesigned artificial intelligence this general theme is explored both in fiction and in'
- '##s in the range 50 to 500 micrometers of average density 20 gcm3 with porosity about 40 the total influx rate of meteoritic sites of most idps captured in the earths stratosphere range between 1 and 3 gcm3 with an average density at about 20 gcm3other specific dust properties in circumstellar dust astronomers have found molecular signatures of co silicon carbide amorphous silicate polycyclic aromatic hydrocarbons water ice and polyformaldehyde among others in the diffuse interstellar medium there is evidence for silicate and carbon grains cometary dust is generally different with overlap from asteroidal dust asteroidal dust resembles carbonaceous chondritic meteorites cometary dust resembles interstellar grains which can include silicates polycyclic aromatic hydrocarbons and water ice in september 2020 evidence was presented of solidstate water in the interstellar medium and particularly of water ice mixed with silicate grains in cosmic dust grains the large grains in interstellar space are probably complex with refractory cores that condensed within stellar outflows topped by layers acquired during incursions into cold dense interstellar clouds that cyclic process of growth and destruction outside of the clouds has been modeled to demonstrate that the cores live much longer than the average lifetime of dust mass those cores mostly start with silicate particles condensing in the atmospheres of cool oxygenrich redgiants and carbon grains condensing in the atmospheres of cool carbon stars red giants have evolved or altered off the main sequence and have entered the giant phase of their evolution and are the major source of refractory dust grain cores in galaxies those refractory cores are also called stardust section above which is a scientific term for the small fraction of cosmic dust that condensed thermally within stellar gases as they were ejected from the stars several percent of refractory grain cores have condensed within expanding interiors of supernovae 
a type of cosmic decompression chamber meteoriticists who study refractory stardust extracted from meteorites often call it presolar grains but that within meteorites is only a small fraction of all presolar dust stardust condenses within the stars via considerably different condensation chemistry than that of the bulk of cosmic dust which accretes cold onto preexisting dust in dark molecular clouds of the galaxy those molecular clouds are very cold typically less than 50k so that ices of many kinds may accrete onto grains in cases only to be destroyed or split apart by'
- '##sequilibrium in the geochemical cycle which would point to a reaction happening more or less often than it should a disequilibrium such as this could be interpreted as an indication of life a biosignature must be able to last for long enough so that a probe telescope or human can be able to detect it a consequence of a biological organisms use of metabolic reactions for energy is the production of metabolic waste in addition the structure of an organism can be preserved as a fossil and we know that some fossils on earth are as old as 35 billion years these byproducts can make excellent biosignatures since they provide direct evidence for life however in order to be a viable biosignature a byproduct must subsequently remain intact so that scientists may discover it a biosignature must be detectable with the current technology to be relevant in scientific investigation this seems to be an obvious statement however there are many scenarios in which life may be present on a planet yet remain undetectable because of humancaused limitations false positives every possible biosignature is associated with its own set of unique false positive mechanisms or nonbiological processes that can mimic the detectable feature of a biosignature an important example is using oxygen as a biosignature on earth the majority of life is centred around oxygen it is a byproduct of photosynthesis and is subsequently used by other life forms to breathe oxygen is also readily detectable in spectra with multiple bands across a relatively wide wavelength range therefore it makes a very good biosignature however finding oxygen alone in a planets atmosphere is not enough to confirm a biosignature because of the falsepositive mechanisms associated with it one possibility is that oxygen can build up abiotically via photolysis if there is a low inventory of noncondensable gasses or if it loses a lot of water finding and distinguishing a biosignature from its potential falsepositive mechanisms is 
one of the most complicated parts of testing for viability because it relies on human ingenuity to break an abioticbiological degeneracy if nature allows false negatives opposite to false positives false negative biosignatures arise in a scenario where life may be present on another planet but some processes on that planet make potential biosignatures undetectable this is an ongoing problem and area of research in preparation for future telescopes that will be capable of observing exoplanetary atmospheres human limitations there are many ways in which humans may limit the viability'
|
-| 17 | - 'ice began in 1950 with several expeditions using this drilling approach that year the epf drilled holes of 126 m and 151 m at camp vi and station centrale respectively with a rotary rig with no drilling fluid cores were retrieved from both holes a hole 30 m deep was drilled by a oneton plunger which produced a hole 08 m in diameter which allowed a man to be lowered into the hole to study the stratigraphy ractmadoux and reynauds thermal drilling on the mer de glace in 1949 was interrupted by crevasses moraines or air pockets so when the expedition returned to the glacier in 1950 they switched to mechanical drilling with a motordriven rotary drill using an auger as the drillbit and completed a 114 m hole before reaching the bed of the glacier at four separate locations the deepest of which was 284 m — a record depth at that time the augers were similar in form to blumcke and hesss auger from the early part of the century and ractmadoux and reynaud made several modifications to the design over the course of their expedition attempts to switch to different drillbits to penetrate moraine material they encountered were unsuccessful and a new hole was begun instead in these cases as with blumcke and hess an air gap that did not allow the water'
- 'a slightly greener tint than liquid water since absorption is cumulative the color effect intensifies with increasing thickness or if internal reflections cause the light to take a longer path through the iceother colors can appear in the presence of light absorbing impurities where the impurity is dictating the color rather than the ice itself for instance icebergs containing impurities eg sediments algae air bubbles can appear brown grey or greenbecause ice in natural environments is usually close to its melting temperature its hardness shows pronounced temperature variations at its melting point ice has a mohs hardness of 2 or less but the hardness increases to about 4 at a temperature of −44 °c −47 °f and to 6 at a temperature of −785 °c −1093 °f the vaporization point of solid carbon dioxide dry ice ice may be any one of the as of 2021 nineteen known solid crystalline phases of water or in an amorphous solid state at various densitiesmost liquids under increased pressure freeze at higher temperatures because the pressure helps to hold the molecules together however the strong hydrogen bonds in water make it different for some pressures higher than 1 atm 010 mpa water freezes at a temperature below 0 °c as shown in the phase diagram below the melting of ice under high pressures is thought to contribute to the movement of glaciersice water and water vapour can coexist at the triple point which is exactly 27316 k 001 °c at a pressure of 611657 pa the kelvin was defined as 127316 of the difference between this triple point and absolute zero though this definition changed in may 2019 unlike most other solids ice is difficult to superheat in an experiment ice at −3 °c was superheated to about 17 °c for about 250 picosecondssubjected to higher pressures and varying temperatures ice can form in nineteen separate known crystalline phases with care at least fifteen of these phases one of the known exceptions being ice x can be recovered at ambient pressure and low 
temperature in metastable form the types are differentiated by their crystalline structure proton ordering and density there are also two metastable phases of ice under pressure both fully hydrogendisordered these are iv and xii ice xii was discovered in 1996 in 2006 xiii and xiv were discovered ices xi xiii and xiv are hydrogenordered forms of ices ih v and xii respectively in 2009 ice xv was found at extremely high pressures and −143 °c at even higher pressures ice is predicted to become a metal this has been variously estimated to occur at 155 tpa or 562 tpaas well as'
- 'borehole has petrophysical measurements made of the wall rocks and these measurements are repeated along the length of the core then the two data sets correlated one will almost universally find that the depth of record for a particular piece of core differs between the two methods of measurement which set of measurements to believe then becomes a matter of policy for the client in an industrial setting or of great controversy in a context without an overriding authority recording that there are discrepancies for whatever reason retains the possibility of correcting an incorrect decision at a later date destroying the incorrect depth data makes it impossible to correct a mistake later any system for retaining and archiving data and core samples needs to be designed so that dissenting opinion like this can be retained if core samples from a campaign are competent it is common practice to slab them – cut the sample into two or more samples longitudinally – quite early in laboratory processing so that one set of samples can be archived early in the analysis sequence as a protection against errors in processing slabbing the core into a 23 and a 13 set is common it is also common for one set to be retained by the main customer while the second set goes to the government who often impose a condition for such donation as a condition of exploration exploitation licensing slabbing also has the benefit of preparing a flat smooth surface for examination and testing of profile permeability which is very much easier to work with than the typically rough curved surface of core samples when theyre fresh from the coring equipment photography of raw and slabbed core surfaces is routine often under both natural and ultraviolet light a unit of length occasionally used in the literature on seabed cores is cmbsf an abbreviation for centimeters below sea floor the technique of coring long predates attempts to drill into the earth ’ s mantle by the deep sea drilling program the value 
to oceanic and other geologic history of obtaining cores over a wide area of sea floors soon became apparent core sampling by many scientific and exploratory organizations expanded rapidly to date hundreds of thousands of core samples have been collected from floors of all the planets oceans and many of its inland waters access to many of these samples is facilitated by the index to marine lacustrine geological samples coring began as a method of sampling surroundings of ore deposits and oil exploration it soon expanded to oceans lakes ice mud soil and wood cores on very old trees give information about their growth rings without destroying the tree cores indicate variations of climate species and sedimentary composition during geologic history the dynamic phenomena of the earths surface are for the most part cyclical in a number of ways especially temperature'
|
-| 0 | - '##m and henry developed the analogy between electricity and acoustics the twentieth century saw a burgeoning of technological applications of the large body of scientific knowledge that was by then in place the first such application was sabines groundbreaking work in architectural acoustics and many others followed underwater acoustics was used for detecting submarines in the first world war sound recording and the telephone played important roles in a global transformation of society sound measurement and analysis reached new levels of accuracy and sophistication through the use of electronics and computing the ultrasonic frequency range enabled wholly new kinds of application in medicine and industry new kinds of transducers generators and receivers of acoustic energy were invented and put to use acoustics is defined by ansiasa s112013 as a science of sound including its production transmission and effects including biological and psychological effects b those qualities of a room that together determine its character with respect to auditory effects the study of acoustics revolves around the generation propagation and reception of mechanical waves and vibrations the steps shown in the above diagram can be found in any acoustical event or process there are many kinds of cause both natural and volitional there are many kinds of transduction process that convert energy from some other form into sonic energy producing a sound wave there is one fundamental equation that describes sound wave propagation the acoustic wave equation but the phenomena that emerge from it are varied and often complex the wave carries energy throughout the propagating medium eventually this energy is transduced again into other forms in ways that again may be natural andor volitionally contrived the final effect may be purely physical or it may reach far into the biological or volitional domains the five basic steps are found equally well whether we are talking about an earthquake 
a submarine using sonar to locate its foe or a band playing in a rock concert the central stage in the acoustical process is wave propagation this falls within the domain of physical acoustics in fluids sound propagates primarily as a pressure wave in solids mechanical waves can take many forms including longitudinal waves transverse waves and surface waves acoustics looks first at the pressure levels and frequencies in the sound wave and how the wave interacts with the environment this interaction can be described as either a diffraction interference or a reflection or a mix of the three if several media are present a refraction can also occur transduction processes are also of special importance to acoustics in fluids such as air and water sound waves propagate as disturbances in the ambient pressure level while this disturbance is usually small it is still noticeable to the human ear the smallest sound that a person can hear'
- '##mhzcdot textcmrightcdot ell textcmcdot textftextmhz attenuation is linearly dependent on the medium length and attenuation coefficient as well as – approximately – the frequency of the incident ultrasound beam for biological tissue while for simpler media such as air the relationship is quadratic attenuation coefficients vary widely for different media in biomedical ultrasound imaging however biological materials and water are the most commonly used media the attenuation coefficients of common biological materials at a frequency of 1 mhz are listed below there are two general ways of acoustic energy losses absorption and scattering ultrasound propagation through homogeneous media is associated only with absorption and can be characterized with absorption coefficient only propagation through heterogeneous media requires taking into account scattering shortwave radiation emitted from the sun have wavelengths in the visible spectrum of light that range from 360 nm violet to 750 nm red when the suns radiation reaches the sea surface the shortwave radiation is attenuated by the water and the intensity of light decreases exponentially with water depth the intensity of light at depth can be calculated using the beerlambert law in clear midocean waters visible light is absorbed most strongly at the longest wavelengths thus red orange and yellow wavelengths are totally absorbed at shallower depths while blue and violet wavelengths reach deeper in the water column because the blue and violet wavelengths are absorbed least compared to the other wavelengths openocean waters appear deep blue to the eye near the shore coastal water contains more phytoplankton than the very clear midocean waters chlorophylla pigments in the phytoplankton absorb light and the plants themselves scatter light making coastal waters less clear than midocean waters chlorophylla absorbs light most strongly in the shortest wavelengths blue and violet of the visible spectrum in coastal waters where 
high concentrations of phytoplankton occur the green wavelength reaches the deepest in the water column and the color of water appears bluegreen or green the energy with which an earthquake affects a location depends on the running distance the attenuation in the signal of ground motion intensity plays an important role in the assessment of possible strong groundshaking a seismic wave loses energy as it propagates through the earth seismic attenuation this phenomenon is tied into the dispersion of the seismic energy with the distance there are two types of dissipated energy geometric dispersion caused by distribution of the seismic energy to greater volumes dispersion as heat also called intrinsic attenuation or anelastic attenuationin porous fluid — saturated sedimentary'
- 'in acoustics acoustic attenuation is a measure of the energy loss of sound propagation through an acoustic transmission medium most media have viscosity and are therefore not ideal media when sound propagates in such media there is always thermal consumption of energy caused by viscosity this effect can be quantified through the stokess law of sound attenuation sound attenuation may also be a result of heat conductivity in the media as has been shown by g kirchhoff in 1868 the stokeskirchhoff attenuation formula takes into account both viscosity and thermal conductivity effects for heterogeneous media besides media viscosity acoustic scattering is another main reason for removal of acoustic energy acoustic attenuation in a lossy medium plays an important role in many scientific researches and engineering fields such as medical ultrasonography vibration and noise reduction many experimental and field measurements show that the acoustic attenuation coefficient of a wide range of viscoelastic materials such as soft tissue polymers soil and porous rock can be expressed as the following power law with respect to frequency p x δ x p x e − α ω δ x α ω α 0 ω η displaystyle pxdelta xpxealpha omega delta xalpha omega alpha 0omega eta where ω displaystyle omega is the angular frequency p the pressure δ x displaystyle delta x the wave propagation distance α ω displaystyle alpha omega the attenuation coefficient and α 0 displaystyle alpha 0 and the frequencydependent exponent η displaystyle eta are real nonnegative material parameters obtained by fitting experimental data the value of η displaystyle eta ranges from 0 to 4 acoustic attenuation in water is frequencysquared dependent namely η 2 displaystyle eta 2 acoustic attenuation in many metals and crystalline materials is frequencyindependent namely η 1 displaystyle eta 1 in contrast it is widely noted that the η displaystyle eta of viscoelastic materials is between 0 and 2 for example the exponent η displaystyle eta of 
sediment soil and rock is about 1 and the exponent η displaystyle eta of most soft tissues is between 1 and 2the classical dissipative acoustic wave propagation equations are confined to the frequencyindependent and frequencysquared dependent attenuation such as the damped wave equation and the approximate thermoviscous wave equation in recent decades increasing attention and efforts have been focused on developing accurate models to describe general power law frequencydependent acoustic attenuation most of these recent frequencydependent models are established via'
|
-| 15 | - 'native species including the allen cays rock iguana and audubons shearwater since 2008 island conservation and the us fish and wildlife service usfws have worked together to remove invasive vertebrates from desecheo national wildlife refuge in puerto rico primarily benefiting the higo chumbo cactus three endemic reptiles two endemic invertebrates and to recover globally significant seabird colonies of brown boobies red footed boobies and bridled terns future work will focus on important seabird populations key reptile groups including west indian rock iguanas and the restoration of mona island alto velo and offshore cays in the puerto rican bank and the bahamas key partnerships include the usfws puerto rico dner the bahamas national trust and the dominican republic ministry of environment and natural resources in this region island conservation works primarily in ecuador and chile in ecuador the rabida island restoration project was completed in 2010 a gecko phyllodactylus sp found during monitoring in late 2012 was only recorded from subfossils estimated at more than 5700 years old live rabida island endemic land snails bulimulus naesiotus rabidensis not seen since collected over 100 years ago were also collected in late 2012 this was followed in 2012 by the pinzon and plaza sur island restoration project primarily benefiting the pinzon giant tortoise opuntia galapageia galapagos land iguana as a result of the project pinzon giant tortoise hatched from eggs and were surviving in the wild for the first time in more than 150 years in 2019 the directorate of galapagos national park with island conservation used drones to eradicate invasive rats from north seymour island this was the first time such an approach has been used on vertebrates in the wild the expectation is that this innovation will pave the way for cheaper invasive species eradications in the future on small and midsized islands the current focus in ecuador is floreana island with 55 iucn 
threatened species present and 13 extirpated species that could be reintroduced after invasive mammals are eradicated partners include the leona m and harry b helmsley charitable trust ministry of environment galapagos national park directorate galapagos biosecurity agency the ministry of agriculture the floreana parish council and the galapagos government council in 2009 chile island conservation initiated formal collaborations with conaf the countrys protected areas agency to further restoration of islands under their administration in january 2014 the choros island restoration project was completed benefiting the humboldt penguin peruvian diving petrel and the local ecotourism'
- 'ligase or chloroform extraction of dna may be necessary for electroporation alternatively only use a tenth of the ligation mixture to reduce the amount of contaminants normal preparation of competent cells can yield transformation efficiency ranging from 106 to 108 cfuμg dna protocols for chemical method however exist for making super competent cells that may yield a transformation efficiency of over 1 x 109damage to dna – exposure of dna to uv radiation in standard preparative agarose gel electrophoresis procedure for as little as 45 seconds can damage the dna and this can significantly reduce the transformation efficiency adding cytidine or guanosine to the electrophoresis buffer at 1 mm concentration however may protect the dna from damage a higherwavelength uv radiation 365 nm which cause less damage to dna should be used if it is necessary work for work on the dna on a uv transilluminator for an extended period of time this longer wavelength uv produces weaker fluorescence with the ethidium bromide intercalated into the dna therefore if it is necessary to capture images of the dna bands a shorter wavelength 302 or 312 nm uv radiations may be used such exposure however should be limited to a very short time if the dna is to be recovered later for ligation and transformation the method used for introducing the dna have a significant impact on the transformation efficiency electroporation tends to be more efficient than chemical methods and can be applied to a wide range of species and to strains that were previously resistant and recalcitrant to transformation techniqueselectroporation has been found to have an average yield typically between 104 108 cfuug however a transformation efficiencies as high as 055 x 1010 colony forming units cfu per microgram of dna for e coli for samples that are hard to handle like cdna libraries gdna and plasmids larger than 30 kb it is suggested to use electrocompetent cells that have transformation efficiencies of over 1 x 
1010 cfuµg this will ensure a high success rate in introducing the dna and forming a large number of colonies it is important to adjust and optimize the electroporation buffer increasing the concentration of the electroporation buffer can result in increased transformation efficiencies and the shape strength number and number of pulses these electrical parameters play a key role in transformation efficiency chemical transformation or heat shock can be performed in a simple laboratory setup typically yielding transformation efficiencies that are adequate for cloning and subcloning applications approximately 106 cfuµ'
- 'at least one gene that affects isolation such that substituting one chromosome from a line of low isolation with another of high isolation reduces the hybridization frequency in addition interactions between chromosomes are detected so that certain combinations of the chromosomes have a multiplying effect cross incompatibility or incongruence in plants is also determined by major genes that are not associated at the selfincompatibility s locus reproductive isolation between species appears in certain cases a long time after fertilization and the formation of the zygote as happens – for example – in the twin species drosophila pavani and d gaucha the hybrids between both species are not sterile in the sense that they produce viable gametes ovules and spermatozoa however they cannot produce offspring as the sperm of the hybrid male do not survive in the semen receptors of the females be they hybrids or from the parent lines in the same way the sperm of the males of the two parent species do not survive in the reproductive tract of the hybrid female this type of postcopulatory isolation appears as the most efficient system for maintaining reproductive isolation in many speciesthe development of a zygote into an adult is a complex and delicate process of interactions between genes and the environment that must be carried out precisely and if there is any alteration in the usual process caused by the absence of a necessary gene or the presence of a different one it can arrest the normal development causing the nonviability of the hybrid or its sterility it should be borne in mind that half of the chromosomes and genes of a hybrid are from one species and the other half come from the other if the two species are genetically different there is little possibility that the genes from both will act harmoniously in the hybrid from this perspective only a few genes would be required in order to bring about post copulatory isolation as opposed to the situation described 
previously for precopulatory isolationin many species where precopulatory reproductive isolation does not exist hybrids are produced but they are of only one sex this is the case for the hybridization between females of drosophila simulans and drosophila melanogaster males the hybridized females die early in their development so that only males are seen among the offspring however populations of d simulans have been recorded with genes that permit the development of adult hybrid females that is the viability of the females is rescued it is assumed that the normal activity of these speciation genes is to inhibit the expression of the genes that allow the growth of the hybrid there'
|
-| 29 | - '##gat rises and pressure differences force the saline water from the north sea through the narrow danish straits into the baltic sea throughout the entire inflow process the baltic seas water level rises on average by about 59 cm with 38 cm occurring during the preparatory period and 21 cm during the actual saline inflow the mbi itself typically lasts for 7 – 8 days the formation of an mbi requires specific relatively rare weather conditions between 1897 and 1976 approximately 90 mbis were observed averaging about one per year occasionally there are even multiyear periods without any mbis occurring large inflows that effectively renew the deep basin waters occur on average only once every ten yearsvery large mbis have occurred in 1897 330 km3 1906 300 km3 1922 510 km3 1951 510 km3 199394 300 km3 and 20142015 300 km3 large mbis have on the other hand been observed in 1898 twice 1900 1902 twice 1914 1921 1925 1926 1960 1965 1969 1973 1976 and 2003 the mbi that started in 2014 was by far the third largest mbi in the baltic sea only the inflows of 1951 and 19211922 were larger than itpreviously it was believed that there had been a genuine decline in the number of mbis after 1980 but recent studies have changed our understanding of the occurrence of saline inflows especially after the lightship gedser rev discontinued regular salinity measurements in the belt sea in 1976 the picture of the inflows based on salinity measurements remained incomplete at the leibniz institute for baltic sea research warnemunde germany an updated time series has been compiled filling in the gaps in observations and covering major baltic inflows and various smaller inflow events of saline water from around 1890 to the present day the updated time series is based on direct discharge data from the darss sill and no longer shows a clear change in the frequency or intensity of saline inflows instead there is cyclical variation in the intensity of mbis at approximately 30year intervals 
major baltic inflows mbis are the only natural phenomenon capable of oxygenating the deep saline waters of the baltic sea making their occurrence crucial for the ecological state of the sea the salinity and oxygen from mbis significantly impact the baltic seas ecosystems including the reproductive conditions of marine fish species such as cod the distribution of freshwater and marine species and the overall biodiversity of the baltic seathe heavy saline water brought in by mbis slowly advances along the seabed of the baltic proper at a pace of a few kilometers per day displacing the deep water from one basin to another'
- 'is measured in watts and is given by the solar constant times the crosssectional area of the earth corresponded to the radiation because the surface area of a sphere is four times the crosssectional area of a sphere ie the area of a circle the globally and yearly averaged toa flux is one quarter of the solar constant and so is approximately 340 watts per square meter wm2 since the absorption varies with location as well as with diurnal seasonal and annual variations the numbers quoted are multiyear averages obtained from multiple satellite measurementsof the 340 wm2 of solar radiation received by the earth an average of 77 wm2 is reflected back to space by clouds and the atmosphere and 23 wm2 is reflected by the surface albedo leaving 240 wm2 of solar energy input to the earths energy budget this amount is called the absorbed solar radiation asr it implies a value of about 03 for the mean net albedo of earth also called its bond albedo a a s r 1 − a × 340 w m − 2 [UNK] 240 w m − 2 displaystyle asr1atimes 340mathrm w mathrm m 2simeq 240mathrm w mathrm m 2 thermal energy leaves the planet in the form of outgoing longwave radiation olr longwave radiation is electromagnetic thermal radiation emitted by earths surface and atmosphere longwave radiation is in the infrared band but the terms are not synonymous as infrared radiation can be either shortwave or longwave sunlight contains significant amounts of shortwave infrared radiation a threshold wavelength of 4 microns is sometimes used to distinguish longwave and shortwave radiation generally absorbed solar energy is converted to different forms of heat energy some of the solar energy absorbed by the surface is converted to thermal radiation at wavelengths in the atmospheric window this radiation is able to pass through the atmosphere unimpeded and directly escape to space contributing to olr the remainder of absorbed solar energy is transported upwards through the atmosphere through a variety of heat transfer 
mechanisms until the atmosphere emits that energy as thermal energy which is able to escape to space again contributing to olr for example heat is transported into the atmosphere via evapotranspiration and latent heat fluxes or conductionconvection processes as well as via radiative heat transport ultimately all outgoing energy is radiated into space in the form of longwave radiation the transport of longwave radiation from earths surface through its multilayered atmosphere is governed by radiative transfer equations such as schwarzschilds equation for radiative transfer or more complex equations if scattering is present and'
- 'ions already in the ocean combine with some of the hydrogen ions to make further bicarbonate thus the oceans concentration of carbonate ions is reduced removing an essential building block for marine organisms to build shells or calcify ca2 co2−3 ⇌ caco3the increase in concentrations of dissolved carbon dioxide and bicarbonate and reduction in carbonate are shown in the bjerrum plot the saturation state known as ω of seawater for a mineral is a measure of the thermodynamic potential for the mineral to form or to dissolve and for calcium carbonate is described by the following equation ω ca 2 co 3 2 − k s p displaystyle omega frac leftce ca2rightleftce co32rightksp here ω is the product of the concentrations or activities of the reacting ions that form the mineral ca2 and co32− divided by the apparent solubility product at equilibrium ksp that is when the rates of precipitation and dissolution are equal in seawater dissolution boundary is formed as a result of temperature pressure and depth and is known as the saturation horizon above this saturation horizon ω has a value greater than 1 and caco3 does not readily dissolve most calcifying organisms live in such waters below this depth ω has a value less than 1 and caco3 will dissolve the carbonate compensation depth is the ocean depth at which carbonate dissolution balances the supply of carbonate to sea floor therefore sediment below this depth will be void of calcium carbonate increasing co2 levels and the resulting lower ph of seawater decreases the concentration of co32− and the saturation state of caco3 therefore increasing caco3 dissolution calcium carbonate most commonly occurs in two common polymorphs crystalline forms aragonite and calcite aragonite is much more soluble than calcite so the aragonite saturation horizon and aragonite compensation depth is always nearer to the surface than the calcite saturation horizon this also means that those organisms that produce aragonite may be more vulnerable to 
changes in ocean acidity than those that produce calcite ocean acidification and the resulting decrease in carbonate saturation states raise the saturation horizons of both forms closer to the surface this decrease in saturation state is one of the main factors leading to decreased calcification in marine organisms because the inorganic precipitation of caco3 is directly proportional to its saturation state and calcifying organisms exhibit stress in waters with lower saturation states already now large quantities of water undersaturated in aragonite are upwelling close to the pacific continental shelf area of north america from vancouver to northern'
|
-| 28 | - '– 20 pdf acta univ apulensis pp 21 – 38 pdf acta univ apulensis matveev andrey o 2017 farey sequences duality and maps between subsequences berlin de de gruyter isbn 9783110546620 errata code'
- 'a000330 1 2 2 2 [UNK] n 2 1 3 b 0 n 3 3 b 1 n 2 3 b 2 n 1 1 3 n 3 3 2 n 2 1 2 n displaystyle 1222cdots n2frac 13b0n33b1n23b2n1tfrac 13leftn3tfrac 32n2tfrac 12nright some authors use the alternate convention for bernoulli numbers and state bernoullis formula in this way s m n 1 m 1 [UNK] k 0 m − 1 k m 1 k b k − n m 1 − k displaystyle smnfrac 1m1sum k0m1kbinom m1kbknm1k bernoullis formula is sometimes called faulhabers formula after johann faulhaber who also found remarkable ways to calculate sums of powers faulhabers formula was generalized by v guo and j zeng to a qanalog the bernoulli numbers appear in the taylor series expansion of many trigonometric functions and hyperbolic functions the bernoulli numbers appear in the following laurent seriesdigamma function ψ z ln z − [UNK] k 1 ∞ b k k z k displaystyle psi zln zsum k1infty frac bkkzk the kervaire – milnor formula for the order of the cyclic group of diffeomorphism classes of exotic 4n − 1spheres which bound parallelizable manifolds involves bernoulli numbers let esn be the number of such exotic spheres for n ≥ 2 then es n 2 2 n − 2 − 2 4 n − 3 numerator b 4 n 4 n displaystyle textit esn22n224n3operatorname numerator leftfrac b4n4nright the hirzebruch signature theorem for the l genus of a smooth oriented closed manifold of dimension 4n also involves bernoulli numbers the connection of the bernoulli number to various kinds of combinatorial numbers is based on the classical theory of finite differences and on the combinatorial interpretation of the bernoulli numbers as an instance of a fundamental combinatorial principle the inclusion – exclusion principle the definition to proceed with was developed by julius worpitzky in 1883 besides elementary arithmetic only the factorial function n and the power function km is employed the signless worpitzky numbers are defined as w n k [UNK] v 0 k − 1 v k v 1 n k v k − v displays'
- 'enough to know they exist and have certain properties using the pigeonhole principle thue and later siegel managed to prove the existence of auxiliary functions which for example took the value zero at many different points or took high order zeros at a smaller collection of points moreover they proved it was possible to construct such functions without making the functions too large their auxiliary functions were not explicit functions then but by knowing that a certain function with certain properties existed they used its properties to simplify the transcendence proofs of the nineteenth century and give several new resultsthis method was picked up on and used by several other mathematicians including alexander gelfond and theodor schneider who used it independently to prove the gelfond – schneider theorem alan baker also used the method in the 1960s for his work on linear forms in logarithms and ultimately bakers theorem another example of the use of this method from the 1960s is outlined below let β equal the cube root of ba in the equation ax3 bx3 c and assume m is an integer that satisfies m 1 2n3 ≥ m ≥ 3 where n is a positive integer then there exists f x y p x y ∗ q x displaystyle fxypxyqx such that [UNK] i 0 m n u i x i p x displaystyle sum i0mnuixipx [UNK] i 0 m n v i x i q x displaystyle sum i0mnvixiqx the auxiliary polynomial theorem states max 0 ≤ i ≤ m n u i v i ≤ 2 b 9 m n displaystyle max 0leq ileq mnuivileq 2b9mn in the 1960s serge lang proved a result using this nonexplicit form of auxiliary functions the theorem implies both the hermite – lindemann and gelfond – schneider theorems the theorem deals with a number field k and meromorphic functions f1fn of order at most ρ at least two of which are algebraically independent and such that if we differentiate any of these functions then the result is a polynomial in all of the functions under these hypotheses the theorem states that if there are m distinct complex numbers ω1ωm such that fi ωj is in 
k for all combinations of i and j then m is bounded by m ≤ 20 ρ k q displaystyle mleq 20rho kmathbb q to prove the result lang took two algebraically independent functions from f1fn say f and g and then created an auxiliary function which was simply a polynomial f in f and g this auxiliary function could'
|
-| 16 | - 'physiographic regions are a means of defining earths landforms into distinct mutually exclusive areas independent of political boundaries it is based upon the classic threetiered approach by nevin m fenneman in 1916 that separates landforms into physiographic divisions physiographic provinces and physiographic sectionsthe classification mechanism has become a popular geographical tool in the united states indicated by the publication of a usgs shapefile that maps the regions of the original work and the national park servicess use of the terminology to describe the regions in which its parks are locatedoriginally used in north america the model became the basis for similar classifications of other continents during the early 1900s the study of regionalscale geomorphology was termed physiography physiography later was considered to be a portmanteau of physical and geography and therefore synonymous with physical geography and the concept became embroiled in controversy surrounding the appropriate concerns of that discipline some geomorphologists held to a geological basis for physiography and emphasized a concept of physiographic regions while a conflicting trend among geographers was to equate physiography with pure morphology separated from its geological heritage in the period following world war ii the emergence of process climatic and quantitative studies led to a preference by many earth scientists for the term geomorphology in order to suggest an analytical approach to landscapes rather than a descriptive one in current usage physiography still lends itself to confusion as to which meaning is meant the more specialized geomorphological definition or the more encompassing physical geography definition for the purposes of physiographic mapping landforms are classified according to both their geologic structures and histories distinctions based on geologic age also correspond to physiographic distinctions where the forms are so recent as to be in 
their first erosion cycle as is generally the case with sheets of glacial drift generally forms which result from similar histories are characterized by certain similar features and differences in history result in corresponding differences of form usually resulting in distinctive features which are obvious to the casual observer but this is not always the case a maturely dissected plateau may grade without a break from rugged mountains on the one hand to mildly rolling farm lands on the other so also forms which are not classified together may be superficially similar for example a young coastal plain and a peneplain in a large number of cases the boundary lines are also geologic lines due to differences in the nature or structure of the underlying rocks the history of physiography itself is at best a complicated effort much of'
- 'tightly packed array of narrow individual beams provides very high angular resolution and accuracy in general a wide swath which is depth dependent allows a boat to map more seafloor in less time than a singlebeam echosounder by making fewer passes the beams update many times per second typically 01 – 50 hz depending on water depth allowing faster boat speed while maintaining 100 coverage of the seafloor attitude sensors allow for the correction of the boats roll and pitch on the ocean surface and a gyrocompass provides accurate heading information to correct for vessel yaw most modern mbes systems use an integrated motionsensor and position system that measures yaw as well as the other dynamics and position a boatmounted global positioning system gps or other global navigation satellite system gnss positions the soundings with respect to the surface of the earth sound speed profiles speed of sound in water as a function of depth of the water column correct for refraction or raybending of the sound waves owing to nonuniform water column characteristics such as temperature conductivity and pressure a computer system processes all the data correcting for all of the above factors as well as for the angle of each individual beam the resulting sounding measurements are then processed either manually semiautomatically or automatically in limited circumstances to produce a map of the area as of 2010 a number of different outputs are generated including a subset of the original measurements that satisfy some conditions eg most representative likely soundings shallowest in a region etc or integrated digital terrain models dtm eg a regular or irregular grid of points connected into a surface historically selection of measurements was more common in hydrographic applications while dtm construction was used for engineering surveys geology flow modeling etc since c 2003 – 2005 dtms have become more accepted in hydrographic practice satellites are also used to measure 
bathymetry satellite radar maps deepsea topography by detecting the subtle variations in sea level caused by the gravitational pull of undersea mountains ridges and other masses on average sea level is higher over mountains and ridges than over abyssal plains and trenchesin the united states the united states army corps of engineers performs or commissions most surveys of navigable inland waterways while the national oceanic and atmospheric administration noaa performs the same role for ocean waterways coastal bathymetry data is available from noaas national geophysical data center ngdc which is now merged into national centers for environmental information bathymetric data is usually referenced to tidal vertical datums for deepwater bathymetry this is typically mean sea level msl but most data used for nautical charting is referenced to mean lower low water mllw in'
- 'the term stream power law describes a semiempirical family of equations used to predict the rate of erosion of a river into its bed these combine equations describing conservation of water mass and momentum in streams with relations for channel hydraulic geometry widthdischarge scaling and basin hydrology dischargearea scaling and an assumed dependency of erosion rate on either unit stream power or shear stress on the bed to produce a simplified description of erosion rate as a function of power laws of upstream drainage area a and channel slope s e k a m s n displaystyle ekamsn where e is erosion rate and k m and n are positive the value of these parameters depends on the assumptions made but all forms of the law can be expressed in this basic form the parameters k m and n are not necessarily constant but rather may vary as functions of the assumed scaling laws erosion process bedrock erodibility climate sediment flux andor erosion threshold however observations of the hydraulic scaling of real rivers believed to be in erosional steady state indicate that the ratio mn should be around 05 which provides a basic test of the applicability of each formulationalthough consisting of the product of two power laws the term stream power law refers to the derivation of the early forms of the equation from assumptions of erosion dependency on stream power rather than to the presence of power laws in the equation this relation is not a true scientific law but rather a heuristic description of erosion processes based on previously observed scaling relations which may or may not be applicable in any given natural setting the stream power law is an example of a one dimensional advection equation more specifically a hyperbolic partial differential equation typically the equation is used to simulate propagating incision pulses creating discontinuities or knickpoints in the river profile commonly used first order finite difference methods to solve the stream power law may result 
in significant numerical diffusion which can be prevented by the use of analytical solutions or higher order numerical schemes'
|
-| 40 | - '##regular open set is the set u 01 ∪ 12 in r with its normal topology since 1 is in the interior of the closure of u but not in u the regular open subsets of a space form a complete boolean algebra relatively compact a subset y of a space x is relatively compact in x if the closure of y in x is compact residual if x is a space and a is a subset of x then a is residual in x if the complement of a is meagre in x also called comeagre or comeager resolvable a topological space is called resolvable if it is expressible as the union of two disjoint dense subsets rimcompact a space is rimcompact if it has a base of open sets whose boundaries are compact sspace an sspace is a hereditarily separable space which is not hereditarily lindelofscattered a space x is scattered if every nonempty subset a of x contains a point isolated in ascott the scott topology on a poset is that in which the open sets are those upper sets inaccessible by directed joinssecond category see meagresecondcountable a space is secondcountable or perfectly separable if it has a countable base for its topology every secondcountable space is firstcountable separable and lindelofsemilocally simply connected a space x is semilocally simply connected if for every point x in x there is a neighbourhood u of x such that every loop at x in u is homotopic in x to the constant loop x every simply connected space and every locally simply connected space is semilocally simply connected compare with locally simply connected here the homotopy is allowed to live in x whereas in the definition of locally simply connected the homotopy must live in usemiopen a subset a of a topological space x is called semiopen if a ⊆ cl x int x a displaystyle asubseteq operatorname cl xleftoperatorname int xaright semipreopen a subset a of a topological space x is called semipreopen if a ⊆ cl x int x cl x a displaystyle asubseteq operatorname cl xleftoperatorname int xleftoperatorname cl xarightright semiregular a space is 
semiregular if the regular open sets form a baseseparable a space is separable if it has a countable dense subsetseparated two sets a and'
- 'not necessarily equivalent the most useful notion — and the standard definition of the unqualified term compactness — is phrased in terms of the existence of finite families of open sets that cover the space in the sense that each point of the space lies in some set contained in the family this more subtle notion introduced by pavel alexandrov and pavel urysohn in 1929 exhibits compact spaces as generalizations of finite sets in spaces that are compact in this sense it is often possible to patch together information that holds locally – that is in a neighborhood of each point – into corresponding statements that hold throughout the space and many theorems are of this character the term compact set is sometimes used as a synonym for compact space but also often refers to a compact subspace of a topological space in the 19th century several disparate mathematical properties were understood that would later be seen as consequences of compactness on the one hand bernard bolzano 1817 had been aware that any bounded sequence of points in the line or plane for instance has a subsequence that must eventually get arbitrarily close to some other point called a limit point bolzanos proof relied on the method of bisection the sequence was placed into an interval that was then divided into two equal parts and a part containing infinitely many terms of the sequence was selected the process could then be repeated by dividing the resulting smaller interval into smaller and smaller parts – until it closes down on the desired limit point the full significance of bolzanos theorem and its method of proof would not emerge until almost 50 years later when it was rediscovered by karl weierstrassin the 1880s it became clear that results similar to the bolzano – weierstrass theorem could be formulated for spaces of functions rather than just numbers or geometrical points the idea of regarding functions as themselves points of a generalized space dates back to the investigations of 
giulio ascoli and cesare arzela the culmination of their investigations the arzela – ascoli theorem was a generalization of the bolzano – weierstrass theorem to families of continuous functions the precise conclusion of which was that it was possible to extract a uniformly convergent sequence of functions from a suitable family of functions the uniform limit of this sequence then played precisely the same role as bolzanos limit point towards the beginning of the twentieth century results similar to that of arzela and ascoli began to accumulate in the area of integral equations as investigated by david hilbert and erhard schmidt for a certain class of greens functions coming from solutions'
- 'also holds for dmodules if x s x′ and s′ are smooth varieties but f and g need not be flat or proper etc there is a quasiisomorphism $g^{\dagger}\int_{f}\mathcal{F}\to \int_{f'}g'^{\dagger}\mathcal{F}$ where $-^{\dagger}$ and $\int$ denote the inverse and direct image functors for dmodules for etale torsion sheaves $\mathcal{F}$ there are two base change results referred to as proper and smooth base change respectively base change holds if $f\colon X\to S$ is proper it also holds if g is smooth provided that f is quasicompact and provided that the torsion of $\mathcal{F}$ is prime to the characteristic of the residue fields of x closely related to proper base change is the following fact the two theorems are usually proved simultaneously let x be a variety over a separably closed field and $\mathcal{F}$ a constructible sheaf on $X_{\mathrm{et}}$ then $H^{r}(X,\mathcal{F})$ are finite in each of the following cases x is complete or $\mathcal{F}$ has no ptorsion where p is the characteristic of k under additional assumptions deninger 1988 extended the proper base change theorem to nontorsion etale sheaves in close analogy to the topological situation mentioned above the base change map for an open immersion f $g^{*}f_{*}\mathcal{F}\to f'_{*}g'^{*}\mathcal{F}$ is not usually an isomorphism instead the extension by zero functor $f_{!}$ satisfies an isomorphism $g^{*}f_{!}\mathcal{F}\to f'_{!}g'^{*}\mathcal{F}$ this fact and the proper base change suggest to define the direct image functor with compact support for a map f by $Rf_{!}:=Rp_{*}j_{!}$ where $f=p\circ j$ is a compactification of f ie a factorization into an open immersion followed by a proper map the proper base change theorem is needed to show that this is welldefined ie independent up to isomorphism of the choice of the compactification moreover again in analogy to the case of sheaves on a topological space a base change formula for $g^{*}$ vs $Rf_{!}$ does hold for nonproper maps f for the'
|
-| 30 | - 'of mtor inhibitors for the treatment of cancer was not successful at that time since then rapamycin has also shown to be effective for preventing coronary artery restenosis and for the treatment of neurodegenerative diseases the development of rapamycin as an anticancer agent began again in the 1990s with the discovery of temsirolimus cci779 this novel soluble rapamycin derivative had a favorable toxicological profile in animals more rapamycin derivatives with improved pharmacokinetics and reduced immunosuppressive effects have since then been developed for the treatment of cancer these rapalogs include temsirolimus cci779 everolimus rad001 and ridaforolimus ap23573 which are being evaluated in cancer clinical trials rapamycin analogs have similar therapeutic effects as rapamycin however they have improved hydrophilicity and can be used for oral and intravenous administration in 2012 national cancer institute listed more than 200 clinical trials testing the anticancer activity of rapalogs both as monotherapy or as a part of combination therapy for many cancer types rapalogs which are the first generation mtor inhibitors have proven effective in a range of preclinical models however the success in clinical trials is limited to only a few rare cancers animal and clinical studies show that rapalogs are primarily cytostatic and therefore effective as disease stabilizers rather than for regression the response rate in solid tumors where rapalogs have been used as a singleagent therapy have been modest due to partial mtor inhibition as mentioned before rapalogs are not sufficient for achieving a broad and robust anticancer effect at least when used as monotherapy another reason for the limited success is that there is a feedback loop between mtorc1 and akt in certain tumor cells it seems that mtorc1 inhibition by rapalogs fails to repress a negative feedback loop that results in phosphorylation and activation of akt these limitations have led to the development 
of the second generation of mtor inhibitors rapamycin and rapalogs rapamycin derivatives are small molecule inhibitors which have been evaluated as anticancer agents the rapalogs have more favorable pharmacokinetic profile compared to rapamycin the parent drug despite the same binding sites for mtor and fkbp12 sirolimus the bacterial natural product rapamycin or sirolimus a cytostatic agent has been used in combination therapy with corticosteroids'
- 'is appropriate typically either a baseline survey or a design survey of functional areas both types of surveys are explained in detail under astm standard e 235604 typically a baseline survey is performed by an epa or state licensed asbestos inspector the baseline survey provides the buyer with sufficient information on presumed asbestos at the facility often which leads to reduction in the assessed value of the building due primarily to forthcoming abatement costs note epa neshap national emissions standards for hazardous air pollutants and osha occupational safety and health administration regulations must be consulted in addition to astm standard e 235604 to ensure all statutory requirements are satisfied ex notification requirements for renovationdemolition asbestos is not a material covered under cercla comprehensive environmental response compensation and liability act innocent purchaser defense in some instances the us epa includes asbestos contaminated facilities on the npl superfund buyers should be careful not to purchase facilities even with an astm e 152705 phase i esa completed without a full understanding of all the hazards in a building or at a property without evaluating nonscope astm e 152705 materials such as asbestos lead pcbs mercury radon et al a standard astm e 152705 does not include asbestos surveys as standard practice in 1988 the united states environmental protection agency usepa issued regulations requiring certain us companies to report the asbestos used in their products a senate subcommittee of the health education labor and pensions committee heard testimony on july 31 2001 regarding the health effects of asbestos members of the public doctors and scientists called for the united states to join other countries in a ban on the product several legislative remedies have been considered by the us congress but each time rejected for a variety of reasons in 2005 congress considered but did not pass legislation entitled the fairness in 
asbestos injury resolution act of 2005 the act would have established a 140 billion trust fund in lieu of litigation but as it would have proactively taken funds held in reserve by bankruptcy trusts manufacturers and insurance companies it was not widely supported either by victims or corporations on april 26 2005 philip j landrigan professor and chair of the department of community and preventive medicine at mount sinai medical center in new york city testified before the us senate committee on the judiciary against this proposed legislation he testified that many of the bills provisions were unsupported by medicine and would unfairly exclude a large number of people who had become ill or died from asbestos the approach to the diagnosis of disease caused by asbestos that is set forth in this bill is not consistent with the diagnostic criteria established by the american thoracic society if the bill is to deliver on'
- 'cancer slope factors csf are used to estimate the risk of cancer associated with exposure to a carcinogenic or potentially carcinogenic substance a slope factor is an upper bound approximating a 95 confidence limit on the increased cancer risk from a lifetime exposure to an agent by ingestion or inhalation this estimate usually expressed in units of proportion of a population affected per mg of substance / kg body weight / day is generally reserved for use in the lowdose region of the doseresponse relationship that is for exposures corresponding to risks less than 1 in 100 slope factors are also referred to as cancer potency factors pf for carcinogens it is commonly assumed that a small number of molecular events may evoke changes in a single cell that can lead to uncontrolled cellular proliferation and eventually to a clinical diagnosis of cancer this toxicity of carcinogens is referred to as being nonthreshold because there is believed to be essentially no level of exposure that does not pose some probability of producing a carcinogenic response therefore there is no dose that can be considered to be riskfree however some nongenotoxic carcinogens may exhibit a threshold whereby doses lower than the threshold do not invoke a carcinogenic response when evaluating cancer risks of genotoxic carcinogens theoretically an effect threshold cannot be estimated for chemicals that are carcinogens a twopart evaluation to quantify risk is often employed in which the substance first is assigned a weightofevidence classification and then a slope factor is calculated when the chemical is a known or probable human carcinogen a toxicity value that defines quantitatively the relationship between dose and response ie the slope factor is calculated because risk at low exposure levels is difficult to measure directly either by animal experiments or by epidemiologic studies the development of a slope factor generally entails applying a model to the available data set and using the model 
to extrapolate from the relatively high doses administered to experimental animals or the exposures noted in epidemiologic studies to the lower exposure levels expected for human contact in the environment highquality human data eg high quality epidemiological studies on carcinogens are preferable to animal data when human data are limited the most sensitive species is given the greatest emphasis occasionally in situations where no single study is judged most appropriate yet several studies collectively support the estimate the geometric mean of estimates from all studies may be adopted as the slope this practice ensures the inclusion of all relevant data slope factors are typically calculated for potential carcinogens in classes a b1'
|
-| 10 | - 'standards for reporting enzymology data strenda is an initiative as part of the minimum information standards which specifically focuses on the development of guidelines for reporting describing metadata enzymology experiments the initiative is supported by the beilstein institute for the advancement of chemical sciences strenda establishes both publication standards for enzyme activity data and strenda db an electronic validation and storage system for enzyme activity data launched in 2004 the foundation of strenda is the result of a detailed analysis of the quality of enzymology data in written and electronic publications the strenda project is driven by 15 scientists from all over the world forming the strenda commission and supporting the work with expertises in biochemistry enzyme nomenclature bioinformatics systems biology modelling mechanistic enzymology and theoretical biology the strenda guidelines propose those minimum information that is needed to comprehensively report kinetic and equilibrium data from investigations of enzyme activities including corresponding experimental conditions this minimum information is suggested to be addressed in a scientific publication when enzymology research data is reported to ensure that data sets are comprehensively described this allows scientists not only to review interpret and corroborate the data but also to reuse the data for modelling and simulation of biocatalytic pathways in addition the guidelines support researchers making their experimental data reproducible and transparent as of march 2020 more than 55 international biochemistry journals included the strenda guidelines in their authors instructions as recommendations when reporting enzymology data the strenda project is registered with fairsharingorg and the guidelines are part of the fairdom community standards for systems biology strenda db strenda db is a webbased storage and search platform that has incorporated the guidelines and 
automatically checks the submitted data on compliance with the strenda guidelines thus ensuring that the manuscript data sets are complete and valid a valid data set is awarded a strenda registry number srn and a fact sheet pdf is created containing all submitted data each dataset is registered at datacite and assigned a doi to refer and track the data after the publication of the manuscript in a peerreviewed journal the data in strenda db are made open accessible strenda db is a repository recommended by re3data and opendoar it is harvested by openaire the database service is recommended in the authors instructions of more than 10 biochemistry journals including nature the journal of biological chemistry elife and plos it has been referred as a standard tool for the validation and storage of enzyme kinetics data in multifold publications a recent study examining eleven publications including supporting information from two leading journals'
- 'an endergonic reaction is an anabolic chemical reaction that consumes energy it is the opposite of an exergonic reaction it has a positive δg because it takes more energy to break the bonds of the reactant than the energy of the products offer ie the products have weaker bonds than the reactants thus endergonic reactions are thermodynamically unfavorable additionally endergonic reactions are usually anabolic the free energy δg gained or lost in a reaction can be calculated as follows δg = δh − tδs where ∆g gibbs free energy ∆h enthalpy t temperature in kelvins and ∆s entropy glycolysis is the process of breaking down glucose into pyruvate producing two molecules of atp per 1 molecule of glucose in the process when a cell has a higher concentration of atp than adp ie has a high energy charge the cell cant undergo glycolysis releasing energy from available glucose to perform biological work pyruvate is one product of glycolysis and can be shuttled into other metabolic pathways gluconeogenesis etc as needed by the cell additionally glycolysis produces reducing equivalents in the form of nadh nicotinamide adenine dinucleotide which will ultimately be used to donate electrons to the electron transport chain gluconeogenesis is the opposite of glycolysis when the cells energy charge is low the concentration of adp is higher than that of atp the cell must synthesize glucose from carbon containing biomolecules such as proteins amino acids fats pyruvate etc for example proteins can be broken down into amino acids and these simpler carbon skeletons are used to build synthesize glucose the citric acid cycle is a process of cellular respiration in which acetyl coenzyme a synthesized from pyruvate dehydrogenase is first reacted with oxaloacetate to yield citrate the remaining eight reactions produce other carboncontaining metabolites these metabolites are successively oxidized and the free energy of oxidation is conserved in the form of the reduced coenzymes fadh2 and nadh these 
reduced electron carriers can then be reoxidized when they transfer electrons to the electron transport chain ketosis is a metabolic process whereby ketone bodies are used by the cell for energy instead of using glucose cells often turn to ketosis as a source of energy when glucose levels are low eg during starvation oxidative phosphorylation and the electron transport'
- 'the thanatotranscriptome denotes all rna transcripts produced from the portions of the genome still active or awakened in the internal organs of a body following its death it is relevant to the study of the biochemistry microbiology and biophysics of thanatology in particular within forensic science some genes may continue to be expressed in cells for up to 48 hours after death producing new mrna certain genes that are generally inhibited since the end of fetal development may be expressed again at this time clues to the existence of a postmortem transcriptome existed at least since the beginning of the 21st century but the word thanatotranscriptome from thanatos greek for death seems to have been first used in the scientific literature by javan et al in 2015 following the introduction of the concept of the human thanatomicrobiome in 2014 at the 66th annual meeting of the american academy of forensic sciences in seattle washington in 2016 researchers at the university of washington confirmed that up to 2 days 48 hours after the death of mice and zebrafish many genes still functioned changes in the quantities of mrna in the bodies of the dead animals proved that hundreds of genes with very different functions awoke just after death the researchers detected 548 genes that awoke after death in zebrafish and 515 in laboratory mice among these were genes involved in development of the organism including genes that are normally activated only in utero or in ovo in the egg during fetal development the thanatomicrobiome is characterized by a diverse assortment of microorganisms located in internal organs brain heart liver and spleen and blood samples collected after a human dies it is defined as the microbial community of internal body sites created by a successional process whereby trillions of microorganisms populate proliferate andor die within the dead body resulting in temporal modifications in the community composition over time characterization and quantification 
of the transcriptome in a given dead tissue can identify genetic assets which can be used to determine the regulatory mechanisms and set networks of gene expression the techniques commonly used for simultaneously measuring the concentration of a large number of different types of mrna include microarrays and highthroughput sequencing via rnaseq analysis from a serology postmortem can characterize the transcriptome of a particular tissue cell type or compare the transcriptomes between various experimental conditions such analysis can be complementary to the analysis of thanatomicrobiome to better understand the process of transformation of the necromass in the hours and days following death future applications of this information could include constructing a more'
|
-| 37 | - 'door being closed there is no opposition in this predicate 1b and 1c both have predicates showing transitions of the door going from being implicitly open to closed 1b gives the intransitive use of the verb close with no explicit mention of the causer but 1c makes explicit mention of the agent involved in the action the analysis of these different lexical units had a decisive role in the field of generative linguistics during the 1960s the term generative was proposed by noam chomsky in his book syntactic structures published in 1957 the term generative linguistics was based on chomskys generative grammar a linguistic theory that states systematic sets of rules x theory can predict grammatical phrases within a natural language generative linguistics is also known as governmentbinding theory generative linguists of the 1960s including noam chomsky and ernst von glasersfeld believed semantic relations between transitive verbs and intransitive verbs were tied to their independent syntactic organization this meant that they saw a simple verb phrase as encompassing a more complex syntactic structure lexicalist theories became popular during the 1980s and emphasized that a words internal structure was a question of morphology and not of syntax lexicalist theories emphasized that complex words resulting from compounding and derivation of affixes have lexical entries that are derived from morphology rather than resulting from overlapping syntactic and phonological properties as generative linguistics predicts the distinction between generative linguistics and lexicalist theories can be illustrated by considering the transformation of the word destroy to destruction generative linguistics theory states the transformation of destroy → destruction as the nominal nom destroy combined with phonological rules that produce the output destruction views this transformation as independent of the morphology lexicalist theory sees destroy and destruction as having 
idiosyncratic lexical entries based on their differences in morphology argues that each morpheme contributes specific meaning states that the formation of the complex word destruction is accounted for by a set of lexical rules which are different and independent from syntactic rules a lexical entry lists the basic properties of either the whole word or the individual properties of the morphemes that make up the word itself the properties of lexical items include their category selection cselection selectional properties sselection also known as semantic selection phonological properties and features the properties of lexical items are idiosyncratic unpredictable and contain specific information about the lexical items that they describe the following is an example of a lexical entry for the verb put lexicalist theories state that a words meaning is'
- 'de se is latin for of oneself and in philosophy it is a phrase used to delineate what some consider a category of ascription distinct from de dicto and de re such ascriptions are found with propositional attitudes mental states an agent holds toward a proposition such de se ascriptions occur when an agent holds a mental state towards a proposition about themselves knowing that this proposition is about themselves a sentence such as peter thinks that he is pale where the pronoun he is meant to refer to peter is ambiguous in a way not captured by the de dicto de re distinction such a sentence could report that peter has the following thought i am pale or peter could have the following thought he is pale where it so happens that the pronoun he refers to peter but peter is unaware of it the first meaning expresses a belief de se while the second does not this notion is extensively discussed in the philosophical literature as well as in the theoretical linguistic literature the latter because some linguistic phenomena clearly are sensitive to this notion david lewiss 1979 article attitudes de dicto and de se gave full birth to the topic and his expression of it draws heavily on his distinctive theory of possible worlds but modern discussions on this topic originate with hectorneri castanedas discovery of what he called quasi indexicals or “ quasiindicators ” according to castaneda the speaker of the sentence “ mary believes that she herself is the winner ” uses the quasiindicator “ she herself ” often written “ she∗ ” to express marys firstperson reference to herself ie to mary that sentence would be the speakers way of depicting the proposition that mary would unambiguously express in the first person by “ i am the winner ” a clearer case can be illustrated simply imagine the following scenario peter who is running for office is drunk he is watching an interview of a candidate on tv not realizing that this candidate is himself liking what he hears he says i hope 
this candidate gets elected having witnessed this one can truthfully report peters hopes by uttering peter hopes that he will get elected where he refers to peter since this candidate indeed refers to peter however one could not report peters hopes by saying peter hopes to get elected this last sentence is only appropriate if peter had a de se hope that is a hope in the first person as if he had said i hope i get elected which is not the case here the study of the notion of belief de se thus includes that of quasiindexicals the linguistic theory of logophoricity and logophoric pronouns and the linguistic and literary'
- '##mal ie near or closer to the speaker and distal ie far from the speaker andor closer to the addressee english exemplifies this with such pairs as this and that here and there etc in other languages the distinction is threeway or higher proximal ie near the speaker medial ie near the addressee and distal ie far from both this is the case in a few romance languages and in serbocroatian korean japanese thai filipino macedonian yaqui and turkish the archaic english forms yon and yonder still preserved in some regional dialects once represented a distal category that has now been subsumed by the formerly medial there in the sinhala language there is a fourway deixis system for both person and place near the speaker meː near the addressee oː close to a third person visible arəː and far from all not visible eː the malagasy language has seven degrees of distance combined with two degrees of visibility while many inuit languages have even more complex systems temporal deixis temporal deixis or time deixis concerns itself with the various times involved in and referred to in an utterance this includes time adverbs like now then and soon as well as different verbal tenses a further example is the word tomorrow which denotes the next consecutive day after any day it is used tomorrow when spoken on a day last year denoted a different day from tomorrow when spoken next week time adverbs can be relative to the time when an utterance is made what fillmore calls the encoding time or et or the time when the utterance is heard fillmores decoding time or dt although these are frequently the same time they can differ as in the case of prerecorded broadcasts or correspondence for example if one were to write temporal deictical terms are in italics it is raining now but i hope when you read this it will be sunny the et and dt would be different with now referring to the moment the sentence is written and when referring to the moment the sentence is read tenses are generally separated 
into absolute deictic and relative tenses so for example simple english past tense is absolute such as in he went whereas the pluperfect is relative to some other deictically specified time as in he had gone though the traditional categories of deixis are perhaps the most obvious there are other types of deixis that are similarly pervasive in language use these categories of deixis were first discussed by fillmore and lyons and were echoed in works of others discourse deixis discourse deixis also referred'
|
-| 4 | - 't fractional calculus fractionalorder system multifractal system'
- 'singleparticle trajectories spts consist of a collection of successive discrete points causal in time these trajectories are acquired from images in experimental data in the context of cell biology the trajectories are obtained by the transient activation by a laser of small dyes attached to a moving molecule molecules can now be visualized based on recent superresolution microscopy which allow routine collections of thousands of short and long trajectories these trajectories explore part of a cell either on the membrane or in 3 dimensions and their paths are critically influenced by the local crowded organization and molecular interaction inside the cell as emphasized in various cell types such as neuronal cells astrocytes immune cells and many others spt allowed observing moving particles these trajectories are used to investigate cytoplasm or membrane organization but also the cell nucleus dynamics remodeler dynamics or mrna production due to the constant improvement of the instrumentation the spatial resolution is continuously decreasing reaching now values of approximately 20 nm while the acquisition time step is usually in the range of 10 to 50 ms to capture short events occurring in live tissues a variant of superresolution microscopy called sptpalm is used to detect the local and dynamically changing organization of molecules in cells or events of dna binding by transcription factors in mammalian nucleus superresolution image acquisition and particle tracking are crucial to guarantee a high quality data once points are acquired the next step is to reconstruct a trajectory this step is done using known tracking algorithms to connect the acquired points tracking algorithms are based on a physical model of trajectories perturbed by an additive random noise the redundancy of many short spts is a key feature to extract biophysical information parameters from empirical data at a molecular level in contrast long isolated trajectories have been used to extract 
information along trajectories destroying the natural spatial heterogeneity associated to the various positions the main statistical tool is to compute the meansquare displacement msd or second order statistical moment $\langle (x(t+\Delta t)-x(t))^{2}\rangle \sim t^{\alpha}$ average over realizations where $\alpha$ is called the anomalous exponent for a brownian motion $\langle (x(t+\Delta t)-x(t))^{2}\rangle = 2nDt$ where d is the diffusion coefficient n is dimension of the space some other properties can also be recovered from long trajectories such as the'
- 'k party communication complexity $C_{A}^{k}(f)$ of a function $f$ with respect to partition $A$ is the minimum of costs of those $k$ party protocols which compute $f$ the $k$ party symmetric communication complexity of $f$ is defined as $C^{k}(f)=\max_{A}C_{A}^{k}(f)$ where the maximum is taken over all kpartitions of set $X=\{x_{1},x_{2},\dots,x_{n}\}$ for a general upper bound both for two and more players let us suppose that $A_{1}$ is one of the smallest classes of the partition $A_{1},A_{2},\dots,A_{k}$ then $P_{1}$ can compute any boolean function of $S$ with $|A_{1}|+1$ bits of communication $P_{2}$ writes down the $|A_{1}|$ bits of $A_{1}$ on the blackboard $P_{1}$ reads it and computes and announces the value $f(x)$ so the following can be written $C^{k}(f)\leq \left\lfloor \frac{n}{k}\right\rfloor +1$ the generalized inner product function gip is defined as follows let $y_{1},y_{2},\dots,y_{k}$ be $n$ bit vectors and let $Y$ be the $n\times k$ matrix with $k$ columns as the $y_{1},y_{2},\dots,y_{k}$ vectors then $gip(y_{1},y_{2},\dots,y_{k})$ is the number of the all1 rows of matrix $Y$ taken modulo 2 in other words if the vectors $y_{1},y_{2},\dots,y_{k}$ correspond to the characteristic vectors of $k$ subsets of an $n$ element baseset then gip corresponds to the parity of the intersection of these $k$ subsets it was shown that $C^{k}(gip)\geq \frac{cn}{4^{k}}$ with a constant $c>0$ an upper bound on the multiparty communication complexity of gip shows that $C^{k}(gip)\leq \frac{cn}{2^{k}}$ with a constant $c>0$ for a general boolean function f one can bound the multiparty communication complexity of f by using its $L_{1}$ norm as follows $C^{k}(f)=O\!\left(k^{2}\log(nL_{1}(f))\sqrt{\frac{nL_{1}^{2}(f)}{2^{k}}}\right)$'
|
-| 26 | - 'in physical chemistry and materials science texture is the distribution of crystallographic orientations of a polycrystalline sample it is also part of the geological fabric a sample in which these orientations are fully random is said to have no distinct texture if the crystallographic orientations are not random but have some preferred orientation then the sample has a weak moderate or strong texture the degree is dependent on the percentage of crystals having the preferred orientation texture is seen in almost all engineered materials and can have a great influence on materials properties the texture forms in materials during thermomechanical processes for example during production processes eg rolling consequently the rolling process is often followed by a heat treatment to reduce the amount of unwanted texture controlling the production process in combination with the characterization of texture and the materials microstructure help to determine the materials properties ie the processingmicrostructuretextureproperty relationship also geologic rocks show texture due to their thermomechanic history of formation processes one extreme case is a complete lack of texture a solid with perfectly random crystallite orientation will have isotropic properties at length scales sufficiently larger than the size of the crystallites the opposite extreme is a perfect single crystal which likely has anisotropic properties by geometric necessity texture can be determined by various methods some methods allow a quantitative analysis of the texture while others are only qualitative among the quantitative techniques the most widely used is xray diffraction using texture goniometers followed by the electron backscatter diffraction ebsd method in scanning electron microscopes qualitative analysis can be done by laue photography simple xray diffraction or with a polarized microscope neutron and synchrotron highenergy xray diffraction are suitable for determining textures of bulk materials and in situ analysis whereas laboratory xray diffraction instruments are more appropriate for analyzing textures of thin films texture is often represented using a pole figure in which a specified crystallographic axis or pole from each of a representative number of crystallites is plotted in a stereographic projection along with directions relevant to the materials processing history these directions define the socalled sample reference frame and are because the investigation of textures started from the cold working of metals usually referred to as the rolling direction rd the transverse direction td and the normal direction nd for drawn metal wires the cylindrical fiber axis turned out as the sample direction around which preferred orientation is typically observed see below there are several textures that are commonly found in processed cubic materials they are named either by the scientist that discovered them or by'
- 'are specified according to several standards the most common standard in europe is iso 94541 also known as din en 294541this standard specifies each flux by a fourcharacter code flux type base activator and form the form is often omitted therefore 112 means rosin flux with halides the older german din 8511 specification is still often in use in shops in the table below note that the correspondence between din 8511 and iso 94541 codes is not onetoone one standard increasingly used eg in the united states is jstd004 it is very similar to din en 6119011 four characters two letters then one letter and last a number represent flux composition flux activity and whether activators include halides first two letters base ro rosin re resin or organic in inorganic third letter activity l low m moderate h high number halide content 0 less than 005 in weight “ halidefree ” 1 halide content depends on activity less than 05 for low activity 05 to 20 for moderate activity greater than 20 for high activityany combination is possible eg rol0 rem1 or orh0 jstd004 characterizes the flux by reliability of residue from a surface insulation resistance sir and electromigration standpoint it includes tests for electromigration and surface insulation resistance which must be greater than 100 mω after 168 hours at elevated temperature and humidity with a dc bias applied the old milf14256 and qqs571 standards defined fluxes as r rosin rma rosin mildly activated ra rosin activated ws watersolubleany of these categories may be noclean or not depending on the chemistry selected and the standard that the manufacturer requires fluxcored arc welding gas metal arc welding shielded metal arc welding'
- 'are very soft and ductile the resulting aluminium alloy will have much greater strength adding a small amount of nonmetallic carbon to iron trades its great ductility for the greater strength of an alloy called steel due to its veryhigh strength but still substantial toughness and its ability to be greatly altered by heat treatment steel is one of the most useful and common alloys in modern use by adding chromium to steel its resistance to corrosion can be enhanced creating stainless steel while adding silicon will alter its electrical characteristics producing silicon steel like oil and water a molten metal may not always mix with another element for example pure iron is almost completely insoluble with copper even when the constituents are soluble each will usually have a saturation point beyond which no more of the constituent can be added iron for example can hold a maximum of 667 carbon although the elements of an alloy usually must be soluble in the liquid state they may not always be soluble in the solid state if the metals remain soluble when solid the alloy forms a solid solution becoming a homogeneous structure consisting of identical crystals called a phase if as the mixture cools the constituents become insoluble they may separate to form two or more different types of crystals creating a heterogeneous microstructure of different phases some with more of one constituent than the other however in other alloys the insoluble elements may not separate until after crystallization occurs if cooled very quickly they first crystallize as a homogeneous phase but they are supersaturated with the secondary constituents as time passes the atoms of these supersaturated alloys can separate from the crystal lattice becoming more stable and forming a second phase that serves to reinforce the crystals internally some alloys such as electrum — an alloy of silver and gold — occur naturally meteorites are sometimes made of naturally occurring alloys of iron and nickel but are not native to the earth one of the first alloys made by humans was bronze which is a mixture of the metals tin and copper bronze was an extremely useful alloy to the ancients because it is much stronger and harder than either of its components steel was another common alloy however in ancient times it could only be created as an accidental byproduct from the heating of iron ore in fires smelting during the manufacture of iron other ancient alloys include pewter brass and pig iron in the modern age steel can be created in many forms carbon steel can be made by varying only the carbon content producing soft alloys like mild steel or hard alloys like spring steel alloy steels can be made by adding other elements such as chromium moly'
|
-| 20 | - '##ky to edward said every word in my book is accurate and you cant just simply say its false without documenting it tell me one thing in the book now that is false amy goodman okay lets go to the book the case for israel 10000 on democracy now finkelstein replied to that specific challenge for material errors found in his book overall and dershowitz upped it to 25000 for another particular issue that they disputedfinkelstein referred to concrete facts which are not particularly controversial stating that in the case for israel dershowitz attributes to israeli historian benny morris the figure of between 2000 and 3000 palestinian arabs who fled their homes from april to june 1948 when the range in the figures presented by morris is actually 200000 to 300000dershowitz responded to finkelsteins reply by stating that such a mistake could not have been intentional as it harmed his own side of the debate obviously the phrase 2000 to 3000 arabs refers either to a subphase of the flight or is a typographical error in this particular context dershowitzs argument is that palestinians left as a result of orders issued by palestinian commanders if in fact 200000 were told to leave instead of 2000 that strengthens my argument considerably in his review of beyond chutzpah echoing finkelsteins criticisms michael desch political science professor at university of notre dame observed not only did dershowitz improperly present peterss ideas he may not even have bothered to read the original sources she used to come up with them finkelstein somehow managed to get uncorrected page proofs of the case for israel in which dershowitz appears to direct his research assistant to go to certain pages and notes in peterss book and place them in his footnotes directly 32 col 3 oxford academic avi shlaim had also been critical of dershowitz saying he believed that the charge of plagiarism is proved in a manner that would stand up in courtin deschs review of beyond chutzpah summarizing finkelsteins case against dershowitz for torturing the evidence particularly finkelsteins argument relating to dershowitzs citations of morris desch observed there are two problems with dershowitzs heavy reliance on morris the first is that morris is hardly the leftwing peacenik that dershowitz makes him out to be which means that calling him as a witness in israels defense is not very helpful to the case the more important problem is that many of the points dershowitz cites morris as supporting — that the early zionists wanted peaceful coexi'
- 'sees it as a steady evolution of british parliamentary institutions benevolently watched over by whig aristocrats and steadily spreading social progress and prosperity it described a continuity of institutions and practices since anglosaxon times that lent to english history a special pedigree one that instilled a distinctive temper in the english nation as whigs liked to call it and an approach to the world which issued in law and lent legal precedent a role in preserving or extending the freedoms of englishmenpaul rapin de thoyrass history of england published in 1723 became the classic whig history for the first half of the eighteenth century rapin claimed that the english had preserved their ancient constitution against the absolutist tendencies of the stuarts however rapins history lost its place as the standard history of england in the late 18th century and early 19th century to that of david humewilliam blackstones commentaries on the laws of england 1765 – 1769 reveals many whiggish traitsaccording to arthur marwick however henry hallam was the first whig historian publishing constitutional history of england in 1827 which greatly exaggerated the importance of parliaments or of bodies whig historians thought were parliaments while tending to interpret all political struggles in terms of the parliamentary situation in britain during the nineteenth century in terms that is of whig reformers fighting the good fight against tory defenders of the status quo in the history of england 1754 – 1761 hume challenged whig views of the past and the whig historians in turn attacked hume but they could not dent his history in the early 19th century some whig historians came to incorporate humes views dominant for the previous fifty years these historians were members of the new whigs around charles james fox 1749 – 1806 and lord holland 1773 – 1840 in opposition until 1830 and so needed a new historical philosophy fox himself intended to write a history of the glorious revolution of 1688 but only managed the first year of james iis reign a fragment was published in 1808 james mackintosh then sought to write a whig history of the glorious revolution published in 1834 as the history of the revolution in england in 1688 hume still dominated english historiography but this changed when thomas babington macaulay entered the field utilising fox and mackintoshs work and manuscript collections macaulays history of england was published in a series of volumes from 1848 to 1855 it proved an immediate success replacing humes history and becoming the new orthodoxy as if to introduce a linear progressive view of history the first chapter of macaulays history of england proposes the history of our country during the last hundred and sixty years is eminently the history of physical'
- 'the long nineteenth century is a term for the 125year period beginning with the onset of the french revolution in 1789 and ending with the outbreak of world war i in 1914 it was coined by russian writer ilya ehrenburg and later popularized by british marxist historian eric hobsbawm the term refers to the notion that the period reflects a progression of ideas which are characteristic to an understanding of the 19th century in europe the concept is an adaption of fernand braudels 1949 notion of le long seizieme siecle the long 16th century 1450 – 1640 and a recognized category of literary history although a period often broadly and diversely defined by different scholars numerous authors before and after hobsbawms 1995 publication have applied similar forms of book titles or descriptions to indicate a selective time frame for their works such as s ketterings french society 1589 – 1715 – the long seventeenth century e anthony wrigleys british population during the long eighteenth century 1680 – 1840 or d blackbourns the long nineteenth century a history of germany 1780 – 1918 however the term has been used in support of historical publications to connect with broader audiences and is regularly cited in studies and discussions across academic disciplines such as history linguistics and the arts hobsbawm lays out his analysis in the age of revolution europe 1789 – 1848 1962 the age of capital 1848 – 1875 1975 and the age of empire 1875 – 1914 1987 hobsbawm starts his long 19th century with the french revolution which sought to establish universal and egalitarian citizenship in france and ends it with the outbreak of world war i upon the conclusion of which in 1918 the longenduring european power balance of the 19th century proper 1801 – 1900 was eliminated in a sequel to the abovementioned trilogy the age of extremes the short twentieth century 1914 – 1991 1994 hobsbawm details the short 20th century a concept originally proposed by ivan t berend beginning with world war i and ending with the fall of the soviet union between 1914 – 1991a more generalized version of the long 19th century lasting from 1750 to 1914 is often used by peter n stearns in the context of the world history school in religious contexts specifically those concerning the history of the catholic church the long 19th century was a period of centralization of papal power over the catholic church this centralization was in opposition to the increasingly centralized nation states and contemporary revolutionary movements and used many of the same organizational and communication techniques as its rivals the churchs long 19th century extended from the french revolution 1789 until the death of pope pius xii 1958 this covers'
|
-| 13 | - 'of group musicmaking through the long development of the republic system developed and employed by members of the network band powerbooks unplugged republic is built into the supercollider language and allows participants to collaboratively write live code that is distributed across the network of computers there are similar efforts in other languages such as the distributed tuple space used in the impromptu language additionally overtone impromptu and extempore support multiuser sessions in which any number of programmers can intervene across the network in a given runtime process the practice of writing code in group can be done in the same room through a local network or from remote places accessing a common server terms like laptop band laptop orchestra collaborative live coding or collective live coding are used to frame a networked live coding practice both in a local or remote way toplap the temporarytransnationalterrestrialtransdimensional organisation for the promotionproliferationpermanencepurity of live algorithmaudioartartistic programming is an informal organization formed in february 2004 to bring together the various communities that had formed around live coding environments the toplap manifesto asserts several requirements for a toplap compliant performance in particular that performers screens should be projected and not hiddenonthefly promotes live coding practice since 2020 this is a project cofunded by the creative european program and run in hangar zkm ljudmila and creative code utrecht a number of research projects and research groups have been created to explore live coding often taking interdisciplinary approaches bridging the humanities and sciences first efforts to both develop live coding systems and embed the emerging field in the broader theoretical context happened in the research project artistic interactivity in hybrid networks from 2005 to 2008 funded by the german research foundationfurther the live coding research network was funded by the uk arts and humanities research council for two years from february 2014 supporting a range of activities including symposia workshops and an annual international conference called international conference on live coding iclc algorave — event where music andor visuals are generated from algorithms generally live coded demoscene — subculture around coding audiovisual presentations demos exploratory programming — the practice of building software as a way to understand its requirements and structure interactive programming — programming practice of using live coding in software development nime — academic and artistic conference on advances in music technology sometimes featuring live coding performances and research presentations andrews robert “ real djs code live ” wired online 7 march 2006 brown andrew r “ code jamming ” mc journal 96 december 2006 magnusson thor herding cats observing live coding in the wild computer music journal'
- '##y the 1960s produced a strain of cybernetic art that was very much concerned with the shared circuits within and between the living and the technological a line of cybernetic art theory also emerged during the late 1960s writers like jonathan benthall and gene youngblood drew on cybernetics and cybernetic the most substantial contributors here were the british artist and theorist roy ascott with his essay behaviourist art and the cybernetic vision in the journal cybernetica 1966 – 67 and the american critic and theorist jack burnham in beyond modern sculpture from 1968 burnham builds cybernetic art into an extensive theory that centers on arts drive to imitate and ultimately reproduce life also in 1968 curator jasia reichardt organized the landmark exhibition cybernetic serendipity at the institute of contemporary art in london generative art is art that has been generated composed or constructed in an algorithmic manner through the use of systems defined by computer software algorithms or similar mathematical or mechanical or randomised autonomous processes sonia landy sheridan established generative systems as a program at the school of the art institute of chicago in 1970 in response to social change brought about in part by the computerrobot communications revolution the program which brought artists and scientists together was an effort at turning the artists passive role into an active one by promoting the investigation of contemporary scientific — technological systems and their relationship to art and life unlike copier art which was a simple commercial spinoff generative systems was actually involved in the development of elegant yet simple systems intended for creative use by the general population generative systems artists attempted to bridge the gap between elite and novice by directing the line of communication between the two thus bringing first generation information to greater numbers of people and bypassing the entrepreneur process art is an artistic movement as well as a creative sentiment and world view where the end product of art and craft the objet d ’ art is not the principal focus the process in process art refers to the process of the formation of art the gathering sorting collating associating and patterning process art is concerned with the actual doing art as a rite ritual and performance process art often entails an inherent motivation rationale and intentionality therefore art is viewed as a creative journey or process rather than as a deliverable or end product in the artistic discourse the work of jackson pollock is hailed as an antecedent process art in its employment of serendipity has a marked correspondence with dada change and transience are marked themes in the process art movement the guggenheim museum states that robert morris in 1968 had a groundbreaking exhibition and essay defining the movement and'
- 'music visualization or music visualisation a feature found in electronic music visualizers and media player software generates animated imagery based on a piece of music the imagery is usually generated and rendered in real time and in a way synchronized with the music as it is played visualization techniques range from simple ones eg a simulation of an oscilloscope display to elaborate ones which often include a number of composited effects the changes in the musics loudness and frequency spectrum are among the properties used as input to the visualization effective music visualization aims to attain a high degree of visual correlation between a musical tracks spectral characteristics such as frequency and amplitude and the objects or components of the visual image being rendered and displayed music visualization can be defined in contrast to previous existing pregenerated music plus visualization combinations as for example music videos by its characteristic as being realtime generated another possible distinction is seen by some in the ability of some music visualization systems such as geiss milkdrop to create different visualizations for each song or audio every time the program is run in contrast to other forms of music visualization such as music videos or a laser lighting display which always show the same visualization music visualization may be achieved in a 2d or a 3d coordinate system where up to six dimensions can be modified the 4th 5th and 6th dimensions being color intensity and transparency the first electronic music visualizer was the atari video music introduced by atari inc in 1976 and designed by the initiator of the home version of pong robert brown the idea was to create a visual exploration that could be implemented into a hifi stereo system in the united kingdom music visualization was first pioneered by fred judd music and audio players were available on early home computers sound to light generator 1985 infinite software used the zx spectrums cassette player for example the 1984 movie electric dreams prominently made use of one although as a pregenerated effect rather than calculated in realtime for pcdos one of the first modern music visualization programs was the opensource multiplatform cthugha in 1993 in the 1990s the emerging demo and tracker music scene pioneered the realtime technics for music visualization on the pc platform resulting examples are cubic player 1994 inertia player 1995 or in general their realtime generated demossubsequently pc computer music visualization became widespread in the mid to late 1990s as applications such as winamp 1997 audion 1999 and soundjam 2000 by 1999 there were several dozen freeware nontrivial music visualizers in distribution in particular milkdrop 2001 and its predecessor ge'
|
-| 33 | - 'a psychic detective is a person who investigates crimes by using purported paranormal psychic abilities examples have included postcognition the paranormal perception of the past psychometry information psychically gained from objects telepathy dowsing clairvoyance and remote viewing in murder cases psychic detectives may purport to be in communication with the spirits of the murder victims individuals claiming psychic abilities have stated they have helped police departments to solve crimes however there is a lack of police corroboration of their claims many police departments around the world have released official statements saying that they do not regard psychics as credible or useful on cases many prominent police cases often involving missing persons have received the attention of alleged psychics in november 2004 purported psychic sylvia browne told the mother of kidnapping victim amanda berry who had disappeared 19 months earlier shes not alive honey browne also claimed to have had a vision of berrys jacket in the garbage with dna on it berrys mother died two years later believing that her daughter had been killed berry was found alive in may 2013 having been a kidnapping victim of ariel castro along with michelle knight and gina dejesus after berry was found alive browne received criticism for the false declaration that berry was dead browne also became involved in the case of shawn hornbeck which received the attention of psychics after the elevenyearold went missing on 6 october 2002 browne appeared on the montel williams show and provided the parents of shawn hornbeck a detailed description of the abductor and where hornbeck could be found browne responded no when asked if he was still alive when hornbeck was found alive more than four years later few of the details given by browne were correct shawn hornbecks father craig akers has stated that brownes declaration was one of the hardest things that weve ever had to hear and that her misinformation diverted investigators wasting precious police timewhen washington dc intern chandra levy went missing on 1 may 2001 psychics from around the world provided tips suggesting that her body would be found in places such as the basement of a smithsonian storage building in the potomac river and buried in the nevada desert among many other possible locations each tip led nowhere a little more than a year after her disappearance levys body was accidentally discovered by a man walking his dog in a remote section of rock creek parkfollowing the disappearance of elizabeth smart on 5 june 2002 the police received as many as 9000 tips from psychics and others crediting visions and dreams as their source responding to these tips took many police hours according to salt lake city police chief lieutenant chris burbank yet elizabeth smarts father ed'
- 'telepathy and communication with the dead were impossible and that the mind of man cannot be read through telepathy but only by muscle reading in the late 19th century the creery sisters mary alice maud kathleen and emily were tested by the society for psychical research and believed to have genuine psychic ability however during a later experiment they were caught utilizing signal codes and they confessed to fraud george albert smith and douglas blackburn were claimed to be genuine psychics by the society for psychical research but blackburn confessed to fraud for nearly thirty years the telepathic experiments conducted by mr g a smith and myself have been accepted and cited as the basic evidence of the truth of thought transference the whole of those alleged experiments were bogus and originated in the honest desire of two youths to show how easily men of scientific mind and training could be deceived when seeking for evidence in support of a theory they were wishful to establish between 1916 and 1924 gilbert murray conducted 236 experiments into telepathy and reported 36 as successful however it was suggested that the results could be explained by hyperaesthesia as he could hear what was being said by the sender psychologist leonard t troland had carried out experiments in telepathy at harvard university which were reported in 1917 the subjects produced below chance expectationsarthur conan doyle and w t stead were duped into believing julius and agnes zancig had genuine psychic powers both doyle and stead wrote that zancigs performed telepathy in 1924 julius and agnes zancig confessed that their mind reading act was a trick and published the secret code and all the details of the trick method they had used under the title of our secrets in a london newspaperin 1924 robert h gault of northwestern university with gardner murphy conducted the first american radio test for telepathy the results were entirely negative one of their experiments involved the attempted thought transmission of a chosen number between one and onethousand out of 2010 replies none was correct this is below the theoretical chance figure of two correct replies in such a situationin february 1927 with the cooperation of the british broadcasting corporation bbc v j woolley who was at the time the research officer for the spr arranged a telepathy experiment in which radio listeners were asked to take part the experiment involved agents thinking about five selected objects in an office at tavistock square whilst listeners on the radio were asked to identify the objects from the bbc studio at savoy hill 24659 answers were received the results revealed no evidence of telepathya famous experiment in telepathy was recorded by the american author upton sinclair'
- 'bars by telekinesis he was tested in the 1970s but failed to produce any paranormal effects in scientifically controlled conditions he was tested on january 19 1977 during a twohour experiment in a paris laboratory directed by physicist yves farge a magician was also present girard failed to make any objects move paranormally he failed two tests in grenoble in june 1977 with magician james randi he was also tested on september 24 1977 at a laboratory at the nuclear research centre and failed to bend any bars or change the metals structure other experiments into spoonbending were also negative and witnesses described his feats as fraudulent girard later admitted he sometimes cheated to avoid disappointing the public but insisted he had genuine psychic power magicians and scientists have written that he produced all his alleged telekinetic feats through fraudulent meansstephen north a british psychic in the late 1970s was known for his alleged telekinetic ability to bend spoons and teleport objects in and out of sealed containers british physicist john hasted tested north in a series of experiments which he claimed had demonstrated telekinesis though his experiments were criticized for lack of scientific controls north was tested in grenoble on december 19 1977 in scientific conditions and the results were negative according to james randi during a test at birkbeck college north was observed to have bent a metal sample with his bare hands randi wrote i find it unfortunate that hasted never had an epiphany in which he was able to recognize just how thoughtless cruel and predatory were the acts perpetrated on him by fakers who took advantage of his naivety and trusttelekinesis parties were a cultural fad in the 1980s begun by jack houck where groups of people were guided through rituals and chants to awaken metalbending powers they were encouraged to shout at the items of cutlery they had brought and to jump and scream to create an atmosphere of pandemonium or what scientific investigators called heightened suggestibility critics were excluded and participants were told to avoid looking at their hands thousands of people attended these emotionally charged parties and many were convinced they had bent the objects by paranormal means 149 – 161 telekinesis parties have been described as a campaign by paranormal believers to convince people of the existence of telekinesis on the basis of nonscientific data from personal experience and testimony the united states national academy of sciences has criticized telekinesis parties on the grounds that conditions are not reliable for obtaining scientific results and are just those which psychologists and others have described as creating states of heightened suggest'
|
-| 7 | - 'an audiogram is a graph that shows the audible threshold for standardized frequencies as measured by an audiometer the y axis represents intensity measured in decibels db and the x axis represents frequency measured in hertz hz the threshold of hearing is plotted relative to a standardised curve that represents normal hearing in dbhl they are not the same as equalloudness contours which are a set of curves representing equal loudness at different levels as well as at the threshold of hearing in absolute terms measured in db spl sound pressure level the frequencies displayed on the audiogram are octaves which represent a doubling in frequency eg 250 hz 500 hz 1000 hz etc commonly tested interoctave frequencies eg 3000 hz may also be displayed the intensities displayed on the audiogram appear as linear 10 dbhl steps however decibels are a logarithmic scale so that successive 10 db increments represent greater increases in loudness for humans normal hearing is between −10 dbhl and 15 dbhl although 0 db from 250 hz to 8 khz is deemed to be average normal hearing hearing thresholds of humans and other mammals can be found with behavioural hearing tests or physiological tests used in audiometry for adults a behavioural hearing test involves a tester who presents tones at specific frequencies pitches and intensities loudnesses when the testee hears the sound he or she responds eg by raising a hand or pressing a button the tester records the lowest intensity sound the testee can hear with children an audiologist makes a game out of the hearing test by replacing the feedback device with activityrelated toys such as blocks or pegs this is referred to as conditioned play audiometry visual reinforcement audiometry is also used with children when the child hears the sound he or she looks in the direction the sound came from and are reinforced with a light andor animated toy a similar technique can be used when testing some animals but instead of a toy food can be used as a reward for responding to the sound physiological tests do not need the patient to respond katz 2002 for example when performing the brainstem auditory evoked potentials the patients brainstem responses are being measured when a sound is played into their ear or otoacoustic emissions which are generated by a healthy inner ear either spontaneously or evoked by an outside stimulus in the us the niosh recommends that people who are regularly exposed to hazardous noise have their hearing tested once a year or every three years otherwise audiograms are produced using a piece of test equipment called an audiometer and this'
- '##platinin addition to medications hearing loss can also result from specific chemicals in the environment metals such as lead solvents such as toluene found in crude oil gasoline and automobile exhaust for example and asphyxiants combined with noise these ototoxic chemicals have an additive effect on a persons hearing loss hearing loss due to chemicals starts in the high frequency range and is irreversible it damages the cochlea with lesions and degrades central portions of the auditory system for some ototoxic chemical exposures particularly styrene the risk of hearing loss can be higher than being exposed to noise alone the effects is greatest when the combined exposure include impulse noise a 2018 informational bulletin by the us occupational safety and health administration osha and the national institute for occupational safety and health niosh introduces the issue provides examples of ototoxic chemicals lists the industries and occupations at risk and provides prevention informationthere can be damage either to the ear whether the external or middle ear to the cochlea or to the brain centers that process the aural information conveyed by the ears damage to the middle ear may include fracture and discontinuity of the ossicular chain damage to the inner ear cochlea may be caused by temporal bone fracture people who sustain head injury are especially vulnerable to hearing loss or tinnitus either temporary or permanent sound waves reach the outer ear and are conducted down the ear canal to the eardrum causing it to vibrate the vibrations are transferred by the 3 tiny ear bones of the middle ear to the fluid in the inner ear the fluid moves hair cells stereocilia and their movement generates nerve impulses which are then taken to the brain by the cochlear nerve the auditory nerve takes the impulses to the brainstem which sends the impulses to the midbrain finally the signal goes to the auditory cortex of the temporal lobe to be interpreted as soundhearing loss 
is most commonly caused by longterm exposure to loud noises from recreation or from work that damage the hair cells which do not grow back on their ownolder people may lose their hearing from long exposure to noise changes in the inner ear changes in the middle ear or from changes along the nerves from the ear to the brain identification of a hearing loss is usually conducted by a general practitioner medical doctor otolaryngologist certified and licensed audiologist school or industrial audiometrist or other audiometric technician diagnosis of the cause of a hearing loss is carried out by a specialist physician audiovestibular physician or otorhinolaryngologist hearing loss'
- '##anometry and speech audiometry may be helpful testing is performed by an audiologist there is no proven or recommended treatment or cure for snhl management of hearing loss is usually by hearing strategies and hearing aids in cases of profound or total deafness a cochlear implant is a specialised hearing aid that may restore a functional level of hearing snhl is at least partially preventable by avoiding environmental noise ototoxic chemicals and drugs and head trauma and treating or inoculating against certain triggering diseases and conditions like meningitis since the inner ear is not directly accessible to instruments identification is by patient report of the symptoms and audiometric testing of those who present to their doctor with sensorineural hearing loss 90 report having diminished hearing 57 report having a plugged feeling in ear and 49 report having ringing in ear tinnitus about half report vestibular vertigo problemsfor a detailed exposition of symptoms useful for screening a selfassessment questionnaire was developed by the american academy of otolaryngology called the hearing handicap inventory for adults hhia it is a 25question survey of subjective symptoms sensorineural hearing loss may be genetic or acquired ie as a consequence of disease noise trauma etc people may have a hearing loss from birth congenital or the hearing loss may come on later many cases are related to old age agerelated hearing loss can be inherited more than 40 genes have been implicated in the cause of deafness there are 300 syndromes with related hearing loss and each syndrome may have causative genesrecessive dominant xlinked or mitochondrial genetic mutations can affect the structure or metabolism of the inner ear some may be single point mutations whereas others are due to chromosomal abnormalities some genetic causes give rise to a late onset hearing loss mitochondrial mutations can cause snhl ie m1555ag which makes the individual sensitive to the ototoxic effects of 
aminoglycoside antibiotics the most common cause of recessive genetic congenital hearing impairment in developed countries is dfnb1 also known as connexin 26 deafness or gjb2related deafness the most common syndromic forms of hearing impairment include dominant stickler syndrome and waardenburg syndrome and recessive pendred syndrome and usher syndrome mitochondrial mutations causing deafness are rare mttl1 mutations cause midd maternally inherited deafness and diabetes and other conditions which may include deafness as part of the picture tmprss3 gene was identified by its association with both congenital and childhood onset autosomal recessive deafness this gene is expressed in fetal co'
|
| 3 | - '##ilise and suggest other technologies such as mobile phones or psion organisers as such feedback studies involve asynchronous communication between the participants and the researchers as the participants ’ data is recorded in their diary first and then passed on to the researchers once completefeedback studies are scalable that is a largescale sample can be used since it is mainly the participants themselves who are responsible for collecting and recording data in elicitation studies participants capture media as soon as the phenomenon occurs the media is usually in the form of a photograph but can be in other different forms as well and so the recording is generally quick and less effortful than feedback studies these media are then used as prompts and memory cues to elicit memories and discussion in interviews that take place much later as such elicitation studies involve synchronous communication between the participants and the researchers usually through interviewsin these later interviews the media and other memory cues such as what activities were done before and after the event can improve participants ’ episodic memory in particular photos were found to elicit more specific recall than all other media types there are two prominent tradeoffs between each type of study feedback studies involve answering questions more frequently and in situ therefore enabling more accurate recall but more effortful recording in contrast elicitation studies involve quickly capturing media in situ but answering questions much later therefore enabling less effortful recording but potentially inaccurate recall diary studies are most often used when observing behavior over time in a natural environment they can be beneficial when one is looking to find new qualitative and quantitative data advantages of diary studies are numerous they allow collecting longitudinal and temporal information reporting events and experiences in context and inthemoment participants to diary their behaviours thoughts and feelings inthemoment thereby minimising the potential for post rationalisation determining the antecedents correlations and consequences of daily experiences and behaviors there are some limitations of diary studies mainly due to their characteristics of reliance on memory and selfreport measures there is low control low participation and there is a risk of disturbing the action in feedback studies it can be troubling and disturbing to write everything down the validity of diary studies rests on the assumption that participants will accurately recall and record their experiences this is somewhat more easily enabled by the fact that diaries are completed media is captured in a natural environment and closer in realtime to any occurrences of the phenomenon of interest however there are multiple barriers to obtaining accurate data such as social desirability bias where participants may answer in a way that makes them appear more socially desirable this may be more prominent in longitudinal studies' - 'turn killed by his relations and friends the moment a grey hair appears on his head all the noble savages wars with his fellowsavages and he takes no pleasure in anything else are wars of extermination — which is the best thing i know of him and the most comfortable to my mind when i look at him he has no moral feelings of any kind sort or description and his mission may be summed up as simply diabolical dickens ends his cultural criticism by reiterating his argument against the romanticized persona of the noble savage to conclude as i began my position is that if we have anything to learn from the noble savage it is what to avoid his virtues are a fable his happiness is a delusion his nobility nonsense we have no greater justification for being cruel to the miserable object than for being cruel to a william shakespeare or an isaac newton but he passes away before an immeasurably better and higher power than ever ran wild in any earthly woods and the world will be all the better when this place earth knows him no more in 1860 the physician john crawfurd and the anthropologist james hunt identified the racial stereotype of the noble savage as an example of scientific racism yet as advocates of polygenism — that each race is a distinct species of man — crawfurd and hunt dismissed the arguments of their opponents by accusing them of being proponents of rousseaus noble savage later in his career crawfurd reintroduced the noble savage term to modern anthropology and deliberately ascribed coinage of the term to jeanjacques rousseau in war before civilization the myth of the peaceful savage 1996 the archaeologist lawrence h keeley said that the widespread myth that civilized humans have fallen from grace from a simple primeval happiness a peaceful golden age is contradicted and refuted by archeologic evidence that indicates that violence was common practice in early human societies that the noble savage paradigm has warped anthropological literature to political ends moreover the anthropologist roger sandall likewise accused anthropologists of exalting the noble savage above civilized man by way of designer tribalism a form of romanticised primitivism that dehumanises indigenous peoples into the cultural stereotype of the indigene peoples who live a primitive way of life demarcated and limited by tradition which discouraged indigenous peoples from cultural assimilation into the dominant western culture in the prehistory of warfare misled by ethnography 2006 the researchers jonathan haas and matthew piscitelli challenged the idea that the human species is innately bellicose and that warfare is an occasional act' - 'head a small terracotta sculpture of a head with a beard and europeanlike features was found in 1933 in the toluca valley 72 kilometres 45 mi southwest of mexico city in a burial offering under three intact floors of a precolonial building dated to between 1476 and 1510 the artifact has been studied by roman art authority bernard andreae director emeritus of the german institute of archaeology in rome italy and austrian anthropologist robert von heinegeldern both of whom stated that the style of the artifact was compatible with small roman sculptures of the 2nd century if genuine and if not placed there after 1492 the pottery found with it dates to between 1476 and 1510 the find provides evidence for at least a onetime contact between the old and new worldsaccording to arizona state universitys michael e smith a leading mesoamerican scholar named john paddock used to tell his classes in the years before he died that the artifact was planted as a joke by hugo moedano a student who originally worked on the site despite speaking with individuals who knew the original discoverer garcia payon and moedano smith says he has been unable to confirm or reject this claim though he remains skeptical smith concedes he cannot rule out the possibility that the head was a genuinely buried postclassic offering at calixtlahuaca henry i sinclair earl of orkney and feudal baron of roslin c 1345 – c 1400 was a scottish nobleman who is best known today from a modern legend which claims that he took part in explorations of greenland and north america almost 100 years before christopher columbuss voyages to the americas in 1784 he was identified by johann reinhold forster as possibly being the prince zichmni who is described in letters which were allegedly written around 1400 by the zeno brothers of venice in which they describe a voyage which they made throughout the north atlantic under the command of zichmni according to the dictionary of canadian biography online the zeno affair remains one of the most preposterous and at the same time one of the most successful fabrications in the history of explorationhenry was the grandfather of william sinclair 1st earl of caithness the builder of rosslyn chapel near edinburgh scotland the authors robert lomas and christopher knight believe some carvings in the chapel were intended to represent ears of new world corn or maize a crop unknown in europe at the time of the chapels construction knight and lomas view these carvings as evidence supporting the idea that henry sinclair traveled to the americas well before columbus in their book they discuss meeting with the wife of the botanist' |
| 21 | - '##lenishes nitrogen and other critical nutrients cover crops also help to suppress weeds soilconservation farming involves notill farming green manures and other soilenhancing practices which make it hard for the soils to be equalized such farming methods attempt to mimic the biology of barren lands they can revive damaged soil minimize erosion encourage plant growth eliminate the use of nitrogen fertilizer or fungicide produce aboveaverage yields and protect crops during droughts or flooding the result is less labor and lower costs that increase farmers ’ profits notill farming and cover crops act as sinks for nitrogen and other nutrients this increases the amount of soil organic matterrepeated plowingtilling degrades soil killing its beneficial fungi and earthworms once damaged soil may take multiple seasons to fully recover even in optimal circumstancescritics argue that notill and related methods are impractical and too expensive for many growers partly because it requires new equipment they cite advantages for conventional tilling depending on the geography crops and soil conditions some farmers have contended that notill complicates pest control delays planting and that postharvest residues especially for corn are hard to manage the use of pesticides can contaminate the soil and nearby vegetation and water sources for a long time they affect soil structure and biotic and abiotic composition differentiated taxation schemes are among the options investigated in the academic literature to reducing their use salinity in soil is caused by irrigating with salty water water then evaporates from the soil leaving the salt behind salt breaks down the soil structure causing infertility and reduced growththe ions responsible for salination are sodium na potassium k calcium ca2 magnesium mg2 and chlorine cl− salinity is estimated to affect about one third of the earths arable land soil salinity adversely affects crop metabolism and erosion usually follows salinity occurs on drylands from overirrigation and in areas with shallow saline water tables overirrigation deposits salts in upper soil layers as a byproduct of soil infiltration irrigation merely increases the rate of salt deposition the bestknown case of shallow saline water table capillary action occurred in egypt after the 1970 construction of the aswan dam the change in the groundwater level led to high salt concentrations in the water table the continuous high level of the water table led to soil salination use of humic acids may prevent excess salination especially given excessive irrigation humic acids can fix both anions and cations and eliminate them from root zonesplanting species that can tolerate' - 'in agriculture postharvest handling is the stage of crop production immediately following harvest including cooling cleaning sorting and packing the instant a crop is removed from the ground or separated from its parent plant it begins to deteriorate postharvest treatment largely determines final quality whether a crop is sold for fresh consumption or used as an ingredient in a processed food product the most important goals of postharvest handling are keeping the product cool to avoid moisture loss and slow down undesirable chemical changes and avoiding physical damage such as bruising to delay spoilage sanitation is also an important factor to reduce the possibility of pathogens that could be carried by fresh produce for example as residue from contaminated washing water after the field postharvest processing is usually continued in a packing house this can be a simple shed providing shade and running water or a largescale sophisticated mechanised facility with conveyor belts automated sorting and packing stations walkin coolers and the like in mechanised harvesting processing may also begin as part of the actual harvest process with initial cleaning and sorting performed by the harvesting machinery initial postharvest storage conditions are critical to maintaining quality each crop has an optimum range of storage temperature and humidity also certain crops cannot be effectively stored together as unwanted chemical interactions can result various methods of highspeed cooling and sophisticated refrigerated and atmospherecontrolled environments are employed to prolong freshness particularly in largescale operations once harvested vegetables and fruits are subject to the active process of degradation numerous biochemical processes continuously change the original composition of the crop until it becomes unmarketable the period during which consumption is considered acceptable is defined as the time of postharvest shelf lifepostharvest shelf life is typically determined by objective methods that determine the overall appearance taste flavor and texture of the commodity these methods usually include a combination of sensorial biochemical mechanical and colorimetric optical measurements a recent study attempted and failed to discover a biochemical marker and fingerprint methods as indices for freshness postharvest physiology is the scientific study of the plant physiology of living plant tissues after picking it has direct applications to postharvest handling in establishing the storage and transport conditions that best prolong shelf life an example of the importance of the field to postharvest handling is the discovery that ripening of fruit can be delayed and thus their storage prolonged by preventing fruit tissue respiration this insight allowed scientists to bring to bear their knowledge of the fundamental principles and mechanisms of respiration leading to postharvest storage techniques such as cold storage gaseous storage and' - 'cultivated plant taxonomy is the study of the theory and practice of the science that identifies describes classifies and names cultigens — those plants whose origin or selection is primarily due to intentional human activity cultivated plant taxonomists do however work with all kinds of plants in cultivation cultivated plant taxonomy is one part of the study of horticultural botany which is mostly carried out in botanical gardens large nurseries universities or government departments areas of special interest for the cultivated plant taxonomist include searching for and recording new plants suitable for cultivation plant hunting communicating with and advising the general public on matters concerning the classification and nomenclature of cultivated plants and carrying out original research on these topics describing the cultivated plants of particular regions horticultural floras maintaining databases herbaria and other information about cultivated plants much of the work of the cultivated plant taxonomist is concerned with the naming of plants as prescribed by two plant nomenclatural codes the provisions of the international code of nomenclature for algae fungi and plants botanical code serve primarily scientific ends and the objectives of the scientific community while those of the international code of nomenclature for cultivated plants cultivated plant code are designed to serve both scientific and utilitarian ends by making provision for the names of plants used in commerce — the cultigens that have arisen in agriculture forestry and horticulture these names sometimes called variety names are not in latin but are added onto the scientific latin names and they assist communication among the community of foresters farmers and horticulturists the history of cultivated plant taxonomy can be traced from the first plant selections that occurred during the agrarian neolithic revolution to the first recorded naming of human plant selections by the romans the naming and classification of cultigens followed a similar path to that of all plants until the establishment of the first cultivated plant code in 1953 which formally established the cultigen classification category of cultivar since that time the classification and naming of cultigens has followed its own path cultivated plant taxonomy has been distinguished from the taxonomy of other plants in at least five ways firstly there is a distinction made according to where the plants are growing — that is whether they are wild or cultivated this is alluded to by the cultivated plant code which specifies in its title that it is dealing with cultivated plants secondly a distinction is made according to how the plants originated this is indicated in principle 2 of the cultivated plant code which defines the scope of the code as plants whose origin or selection is primarily due to the intentional actions of mankind — plants that have evolved under natural selection with human assistance thirdly cultivated plant taxonomy is concerned with plant variation that requires the use of special classification' |
| 32 | - 'starting point of calculation for simplification it is also common to constrain the first component of the jones vectors to be a real number this discards the overall phase information that would be needed for calculation of interference with other beams note that all jones vectors and matrices in this article employ the convention that the phase of the light wave is given by [UNK] k z − ω t displaystyle phi kzomega t a convention used by hecht under this convention increase in [UNK] x displaystyle phi x or [UNK] y displaystyle phi y indicates retardation delay in phase while decrease indicates advance in phase for example a jones vectors component of i displaystyle i e i π 2 displaystyle eipi 2 indicates retardation by π 2 displaystyle pi 2 or 90 degree compared to 1 e 0 displaystyle e0 collett uses the opposite definition for the phase [UNK] ω t − k z displaystyle phi omega tkz also collet and jones follow different conventions for the definitions of handedness of circular polarization jones convention is called from the point of view of the receiver while colletts convention is called from the point of view of the source the reader should be wary of the choice of convention when consulting references on the jones calculus the following table gives the 6 common examples of normalized jones vectors a general vector that points to any place on the surface is written as a ket ψ ⟩ displaystyle psi rangle when employing the poincare sphere also known as the bloch sphere the basis kets 0 ⟩ displaystyle 0rangle and 1 ⟩ displaystyle 1rangle must be assigned to opposing antipodal pairs of the kets listed above for example one might assign 0 ⟩ displaystyle 0rangle h ⟩ displaystyle hrangle and 1 ⟩ displaystyle 1rangle v ⟩ displaystyle vrangle these assignments are arbitrary opposing pairs are h ⟩ displaystyle hrangle and v ⟩ displaystyle vrangle d ⟩ displaystyle drangle and a ⟩ displaystyle arangle r ⟩ displaystyle rrangle and l ⟩ displaystyle lrangle the polarization of any point not equal to r ⟩ displaystyle rrangle or l ⟩ displaystyle lrangle and not on the circle that passes through h ⟩ d ⟩ v ⟩ a ⟩ displaystyle hrangle drangle vrangle arangle is known as elliptical polarization the jones matrices are operators that act on the jones vectors defined above these matrices are implemented by various optical elements such as lenses beam splitters mirrors etc each matrix represents projection onto a onedimensional' - 'gloss is an optical property which indicates how well a surface reflects light in a specular mirrorlike direction it is one of the important parameters that are used to describe the visual appearance of an object other categories of visual appearance related to the perception of regular or diffuse reflection and transmission of light have been organized under the concept of cesia in an order system with three variables including gloss among the involved aspects the factors that affect gloss are the refractive index of the material the angle of incident light and the surface topography apparent gloss depends on the amount of specular reflection – light reflected from the surface in an equal amount and the symmetrical angle to the one of incoming light – in comparison with diffuse reflection – the amount of light scattered into other directions when light illuminates an object it interacts with it in a number of ways absorbed within it largely responsible for colour transmitted through it dependent on the surface transparency and opacity scattered from or within it diffuse reflection haze and transmission specularly reflected from it glossvariations in surface texture directly influence the level of specular reflection objects with a smooth surface ie highly polished or containing coatings with finely dispersed pigments appear shiny to the eye due to a large amount of light being reflected in a specular direction whilst rough surfaces reflect no specular light as the light is scattered in other directions and therefore appears dull the image forming qualities of these surfaces are much lower making any reflections appear blurred and distorted substrate material type also influences the gloss of a surface nonmetallic materials ie plastics etc produce a higher level of reflected light when illuminated at a greater illumination angle due to light being absorbed into the material or being diffusely scattered depending on the colour of the material metals do not suffer from this effect producing higher amounts of reflection at any angle the fresnel formula gives the specular reflectance r s displaystyle rs for an unpolarized light of intensity i 0 displaystyle i0 at angle of incidence i displaystyle i giving the intensity of specularly reflected beam of intensity i r displaystyle ir while the refractive index of the surface specimen is m displaystyle m the fresnel equation is given as follows r s i r i 0 displaystyle rsfrac iri0 r s 1 2 cos i − m 2 − sin 2 i cos i m 2 − sin 2 i 2 m 2 cos i − m 2 − sin 2 i m 2 cos i m 2 − sin 2 i 2 displaystyle rsfrac 12leftleftfrac cos isqrt m2sin' - 'the black surroundings as compared to that with white surface and surroundings pfund was also the first to suggest that more than one method was needed to analyze gloss correctly in 1937 hunter as part of his research paper on gloss described six different visual criteria attributed to apparent gloss the following diagrams show the relationships between an incident beam of light i a specularly reflected beam s a diffusely reflected beam d and a nearspecularly reflected beam b specular gloss – the perceived brightness and the brilliance of highlights defined as the ratio of the light reflected from a surface at an equal but opposite angle to that incident on the surface sheen – the perceived shininess at low grazing angles defined as the gloss at grazing angles of incidence and viewing contrast gloss – the perceived brightness of specularly and diffusely reflecting areas defined as the ratio of the specularly reflected light to that diffusely reflected normal to the surface absence of bloom – the perceived cloudiness in reflections near the specular direction defined as a measure of the absence of haze or a milky appearance adjacent to the specularly reflected light haze is the inverse of absenceofbloom distinctness of image gloss – identified by the distinctness of images reflected in surfaces defined as the sharpness of the specularly reflected light surface texture gloss – identified by the lack of surface texture and surface blemishesdefined as the uniformity of the surface in terms of visible texture and defects orange peel scratches inclusions etc a surface can therefore appear very shiny if it has a welldefined specular reflectance at the specular angle the perception of an image reflected in the surface can be degraded by appearing unsharp or by appearing to be of low contrast the former is characterised by the measurement of the distinctnessofimage and the latter by the haze or contrast gloss in his paper hunter also noted the importance of three main factors in the measurement of gloss the amount of light reflected in the specular direction the amount and way in which the light is spread around the specular direction the change in specular reflection as the specular angle changesfor his research he used a glossmeter with a specular angle of 45° as did most of the first photoelectric methods of that type later studies however by hunter and judd in 1939 on a larger number of painted samples concluded that the 60 degree geometry was the best angle to use so as to provide the closest correlation to a visual observation standardisation in gloss measurement was led by hunter and astm american society for testing and materials who produced astm d523 standard' |
| 19 | - 'to neurological dysfunction and other health problemsthis condition is inherited in an autosomal recessive pattern which means both copies of the gene have the mutation the parents of an individual with an autosomal recessive condition each carry one copy of the mutated gene but they typically do not show signs and symptoms of the condition diagnosis of this disorder depends on blood tests demonstrating the absence of serum ceruloplasmin combined with low serum copper concentration low serum iron concentration high serum ferritin concentration or increased hepatic iron concentration mri scans can also confirm a diagnosis abnormal low intensities can indicate iron accumulation in the brain children of affected individuals are obligate carriers for aceruloplasminemia if the cp mutations has been identified in a related individual prenatal testing is recommended siblings of those affected by the disease are at a 25 of aceruloplasminemia in asymptomatic siblings serum concentrations of hemoglobin and hemoglobin a1c should be monitoredto prevent the progression of symptoms of the disease annual glucose tolerance tests beginning in early teen years to evaluate the onset of diabetes mellitus those at risk should avoid taking iron supplements treatment includes the use of iron chelating agents such as desferrioxamine to lower brain and liver iron stores and to prevent progression of neurologic symptoms this combined with freshfrozen human plasma ffp works effectively in decreasing liver iron content repetitive use of ffp can even improve neurologic symptoms antioxidants such as vitamin e can be used simultaneously to prevent tissue damage to the liver and pancreas human iron metabolism iron overload disorder' - 'a bile duct is any of a number of long tubelike structures that carry bile and is present in most vertebrates bile is required for the digestion of food and is secreted by the liver into passages that carry bile toward the hepatic duct it joins the cystic duct carrying bile to and from the gallbladder to form the common bile duct which then opens into the intestine the top half of the common bile duct is associated with the liver while the bottom half of the common bile duct is associated with the pancreas through which it passes on its way to the intestine it opens into the part of the intestine called the duodenum via the ampulla of vater the biliary tree see below is the whole network of various sized ducts branching through the liver the path is as follows bile canaliculi → canals of hering → interlobular bile ducts → intrahepatic bile ducts → left and right hepatic ducts merge to form → common hepatic duct exits liver and joins → cystic duct from gall bladder forming → common bile duct → joins with pancreatic duct → forming ampulla of vater → enters duodenum inflation of a balloon in the bile duct causes through the vagus nerve activation of the brain stem and the insular cortex prefrontal cortex and somatosensory cortex blockage or obstruction of the bile duct by gallstones scarring from injury or cancer prevents the bile from being transported to the intestine and the active ingredient in the bile bilirubin instead accumulates in the blood this condition results in jaundice where the skin and eyes become yellow from the bilirubin in the blood this condition also causes severe itchiness from the bilirubin deposited in the tissues in certain types of jaundice the urine will be noticeably darker and the stools will be much paler than usual this is caused by the bilirubin all going to the bloodstream and being filtered into the urine by the kidneys instead of some being lost in the stools through the ampulla of vater jaundice jaundice is commonly caused by conditions such as pancreatic cancer which causes blockage of the bile duct passing through the cancerous portion of the pancreas cholangiocarcinoma cancer of the bile ducts blockage by a stone in patients with gallstones and from scarring after injury to the bile duct during gallbladder removal drainage biliary drainage is performed with a'
- '##ing of skin and higher than normal gamma glutamyl transferase and alkaline phosphatase laboratory values they are in most cases located in the right hepatic lobe and are frequently seen as a single lesion their size ranges from 1 to 30 cm they can be difficult to diagnosis with imaging studies alone because it can be hard to tell the difference between hepatocellular adenoma focal nodular hyperplasia and hepatocellular carcinoma molecular categorization via biopsy and pathological analysis aids in both diagnosis and understanding prognosis particularly because hepatocellular adenomas have the potential to become malignant it is important to note percutaneous biopsy should be avoided because this method can lead to bleeding or rupture of the adenoma the best way to biopsy suspected hepatic adenoma is via open or laparoscopic excisional biopsybecause hepatocellular adenomas are so rare there are no clear guidelines for the best course of treatment the complications which include malignant transformation spontaneous hemorrhage and rupture are considered when determining the treatment approach estimates indicate approximately 2040 of hepatocellular adenomas will undergo spontaneous hemorrhage the evidence is not well elucidated but the best available data suggests that the risk of hepatocellular adenoma becoming hepatocellular carcinoma which is malignant liver tumor is 42 of all cases transformation to hepatocellular carcinoma is more common in men currently if the hepatic adenoma is 5 cm increasing in size symptomatic lesions has molecular markers associated with hcc transformation rising level of liver tumor markers such as alpha fetoprotein the patient is a male or has a glycogen storage disorder the adenoma is recommended to be surgically removed like most liver tumors the anatomy and location of the adenoma determines whether the tumor can removed laparoscopically or if it requires an open surgical procedure hepatocellular adenomas are also known to decrease 
in size when there is decreased estrogen or steroids eg when estrogencontaining contraceptives steroids are stopped or postpartumwomen of childbearing age with hepatic adenomas were previously recommended to avoid becoming pregnant altogether however currently a more individualized approach is recommended that takes into account the size of the adenoma and whether surgical resection is possible prior to becoming pregnant currently there is a clinical trial called the pregnancy and liver adenoma management palm study that'
|
-| 36 | - 'actions they refer to for example buzz hullabaloo bling opening statement — first part of discourse should gain audiences attention orator — a public speaker especially one who is eloquent or skilled oxymoron — opposed or markedly contradictory terms joined for emphasis panegyric — a formal public speech delivered in high praise of a person or thing paradeigma — argument created by a list of examples that leads to a probable generalized idea paradiastole — redescription usually in a better light paradox — an apparently absurd or selfcontradictory statement or proposition paralipsis — a form of apophasis when a rhetor introduces a subject by denying it should be discussed to speak of someone or something by claiming not to parallelism — the correspondence in sense or construction of successive clauses or passages parallel syntax — repetition of similar sentence structures paraprosdokian — a sentence in which the latter half takes an unexpected turn parataxis — using juxtaposition of short simple sentences to connect ideas as opposed to explicit conjunction parenthesis — an explanatory or qualifying word clause or sentence inserted into a passage that is not essential to the literal meaning parody — comic imitation of something or somebody paronomasia — a pun a play on words often for humorous effect pathos — the emotional appeal to an audience in an argument one of aristotles three proofs periphrasis — the substitution of many or several words where one would suffice usually to avoid using that particular word personification — a figure of speech that gives human characteristics to inanimate objects or represents an absent person as being present for example but if this invincible city should now give utterance to her voice would she not speak as follows rhetorica ad herennium petitio — in a letter an announcement demand or request philippic — a fiery damning speech delivered to condemn a particular political actor the term is derived from demostheness 
speeches in 351 bc denouncing the imperialist ambitions of philip of macedon which later came to be known as the philippics phronesis — practical wisdom common sense pistis — the elements to induce true judgment through enthymemes hence to give proof of a statement pleonasm — the use of more words than necessary to express an idea polyptoton — the repetition of a word or root in different cases or inflections within the same sentence polysemy — the capacity of a word or phrase to render more than one meaning polysyndeton — the repeated use of conjunctions within'
- 'a workable body of law thus canadas legal system may have more potential for conflicts with regards to the accusation of judicial activism as compared to the united statesformer chief justice of the supreme court of canada beverley mclachlin has stated that the charge of judicial activism may be understood as saying that judges are pursuing a particular political agenda that they are allowing their political views to determine the outcome of cases before them it is a serious matter to suggest that any branch of government is deliberately acting in a manner that is inconsistent with its constitutional role1such accusations often arise in response to rulings involving the canadian charter of rights and freedoms specifically rulings that have favoured the extension of gay rights have prompted accusations of judicial activism justice rosalie abella is a particularly common target of those who perceive activism on the supreme court of canada benchthe judgment chaoulli v quebec 2005 1 rcs which declared unconstitutional the prohibition of private healthcare insurance and challenged the principle of canadian universal health care in quebec was deemed by many as a prominent example of judicial activism the judgment was written by justice deschamps with a tight majority of 4 against 3 in the cassis de dijon case the european court of justice ruled the german laws prohibiting sales of liquors with alcohol percentages between 15 and 25 conflicted with eu laws this ruling confirmed that eu law has primacy over memberstate law when the treaties are unclear they leave room for the court to interpret them in different ways when eu treaties are negotiated it is difficult to get all governments to agree on a clear set of laws in order to get a compromise governments agree to leave a decision on an issue to the courtthe court can only practice judicial activism to the extent the eu governments leave room for interpretation in the treatiesthe court makes important rulings that set 
the agenda for further eu integration but it cannot happen without the consensual support of the memberstatesin the irish referendum on the lisbon treaty many issues not directly related to the treaty such as abortion were included in the debate because of worries that the lisbon treaty will enable the european court of justice to make activist rulings in these areas after the rejection of the lisbon treaty in ireland the irish government received concessions from the rest of the member states of the european union to make written guarantees that the eu will under no circumstances interfere with irish abortion taxation or military neutrality ireland voted on the lisbon treaty a second time in 2009 with a 6713 majority voting yes to the treaty india has a recent history of judicial activism originating after the emergency in india which saw attempts by the government to control the judiciary public interest'
- 'within the field of rhetoric the contributions of female rhetoricians have often been overlooked anthologies comprising the history of rhetoric or rhetoricians often leave the impression there were none throughout history however there have been a significant number of women rhetoricians [UNK] — the act of looking back of seeing with fresh eyes of entering an old text from a new critical direction — is for women more than a chapter in cultural history it is an act of survival adrienne rich the following is a timeline of contributions made to the field of rhetoric by women aspasia c 410 bc was a milesian woman who was known and highly regarded for her teaching of political theory and rhetoric she is mentioned in platos memexenus and is often credited with teaching the socratic method to socrates diotima of mantinea 4th century bc is an important character in platos symposium it is uncertain if she was a real person or perhaps a character modelled after aspasia for whom plato had much respect julian of norwich 1343 – 1415 english mystic who challenged the teachings of medieval christianity in regard to womens inferior role in religionrevelations of divine lovecatherine of siena 1347 – 1380 italian who was influential through her writings to men and women in authority where she begged for peace in italy and for the return of the papacy to rome she was canonized in 1461 by pope pius iiletter 83 to mona lapa her mother in siena 1376christine de pizan 1365 – 1430 venetian who moved to france at an early age she was influential as a writer rhetorician and critic during the medieval period and was europes first female professional authorthe book of the city of ladies 1404margery kempe 1373 – 1439 british woman who could neither read nor write but dictated her life story the book of margery kempe after receiving a vision of christ during the birth of the first of her fourteen children from the 15th century kempe was viewed as a holy woman after her book was published in 
pamphlet form with any thought or behavior that could be viewed as nonconforming or unorthodox removed when the original was rediscovered in 1934 a more complex selfportrait emergedthe book of margery kempe 1436 laura cereta 1469 – 1499 italian humanist and feminist who was influential in the letters she wrote to other intellectuals through her letters she fought for womens right to education and against the oppression of married womenletter to bibulus sempronius defense of the liberal instruction of women 1488 margaret fell 1614'
|
-| 42 | - 'virus siv a virus similar to hiv is capable of infecting primates the epstein – barr virus ebv is one of eight known herpesviruses it displays host tropism for human b cells through the cd21gp350220 complex and is thought to be the cause of infectious mononucleosis burkitts lymphoma hodgkins disease nasopharyngeal carcinoma and lymphomas ebv enters the body through oral transfer of saliva and it is thought to infect more than 90 of the worlds adult population ebv may also infect epithelial cells t cells and natural killer cells through mechanisms different than the cd21 receptormediated process in b cells the zika virus is a mosquitoborne arbovirus in the genus flavivirus that exhibits tropism for the human maternal decidua the fetal placenta and the umbilical cord on the cellular level the zika virus targets decidual macrophages decidual fibroblasts trophoblasts hofbauer cells and mesenchymal stem cells due to their increased capacity to support virion replication in adults infection by the zika virus may lead to zika fever and if the infection occurs during the first trimester of pregnancy neurological complications such as microcephaly may occur mycobacterium tuberculosis is a humantropic bacterium that causes tuberculosis the second most common cause of death due to an infectious agent the cell envelope glycoconjugates surrounding m tuberculosis allow the bacteria to infect human lung tissue while providing an intrinsic resistance to pharmaceuticals m tuberculosis enters the lung alveoler passages through aerosol droplets and it then becomes phagocytosed by macrophages however since the macrophages are unable to completely kill m tuberculosis granulomas are formed within the lungs providing an ideal environment for continued bacterial colonization more than an estimated 30 of the world population is colonized by staphylococcus aureus a microorganism capable of causing skin infections nosocomial infections and food poisoning due to its tropism for 
human skin and soft tissue the s aureus clonal complex cc121 is known to exhibit multihost tropism for both humans and rabbits this is thought to be due to a single nucleotide mutation that evolved the cc121 complex into st121 clonal complex the clone capable of infecting rabbits enteropathogenic and enterohaemorrhagic escherichia'
- 'all oncoviruses are dna viruses some rna viruses have also been associated such as the hepatitis c virus as well as certain retroviruses eg human tlymphotropic virus htlv1 and rous sarcoma virus rsv estimated percent of new cancers attributable to the virus worldwide in 2002 na indicates not available the association of other viruses with human cancer is continually under research the main viruses associated with human cancers are the human papillomavirus the hepatitis b and hepatitis c viruses the epstein – barr virus the human tlymphotropic virus the kaposis sarcomaassociated herpesvirus kshv and the merkel cell polyomavirus experimental and epidemiological data imply a causative role for viruses and they appear to be the second most important risk factor for cancer development in humans exceeded only by tobacco usage the mode of virally induced tumors can be divided into two acutely transforming or slowly transforming in acutely transforming viruses the viral particles carry a gene that encodes for an overactive oncogene called viraloncogene vonc and the infected cell is transformed as soon as vonc is expressed in contrast in slowly transforming viruses the virus genome is inserted especially as viral genome insertion is an obligatory part of retroviruses near a protooncogene in the host genome the viral promoter or other transcription regulation elements in turn cause overexpression of that protooncogene which in turn induces uncontrolled cellular proliferation because viral genome insertion is not specific to protooncogenes and the chance of insertion near that protooncogene is low slowly transforming viruses have very long tumor latency compared to acutely transforming viruses which already carry the viral oncogenehepatitis viruses including hepatitis b and hepatitis c can induce a chronic viral infection that leads to liver cancer in 047 of hepatitis b patients per year especially in asia less so in north america and in 14 of hepatitis c carriers per year 
liver cirrhosis whether from chronic viral hepatitis infection or alcoholism is associated with the development of liver cancer and the combination of cirrhosis and viral hepatitis presents the highest risk of liver cancer development worldwide liver cancer is one of the most common and most deadly cancers due to a huge burden of viral hepatitis transmission and diseasethrough advances in cancer research vaccines designed to prevent cancer have been created the hepatitis b vaccine is the first vaccine that has been established to prevent cancer hepatocellular carcinoma by preventing infection with the causative'
- 'gisaid the global initiative on sharing all influenza data previously the global initiative on sharing avian influenza data is a global science initiative established in 2008 to provide access to genomic data of influenza viruses the database was expanded to include the coronavirus responsible for the covid19 pandemic as well as other pathogens the database has been described as the worlds largest repository of covid19 sequences gisaid facilitates genomic epidemiology and realtime surveillance to monitor the emergence of new covid19 viral strains across the planetsince its establishment as an alternative to sharing avian influenza data via conventional publicdomain archives gisaid has facilitated the exchange of outbreak genome data during the h1n1 pandemic in 2009 the h7n9 epidemic in 2013 the covid19 pandemic and the 2022 – 2023 mpox outbreak since 1952 influenza strains had been collected by national influenza centers nics and distributed through the whos global influenza surveillance and response system gisrs countries provided samples to the who but the data was then shared with them for free with pharmaceutical companies who could patent vaccines produced from the samples beginning in january 2006 italian researcher ilaria capua refused to upload her data to a closed database and called for genomic data on h5n1 avian influenza to be in the public domain at a conference of the oiefao network of expertise on animal influenza capua persuaded participants to agree to each sequence and release data on 20 strains of influenza some scientists had concerns about sharing their data in case others published scientific papers using the data before them but capua dismissed this telling science what is more important another paper for ilaria capuas team or addressing a major health threat lets get our priorities straight peter bogner a german in his 40s based in the usa and who previously had no experience in public health read an article about capuas call and helped 
to found and fund gisaid bogner met nancy cox who was then leading the us centers for disease controls influenza division at a conference and cox went on to chair gisaids scientific advisory councilthe acronym gisaid was coined in a correspondence letter published in the journal nature in august 2006 putting forward an initial aspiration of creating a consortium for a new global initiative on sharing avian influenza data later all would replace avian whereby its members would release data in publicly available databases up to six months after analysis and validation initially the organisation collaborated with the australian nonprofit organization cambia and the creative commons project science commons although no essential ground rules for sharing were established the'
|
-| 2 | - 'the complex roots to any precision uspenskys algorithm of collins and akritas improved by rouillier and zimmermann and based on descartes rule of signs this algorithms computes the real roots isolated in intervals of arbitrary small width it is implemented in maple functions fsolve and rootfindingisolate there are at least four software packages which can solve zerodimensional systems automatically by automatically one means that no human intervention is needed between input and output and thus that no knowledge of the method by the user is needed there are also several other software packages which may be useful for solving zerodimensional systems some of them are listed after the automatic solvers the maple function rootfindingisolate takes as input any polynomial system over the rational numbers if some coefficients are floating point numbers they are converted to rational numbers and outputs the real solutions represented either optionally as intervals of rational numbers or as floating point approximations of arbitrary precision if the system is not zero dimensional this is signaled as an error internally this solver designed by f rouillier computes first a grobner basis and then a rational univariate representation from which the required approximation of the solutions are deduced it works routinely for systems having up to a few hundred complex solutions the rational univariate representation may be computed with maple function groebnerrationalunivariaterepresentation to extract all the complex solutions from a rational univariate representation one may use mpsolve which computes the complex roots of univariate polynomials to any precision it is recommended to run mpsolve several times doubling the precision each time until solutions remain stable as the substitution of the roots in the equations of the input variables can be highly unstable the second solver is phcpack written under the direction of j verschelde phcpack implements the homotopy 
continuation method this solver computes the isolated complex solutions of polynomial systems having as many equations as variables the third solver is bertini written by d j bates j d hauenstein a j sommese and c w wampler bertini uses numerical homotopy continuation with adaptive precision in addition to computing zerodimensional solution sets both phcpack and bertini are capable of working with positive dimensional solution sets the fourth solver is the maple library regularchains written by marc morenomaza and collaborators it contains various functions for solving polynomial systems by means of regular chains elimination theory systems of polynomial inequalities triangular decomposition wus method of characteristic set'
- '##duality is the irrelevance of de morgans laws those laws are built into the syntax of the primary algebra from the outset the true nature of the distinction between the primary algebra on the one hand and 2 and sentential logic on the other now emerges in the latter formalisms complementationnegation operating on nothing is not wellformed but an empty cross is a wellformed primary algebra expression denoting the marked state a primitive value hence a nonempty cross is an operator while an empty cross is an operand because it denotes a primitive value thus the primary algebra reveals that the heretofore distinct mathematical concepts of operator and operand are in fact merely different facets of a single fundamental action the making of a distinction syllogisms appendix 2 of lof shows how to translate traditional syllogisms and sorites into the primary algebra a valid syllogism is simply one whose primary algebra translation simplifies to an empty cross let a denote a literal ie either a or a [UNK] displaystyle overline a indifferently then every syllogism that does not require that one or more terms be assumed nonempty is one of 24 possible permutations of a generalization of barbara whose primary algebra equivalent is a ∗ b [UNK] b [UNK] c ∗ [UNK] a ∗ c ∗ displaystyle overline a b overline overline b cbig a c these 24 possible permutations include the 19 syllogistic forms deemed valid in aristotelian and medieval logic this primary algebra translation of syllogistic logic also suggests that the primary algebra can interpret monadic and term logic and that the primary algebra has affinities to the boolean term schemata of quine 1982 part ii the following calculation of leibnizs nontrivial praeclarum theorema exemplifies the demonstrative power of the primary algebra let c1 be a [UNK] [UNK] displaystyle overline overline abig a c2 be a a b [UNK] a b [UNK] displaystyle a overline a ba overline b c3 be [UNK] a [UNK] displaystyle overline aoverline j1a be a [UNK] 
a [UNK] displaystyle overline a aoverline and let oi mean that variables and subformulae have been reordered in a way that commutativity and associativity permit the primary algebra embodies a point noted by huntington in 1933 boolean algebra requires in addition to one unary operation one and not two binary operations hence the seldomnoted fact that boolean algebra'
- '##n and company 1925 pp 477ff reprinted 1958 by dover publications'
|
-| 39 | - 'boundaries at the flow extremes for a particular speed which are caused by different phenomena the steepness of the high flow part of a constant speed line is due to the effects of compressibility the position of the other end of the line is located by blade or passage flow separation there is a welldefined lowflow boundary marked on the map as a stall or surge line at which blade stall occurs due to positive incidence separation not marked as such on maps for turbochargers and gas turbine engines is a more gradually approached highflow boundary at which passages choke when the gas velocity reaches the speed of sound this boundary is identified for industrial compressors as overload choke sonic or stonewall the approach to this flow limit is indicated by the speed lines becoming more vertical other areas of the map are regions where fluctuating vane stalling may interact with blade structural modes leading to failure ie rotating stall causing metal fatigue different applications move over their particular map along different paths an example map with no operating lines is shown as a pictorial reference with the stallsurge line on the left and the steepening speed lines towards choke and overload on the right maps have similar features and general shape because they all apply to machines with spinning vanes which use similar principles for pumping a compressible fluid not all machines have stationary vanes centrifugal compressors may have either vaned or vaneless diffusers however a compressor operating as part of a gas turbine or turbocharged engine behaves differently to an industrial compressor because its flow and pressure characteristics have to match those of its driving turbine and other engine components such as power turbine or jet nozzle for a gas turbine and for a turbocharger the engine airflow which depends on engine speed and charge pressure a link between a gas turbine compressor and its engine can be shown with lines of constant engine 
temperature ratio ie the effect of fuellingincreased turbine temperature which raises the running line as the temperature ratio increases one manifestation of different behaviour appears in the choke region on the righthand side of a map it is a noload condition in a gas turbine turbocharger or industrial axial compressor but overload in an industrial centrifugal compressor hiereth et al shows a turbocharger compressor fullload or maximum fuelling curve runs up close to the surge line a gas turbine compressor fullload line also runs close to the surge line the industrial compressor overload is a capacity limit and requires high power levels to pass the high flow rates required excess power is available to inadvertently take the compressor beyond the overload limit to a hazardous condition'
- 'a thermodynamic instrument is any device for the measurement of thermodynamic systems in order for a thermodynamic parameter or physical quantity to be truly defined a technique for its measurement must be specified for example the ultimate definition of temperature is what a thermometer reads the question follows – what is a thermometer there are two types of thermodynamic instruments the meter and the reservoir a thermodynamic meter is any device which measures any parameter of a thermodynamic system a thermodynamic reservoir is a system which is so large that it does not appreciably alter its state parameters when brought into contact with the test system two general complementary tools are the meter and the reservoir it is important that these two types of instruments are distinct a meter does not perform its task accurately if it behaves like a reservoir of the state variable it is trying to measure if for example a thermometer were to act as a temperature reservoir it would alter the temperature of the system being measured and the reading would be incorrect ideal meters have no effect on the state variables of the system they are measuring a meter is a thermodynamic system which displays some aspect of its thermodynamic state to the observer the nature of its contact with the system it is measuring can be controlled and it is sufficiently small that it does not appreciably affect the state of the system being measured the theoretical thermometer described below is just such a meter in some cases the thermodynamic parameter is actually defined in terms of an idealized measuring instrument for example the zeroth law of thermodynamics states that if two bodies are in thermal equilibrium with a third body they are also in thermal equilibrium with each other this principle as noted by james maxwell in 1872 asserts that it is possible to measure temperature an idealized thermometer is a sample of an ideal gas at constant pressure from the ideal gas law the 
volume of such a sample can be used as an indicator of temperature in this manner it defines temperature although pressure is defined mechanically a pressuremeasuring device called a barometer may also be constructed from a sample of an ideal gas held at a constant temperature a calorimeter is a device which is used to measure and define the internal energy of a system some common thermodynamic meters are thermometer a device which measures temperature as described above barometer a device which measures pressure an ideal gas barometer may be constructed by mechanically connecting an ideal gas to the system being'
- 'a transcritical cycle is a closed thermodynamic cycle where the working fluid goes through both subcritical and supercritical states in particular for power cycles the working fluid is kept in the liquid region during the compression phase and in vapour andor supercritical conditions during the expansion phase the ultrasupercritical steam rankine cycle represents a widespread transcritical cycle in the electricity generation field from fossil fuels where water is used as working fluid other typical applications of transcritical cycles to the purpose of power generation are represented by organic rankine cycles which are especially suitable to exploit low temperature heat sources such as geothermal energy heat recovery applications or waste to energy plants with respect to subcritical cycles the transcritical cycle exploits by definition higher pressure ratios a feature that ultimately yields higher efficiencies for the majority of the working fluids considering then also supercritical cycles as a valid alternative to the transcritical ones the latter cycles are capable of achieving higher specific works due to the limited relative importance of the work of compression work this evidences the extreme potential of transcritical cycles to the purpose of producing the most power measurable in terms of the cycle specific work with the least expenditure measurable in terms of spent energy to compress the working fluid while in single level supercritical cycles both pressure levels are above the critical pressure of the working fluid in transcritical cycles one pressure level is above the critical pressure and the other is below in the refrigeration field carbon dioxide co2 is increasingly considered of interest as refrigerant in trascritical cycles the pressure of the working fluid at the outlet of the pump is higher than the critical pressure while the inlet conditions are close to the saturated liquid pressure at the given minimum temperature during the heating phase which is typically considered an isobaric process the working fluid overcomes the critical temperature moving thus from the liquid to the supercritical phase without the occurrence of any evaporation process a significant difference between subcritical and transcritical cycles due to this significant difference in the heating phase the heat injection into the cycle is significantly more efficient from a second law perspective since the average temperature difference between the hot source and the working fluid is reducedas a consequence the maximum temperatures reached by the cold source can be higher at fixed hot source characteristics therefore the expansion process can be accomplished exploiting higher pressure ratios which yields higher power production modern ultrasupercritical rankine cycles can reach maximum temperatures up to 620°c exploiting the optimized heat introduction process as in'
|
-| 27 | - 'area of research that is being looked into with regards to loc is with home security automated monitoring of volatile organic compounds vocs is a desired functionality for loc if this application becomes reliable these microdevices could be installed on a global scale and notify homeowners of potentially dangerous compounds labonachip devices could be used to characterize pollen tube guidance in arabidopsis thaliana specifically plant on a chip is a miniaturized device in which pollen tissues and ovules could be incubated for plant sciences studies biochemical assays dielectrophoresis detection of cancer cells and bacteria immunoassay detect bacteria viruses and cancers based on antigenantibody reactions ion channel screening patch clamp microfluidics microphysiometry organonachip realtime pcr detection of bacteria viruses and cancers testing the safety and efficacy of new drugs as with lung on a chip total analysis system booksgeschke klank telleman eds microsystem engineering of labonachip devices 1st ed john wiley sons isbn 3527307338 herold ke rasooly a eds 2009 labonachip technology fabrication and microfluidics caister academic press isbn 9781904455462 herold ke rasooly a eds 2009 labonachip technology biomolecular separation and analysis caister academic press isbn 9781904455479 yehya h ghallab wael badawy 2010 labonachip techniques circuits and biomedical applications artech house p 220 isbn 9781596934184 2012 gareth jenkins colin d mansfield eds methods in molecular biology – microfluidic diagnostics humana press isbn 9781627031332'
- 'mentioned before this poses extremely negative environmental implications while also demonstrating the high waste associated with conventional fertilizers on the other hand nanofertilizers are able to amend this issue because of their high absorption efficiency into the targeted plant which is owed to their remarkably high surface area to volume ratios in a study done on the use of phosphorus nanofertilizers absorption efficiencies of up to 906 were achieved making them a highly desirable fertilizer material another beneficial aspect of using nanofertilizers is the ability to provide slow release of nutrients into the plant over a 4050 day time period rather than the 410 day period of conventional fertilizers this again proves to be beneficial economically requiring less resources to be devoted to fertilizer transport and less amount of total fertilizer needed as expected with greater ability for nutrient uptake crops have been found to exhibit greater health when using nanofertilizers over conventional ones one study analyzed the effect of a potatospecific nano fertilizer composed of a variety of elements including k p n and mg in comparison to a control group using their conventional counterparts the study found that the potato crop which used the nanofertilizer had an increased crop yield in comparison to the control as well as more efficient water use and agronomic efficiency defined as units of yield increased per unit of nutrient applied in addition the study found that the nano fertilized potatoes had a higher nutrient content such as increased starch and ascorbic acid content another study analyzed the use of ironbased nanofertilizers in black eyed peas and determined that root stability increased dramatically in the use of nano fertilizer as well as chlorophyll content in leaves thus improving photosynthesis a different study found that zinc nanofertilizers enhanced photosynthesis rate in maize crops measured through soluble carbohydrate concentration likely as a result of the role of zinc in the photosynthesis processmuch work needs to be done in the future to make nanofertilizers a consistent viable alternative to conventional fertilizers effective legislation needs to be drafted regulating the use of nanofertilizers drafting standards for consistent quality and targeted release of nutrients further more studies need to be done to understand the full benefits and potential downsides of nanofertilizers to gain the full picture in approach of using nanotechnology to benefit agriculture in an everchanging world nanotechnology has played a pivotal role in the field of genetic engineering and plant transformations making it a desirable candidate in the optimization'
- '##s graphene metals oxides soft materials up to microns nanocellulose polyelectrolyte including nanoparticles applications including thin film solar cells barrier coatings including antireflective coatings antimicrobial surfaces selfcleaning glass plasmonic metamaterials electroswitching surfaces layerbylayer assembly and graphene'
|
-| 24 | - 'in the wall street journals review of the best architecture of 2018 with julie v iovine writing that glenstones architecture takes an approach that offers a sequence of events revealed gradually with constantly shifting perspectives as opposed to classic modernisms tightly controlled image of architecture as geometric tableau in 2020 the expansion was a winner of the american institute of architects architecture awardsin 2019 glenstone opened a 7200squarefoot 670 m2 environmental center on its campus the building contains selfguided exhibits about recycling composting and reforestation the pavilions is built around the water court an 18000squarefoot 1700 m2 water garden containing thousands of aquatic plants such as waterlilies irises thalias cattails and rushes the water courts design was inspired by the reflecting pool at the brion cemetery in northern italy referring to the way the museum returns visitors to the water court samuel medina wrote for metropolis art isnt the heart of the glenstone museum which opened in october water is pulitzer prizewinning critic sebastian smee wrote of the water courtits as if youve entered a beautiful sanctuary possibly in another hemisphere maybe another era although youve descended you actually feel a kind of lift a buoyancy such as what birds must feel when they catch warm air currents you exhale you feel liberated from everyday cares youre ready for the art the expansion also added 130 acres 53 ha of land to the campus a landscape largely composed of woodland and wildflower meadows the landscaping was designed by landscape architect peter walkers firm pwp landscape architecture the effort included the planting of about 8000 trees the transplanting of 200 trees the converting lawn areas to meadows and the restoration of streams that flowed through the campus glenstones landscaping is managed using organic products only this outdoor space hosts large art installations by artists including jeff koons felix gonzaleztorres michael heizer and richard serra in a review for the washington post in 2018 philip kennicott wrote that glenstone is a mustsee museum and that its creators successfully integrate art architecture and landscape referring to the natural setting of the museum he wrote that everything is quietly spectacular with curated views to the outdoors that present nature as visual haiku kennicott tempered his review by mentioning that the museums distinctive architecture and layout continually confront visitors with strange visions that will make it interesting to see how it is receivedkriston capps of washington city paper called glenstones 2018 expansion successful and enchanting with a sublime viewing experience he wrote that the museums collection excels in its focus on conventional paintings sculptures and installations but excludes more modern media such as video or performance art concerning this conservative focus cap'
- 'the slope geotextiles have been used to protect the fossil hominid footprints of laetoli in tanzania from erosion rain and tree rootsin building demolition geotextile fabrics in combination with steel wire fencing can contain explosive debriscoir coconut fiber geotextiles are popular for erosion control slope stabilization and bioengineering due to the fabrics substantial mechanical strength app ie coir geotextiles last approximately 3 to 5 years depending on the fabric weight the product degrades into humus enriching the soil glacial retreat geotextiles with reflective properties are often used in protecting the melting glaciers in north italy they use geotextiles to cover the glaciers for protecting from the sun the reflective properties of the geotextile reflect the sun away from the melting glacier in order to slow the process however this process has proven to be more expensive than effective while many possible design methods or combinations of methods are available to the geotextile designer the ultimate decision for a particular application usually takes one of three directions design by cost and availability design by specification or design by function extensive literature on design methods for geotextiles has been published in the peer reviewed journal geotextiles and geomembranes geotextiles are needed for specific requirements just as anything else in the world some of these requirements consist of polymers composed of a minimum of 85 by weight polypropylene polyesters polyamides polyolefins and polyethylene geomembrane hard landscape materials polypropylene raffia sediment control john n w m 1987 geotextiles glasgow blackie publishing ltd koerner r m 2012 designing with geosynthetics 6th edition xlibris publishing co koerner r m ed 2016 geotextiles from design to applications amsterdam woodhead publishing co'
- 'society or the california native plant society which are made up of gardeners interested in growing plants local to their area state or country in the united states wild ones — native plants natural landscapes is a national organization with local chapters in many states new england wildflower society and lady bird johnson wildflower center provide information on native plants and promote natural landscaping these organizations can be the best resources for learning about and obtaining local native plants many members have spent years or decades cultivating local plants or bushwalking in local areas permaculture organic lawn management piet oudolf terroir wildlife gardening xeriscaping north american native plant society christopher thomas ed 2011 the new american landscape leading voices on the future of sustainable gardening timber press isbn 9781604691863 diekelmann john robert m schuster 2002 natural landscaping designing with native plant communities university of wisconsin press isbn 9780299173241 stein sara 1993 noahs garden restoring the ecology of our own back yards houghtonmifflin isbn 0395653738 stein sara 1997 planting noahs garden further adventures in backyard ecology houghtonmifflin isbn 9780395709603 tallamy douglas w 2007 bringing nature home how native plants sustain wildlife in our gardens timber press isbn 9780881928549 tallamy douglas w 2020 natures best hope a new approach to conservation that starts in your yard timber press isbn 9781604699005 wasowski andy and sally 2000 the landscaping revolution garden with mother nature not against her contemporary books isbn 9780809226658 wasowski sally 2001 gardening with prairie plants how to create beautiful native landscapes university of minnesota press isbn 0816630879'
|
-| 9 | - 'a circular chromosome is a chromosome in bacteria archaea mitochondria and chloroplasts in the form of a molecule of circular dna unlike the linear chromosome of most eukaryotes most prokaryote chromosomes contain a circular dna molecule – there are no free ends to the dna free ends would otherwise create significant challenges to cells with respect to dna replication and stability cells that do contain chromosomes with dna ends or telomeres most eukaryotes have acquired elaborate mechanisms to overcome these challenges however a circular chromosome can provide other challenges for cells after replication the two progeny circular chromosomes can sometimes remain interlinked or tangled and they must be resolved so that each cell inherits one complete copy of the chromosome during cell division the circular bacteria chromosome replication is best understood in the wellstudied bacteria escherichia coli and bacillus subtilis chromosome replication proceeds in three major stages initiation elongation and termination the initiation stage starts with the ordered assembly of initiator proteins at the origin region of the chromosome called oric these assembly stages are regulated to ensure that chromosome replication occurs only once in each cell cycle during the elongation phase of replication the enzymes that were assembled at oric during initiation proceed along each arm replichore of the chromosome in opposite directions away from the oric replicating the dna to create two identical copies this process is known as bidirectional replication the entire assembly of molecules involved in dna replication on each arm is called a replisome at the forefront of the replisome is a dna helicase that unwinds the two strands of dna creating a moving replication fork the two unwound single strands of dna serve as templates for dna polymerase which moves with the helicase together with other proteins to synthesise a complementary copy of each strand in this way two identical copies of the original dna are created eventually the two replication forks moving around the circular chromosome meet in a specific zone of the chromosome approximately opposite oric called the terminus region the elongation enzymes then disassemble and the two daughter chromosomes are resolved before cell division is completed the e coli origin of replication called oric consists of dna sequences that are recognised by the dnaa protein which is highly conserved amongst different bacterial species dnaa binding to the origin initiates the regulated recruitment of other enzymes and proteins that will eventually lead to the establishment of two complete replisomes for bidirectional replicationdna sequence elements within oric that are important for its function include dnaa boxes a 9mer repeat with a highly'
- 'methods are carried out on the distance matrices an important point is that the scale of data is extensive and further approaches must be taken to identify patterns from the available information tools used to analyze the data include vamps qiime mothur and dada2 or unoise3 for denoising metagenomics is also used extensively for studying microbial communities in metagenomic sequencing dna is recovered directly from environmental samples in an untargeted manner with the goal of obtaining an unbiased sample from all genes of all members of the community recent studies use shotgun sanger sequencing or pyrosequencing to recover the sequences of the reads the reads can then be assembled into contigs to determine the phylogenetic identity of a sequence it is compared to available full genome sequences using methods such as blast one drawback of this approach is that many members of microbial communities do not have a representative sequenced genome but this applies to 16s rrna amplicon sequencing as well and is a fundamental problem with shotgun sequencing it can be resolved by having a high coverage 50100x of the unknown genome effectively doing a de novo genome assembly as soon as there is a complete genome of an unknown organism available it can be compared phylogenetically and the organism put into its place in the tree of life by creating new taxa an emerging approach is to combine shotgun sequencing with proximityligation data hic to assemble complete microbial genomes without culturingdespite the fact that metagenomics is limited by the availability of reference sequences one significant advantage of metagenomics over targeted amplicon sequencing is that metagenomics data can elucidate the functional potential of the community dna targeted gene surveys cannot do this as they only reveal the phylogenetic relationship between the same gene from different organisms functional analysis is done by comparing the recovered sequences to databases of metagenomic annotations such as kegg the metabolic pathways that these genes are involved in can then be predicted with tools such as mgrast camera and imgm metatranscriptomics studies have been performed to study the gene expression of microbial communities through methods such as the pyrosequencing of extracted rna structure based studies have also identified noncoding rnas ncrnas such as ribozymes from microbiota metaproteomics is an approach that studies the proteins expressed by microbiota giving insight into its functional potential the human microbiome project launched in 2008 was a united states national institutes of health initiative to identify and characterize microorganisms found in both healthy and diseased humans'
- 'by crosslinking the cytoskeleton protein actin burkholderia pseudomallei and edwardsiella tarda are two other organisms which possess a t6ss that appears dedicated for eukaryotic targeting the t6ss of plant pathogen xanthomonas citri protects it from predatory amoeba dictyostelium discoideum a wide range of gramnegative bacteria have been shown to have antibacterial t6sss including opportunistic pathogens such as pseudomonas aeruginosa obligate commensal species that inhabit the human gut bacteroides spp and plantassociated bacteria such as agrobacterium tumefaciens these systems exert antibacterial activity via the function of their secreted substrates all characterized bacterialtargeting t6ss proteins act as toxins either by killing or preventing the growth of target cells the mechanisms of toxicity toward target cells exhibited by t6ss substrates are diverse but typically involve targeting of highly conserved bacterial structures including degradation of the cell wall through amidase or glycohydrolase activity disruption of cell membranes through lipase activity or pore formation cleavage of dna and degradation of the essential metabolite nad t6sspositive bacterial species prevent t6ssmediated intoxication towards self and kin cells by producing immunity proteins specific to each secreted toxin the immunity proteins function by binding to the toxin proteins often at their active site thereby blocking their activity some research has gone into regulation of t6ss by two component systems in p aeruginosa it has been observed that the gacsrsm twocomponent system is involved in type vi secretion system regulation this system regulates the expression of rsm small regulatory rna molecules and has also been implicated in biofilm formation upon the gacsrsm pathway stimulation an increase in rsm molecules leads to inhibition of mrnabinding protein rsma rsma is a translational inhibitor that binds to sequences near the ribosomebinding site for t6ss gene expression this level of regulation has also been observed in p fluorescens and p syringae there are various examples in which quorum sensing regulates t6ss in vibrio cholerae t6ss studies it has been observed that serotype o37 has high vas gene expression serotypes o139 and o1 on the other hand exhibit the opposite with markedly low vas gene expression it has been suggested that the differences in expression are attributable to differences in'
|
-| 8 | - 'in radio communication and avionics a conformal antenna or conformal array is a flat array antenna which is designed to conform or follow some prescribed shape for example a flat curving antenna which is mounted on or embedded in a curved surface it consists of multiple individual antennas mounted on or in the curved surface which work together as a single antenna to transmit or receive radio waves conformal antennas were developed in the 1980s as avionics antennas integrated into the curving skin of military aircraft to reduce aerodynamic drag replacing conventional antenna designs which project from the aircraft surface military aircraft and missiles are the largest application of conformal antennas but they are also used in some civilian aircraft military ships and land vehicles as the cost of the required processing technology comes down they are being considered for use in civilian applications such as train antennas car radio antennas and cellular base station antennas to save space and also to make the antenna less visually intrusive by integrating it into existing objects conformal antennas are a form of phased array antenna they are composed of an array of many identical small flat antenna elements such as dipole horn or patch antennas covering the surface at each antenna the current from the transmitter passes through a phase shifter device which are all controlled by a microprocessor computer by controlling the phase of the feed current the nondirectional radio waves emitted by the individual antennas can be made to combine in front of the antenna by the process of interference forming a strong beam or beams of radio waves pointed in any desired direction in a receiving antenna the weak individual radio signals received by each antenna element are combined in the correct phase to enhance signals coming from a particular direction so the antenna can be made sensitive to the signal from a particular station and reject interfering signals from other directions in a conventional phased array the individual antenna elements are mounted on a flat surface in a conformal antenna they are mounted on a curved surface and the phase shifters also compensate for the different phase shifts caused by the varying path lengths of the radio waves due to the location of the individual antennas on the curved surface because the individual antenna elements must be small conformal arrays are typically limited to high frequencies in the uhf or microwave range where the wavelength of the waves is small enough that small antennas can be used'
- 'autopilot are tightly controlled and extensive test procedures are put in place some autopilots also use design diversity in this safety feature critical software processes will not only run on separate computers and possibly even using different architectures but each computer will run software created by different engineering teams often being programmed in different programming languages it is generally considered unlikely that different engineering teams will make the same mistakes as the software becomes more expensive and complex design diversity is becoming less common because fewer engineering companies can afford it the flight control computers on the space shuttle used this design there were five computers four of which redundantly ran identical software and a fifth backup running software that was developed independently the software on the fifth system provided only the basic functions needed to fly the shuttle further reducing any possible commonality with the software running on the four primary systems a stability augmentation system sas is another type of automatic flight control system however instead of maintaining the aircraft required altitude or flight path the sas will move the aircraft control surfaces to damp unacceptable motions sas automatically stabilizes the aircraft in one or more axes the most common type of sas is the yaw damper which is used to reduce the dutch roll tendency of sweptwing aircraft some yaw dampers are part of the autopilot system while others are standalone systemsyaw dampers use a sensor to detect how fast the aircraft is rotating either a gyroscope or a pair of accelerometers a computeramplifier and an actuator the sensor detects when the aircraft begins the yawing part of dutch roll a computer processes the signal from the sensor to determine the rudder deflection required to damp the motion the computer tells the actuator to move the rudder in the opposite direction to the motion since the rudder has to oppose the motion to reduce it the dutch roll is damped and the aircraft becomes stable about the yaw axis because dutch roll is an instability that is inherent in all sweptwing aircraft most sweptwing aircraft need some sort of yaw damper there are two types of yaw damper the series yaw damper and the parallel yaw damper the actuator of a parallel yaw damper will move the rudder independently of the pilots rudder pedals while the actuator of a series yaw damper is clutched to the rudder control quadrant and will result in pedal movement when the rudder moves some aircraft have stability augmentation systems that will stabilize the aircraft in more than a single axis the boeing b52 for example requires both pitch and yaw sas in order to provide a stable bombing'
- 'airground radiotelephone service is a system which allows voice calls and other communication services to be made from an aircraft to either a satellite or land based network the service operates via a transceiver mounted in the aircraft on designated frequencies in the us these frequencies have been allocated by the federal communications commission the system is used in both commercial and general aviation services licensees may offer a wide range of telecommunications services to passengers and others on aircraft a us airground radiotelephone transmits a radio signal in the 849 to 851 megahertz range this signal is sent to either a receiving ground station or a communications satellite depending on the design of the particular system commercial aviation airground radiotelephone service licensees operate in the 800 mhz band and can provide communication services to all aviation markets including commercial governmental and private aircraft if it is a call from a commercial airline passenger radiotelephone the call is then forwarded to a verification center to process credit card or calling card information the verification center will then route the call to the public switched telephone network which completes the call for the return signal ground stations and satellites use a radio signal in the 894 to 896 megahertz range two separate frequency bands have been allocated by the fcc for airground telephone service one at 454459 mhz was originally reserved for general aviation use nonairliners and the 800 mhz range primarily used for airliner telephone service which has shown limited acceptance by passengers att corporation abandoned its 800 mhz airground offering in 2005 and verizon airfone formerly gte airfone is scheduled for decommissioning in late 2008 although the fcc has reauctioned verizons spectrum see below skytel now defunct which had the third nationwide 800 mhz license elected not to build it but continued to operate in the 450 mhz agras system its agras license and operating network was sold to bell industries in april 2007 the 450 mhz general aviation network is administered by midamerica computer corporation in blair nebraska which has called the service agras and requires the use of instruments manufactured by terra and chelton aviationwulfsberg electronics and marketed as the flitephone vi series general aviation airground radiotelephone service licensees operate in the 450 mhz band and can provide a variety of telecommunications services to private aircraft such as small single engine planes and corporate jetsin the 800 mhz band the fcc defined 10 blocks of paired uplinkdownlink narrowband ranges 6 khz and six control ranges 32 khz six carriers were licensed to offer inflight telephony each being granted nonex'
|
-| 25 | - 'given a finite number of vectors x 1 x 2 … x n displaystyle x1x2dots xn in a real vector space a conical combination conical sum or weighted sum of these vectors is a vector of the form α 1 x 1 α 2 x 2 [UNK] α n x n displaystyle alpha 1x1alpha 2x2cdots alpha nxn where α i displaystyle alpha i are nonnegative real numbers the name derives from the fact that the set of all conical sum of vectors defines a cone possibly in a lowerdimensional subspace the set of all conical combinations for a given set s is called the conical hull of s and denoted cones or conis that is coni s [UNK] i 1 k α i x i x i ∈ s α i ∈ r ≥ 0 k ∈ n displaystyle operatorname coni sleftsum i1kalpha ixixiin salpha iin mathbb r geq 0kin mathbb n right by taking k 0 it follows the zero vector origin belongs to all conical hulls since the summation becomes an empty sum the conical hull of a set s is a convex set in fact it is the intersection of all convex cones containing s plus the origin if s is a compact set in particular when it is a finite nonempty set of points then the condition plus the origin is unnecessary if we discard the origin we can divide all coefficients by their sum to see that a conical combination is a convex combination scaled by a positive factor therefore conical combinations and conical hulls are in fact convex conical combinations and convex conical hulls respectively moreover the above remark about dividing the coefficients while discarding the origin implies that the conical combinations and hulls may be considered as convex combinations and convex hulls in the projective space while the convex hull of a compact set is also a compact set this is not so for the conical hull first of all the latter one is unbounded moreover it is not even necessarily a closed set a counterexample is a sphere passing through the origin with the conical hull being an open halfspace plus the origin however if s is a nonempty convex compact set which does not contain the origin then the convex conical hull of s is a closed set affine combination convex combination linear combination'
- 'f a displaystyle leftsum delta frightanhfanhfa fundamental theorem of calculus ii δ [UNK] g g displaystyle delta leftsum grightg the definitions are applied to graphs as follows if a function a 0 displaystyle 0 cochain f displaystyle f is defined at the nodes of a graph a b c … displaystyle abcldots then its exterior derivative or the differential is the difference ie the following function defined on the edges of the graph 1 displaystyle 1 cochain d f a b f b − f a displaystyle leftdfrightbig abbig fbfa if g displaystyle g is a 1 displaystyle 1 cochain then its integral over a sequence of edges σ displaystyle sigma of the graph is the sum of its values over all edges of σ displaystyle sigma path integral [UNK] σ g [UNK] σ g a b displaystyle int sigma gsum sigma gbig abbig these are the properties constant rule if c displaystyle c is a constant then d c 0 displaystyle dc0 linearity if a displaystyle a and b displaystyle b are constants d a f b g a d f b d g [UNK] σ a f b g a [UNK] σ f b [UNK] σ g displaystyle dafbgadfbdgquad int sigma afbgaint sigma fbint sigma g product rule d f g f d g g d f d f d g displaystyle dfgfdggdfdfdg fundamental theorem of calculus i if a 1 displaystyle 1 chain σ displaystyle sigma consists of the edges a 0 a 1 a 1 a 2 a n − 1 a n displaystyle a0a1a1a2an1an then for any 0 displaystyle 0 cochain f displaystyle f [UNK] σ d f f a n − f a 0 displaystyle int sigma dffanfa0 fundamental theorem of calculus ii if the graph is a tree g displaystyle g is a 1 displaystyle 1 cochain and a function 0 displaystyle 0 cochain is defined on the nodes of the graph by f x [UNK] σ g displaystyle fxint sigma g where a 1 displaystyle 1 chain σ displaystyle sigma consists of a 0 a 1 a 1 a 2 a n − 1 x displaystyle a0a1a1a2an1x for some fixed a 0 displaystyle a0 then d f g displaystyle dfg see references a simplicial complex s displaystyle s is a set of simplices that satisfies the following conditions 1 every face of'
- '##2 xn of n real variables can be considered as a function on rn that is with rn as its domain the use of the real nspace instead of several variables considered separately can simplify notation and suggest reasonable definitions consider for n 2 a function composition of the following form where functions g1 and g2 are continuous if [UNK] ∈ r fx1 · is continuous by x2 [UNK] ∈ r f · x2 is continuous by x1then f is not necessarily continuous continuity is a stronger condition the continuity of f in the natural r2 topology discussed below also called multivariable continuity which is sufficient for continuity of the composition f the coordinate space rn forms an ndimensional vector space over the field of real numbers with the addition of the structure of linearity and is often still denoted rn the operations on rn as a vector space are typically defined by the zero vector is given by and the additive inverse of the vector x is given by this structure is important because any ndimensional real vector space is isomorphic to the vector space rn in standard matrix notation each element of rn is typically written as a column vector and sometimes as a row vector the coordinate space rn may then be interpreted as the space of all n × 1 column vectors or all 1 × n row vectors with the ordinary matrix operations of addition and scalar multiplication linear transformations from rn to rm may then be written as m × n matrices which act on the elements of rn via left multiplication when the elements of rn are column vectors and on elements of rm via right multiplication when they are row vectors the formula for left multiplication a special case of matrix multiplication is any linear transformation is a continuous function see below also a matrix defines an open map from rn to rm if and only if the rank of the matrix equals to m the coordinate space rn comes with a standard basis to see that this is a basis note that an arbitrary vector in rn can be written uniquely in the 
form the fact that real numbers unlike many other fields constitute an ordered field yields an orientation structure on rn any fullrank linear map of rn to itself either preserves or reverses orientation of the space depending on the sign of the determinant of its matrix if one permutes coordinates or in other words elements of the basis the resulting orientation will depend on the parity of the permutation diffeomorphisms of rn or domains in it by their virtue to avoid zero jacobian are also classified to orientationpreserving and orientationreversing it has important consequences for the theory of differential forms whose applications include electrodynamics'
|
-| 34 | - 'tethered to state and corporatesponsored science and social studies standards or fails to articulate the political necessity for widespread understanding of the unsustainable nature of modern lifestyles however ecopedagogy has tried to utilize the ongoing united nations decade of educational for sustainable development 2005 – 2015 to make strategic interventions on behalf of the oppressed using it as an opportunity to unpack and clarify the concept of sustainable development ecopedagogy scholar richard kahn describes the three main goals of the ecopedagogy movement to be creating opportunities for the proliferation of ecoliteracy programs both within schools and society bridging the gap of praxis between scholars and the public especially activists on ecopedagogical interests instigating dialogue and selfreflective solidarity across the many groups among educational left particularly in light of the existing planetary crisis angela antunes and moacir gadotti 2005 writeecopedagogy is not just another pedagogy among many other pedagogies it not only has meaning as an alternative project concerned with nature preservation natural ecology and the impact made by human societies on the natural environment social ecology but also as a new model for sustainable civilization from the ecological point of view integral ecology which implies making changes on economic social and cultural structuresaccording to social movement theorists ron ayerman and andrew jamison there are three broad dimensions of environmentally related movements cosmological technological and organizational in ecopedagogy these dimensions are outlined by richard kahn 2010 as the following the cosmological dimension focuses on how ecoliteracy ie understanding the natural systems that sustain life can transform people ’ s worldviews for example assumptions about society ’ s having the right to exploit nature can be transformed into understanding of the need for ecological balance to support 
society in the long term the success of such ‘ cosmological ’ thinking transformations can be assessed by the degree to which such paradigm shifts are adopted by the public the technological dimension is twofold critiquing the set of polluting technologies that have contributed to traditional development as well as some which are used or misused under the pretext of sustainable development and promoting clean technologies that do not interfere with ecological and social balance the organizational dimension emphasizes that knowledge should be of and for the people thus academics should be in dialogue with public discourse and social movements ecopedagogy is not the collection of theories or practices developed by any particular set of individuals rather akin to the world social forum and other related forms of contemporary popular education strategies it is a worldwide association of critical educators theorists nongovernmental and governmental'
- 'marshall college dr moog has used pogil materials in his teaching since 1994 and is a coauthor of pogil materials for both general and physical chemistry'
- '##mans book is informed by an advanced theoretical knowledge of scholarly research documents and their composition for example chapter 6 is about recognizing the many voices in a text the practical advises given are based on textual theory mikhail bakhtin and julia kristeva chapter 8 is titled evaluating the book as a whole the book review and the first heading is books as tools basically critical reading is related to epistemological issues hermeneutics eg the version developed by hansgeorg gadamer has demonstrated that the way we read and interpret texts is dependent on our preunderstanding and prejudices human knowledge is always an interpretative clarification of the world not a pure interestfree theory hermeneutics may thus be understood as a theory about critical reading this field was until recently associated with the humanities not with science this situation changed when thomas samuel kuhn published his book 1962 the structure of scientific revolutions which can be seen as an hermeneutic interpretation of the sciences because it conceives the scientists as governed by assumptions which are historically embedded and linguistically mediated activities organized around paradigms that direct the conceptualization and investigation of their studies scientific revolutions imply that one paradigm replaces another and introduces a new set of theories approaches and definitions according to mallery hurwitz duffy 1992 the notion of a paradigmcentered scientific community is analogous to gadamers notion of a linguistically encoded social tradition in this way hermeneutics challenge the positivist view that science can cumulate objective facts observations are always made on the background of theoretical assumptions they are theory dependent by conclusion is critical reading not just something that any scholar is able to do the way we read is partly determined by the intellectual traditions which have formed our beliefs and thinking generally we read papers within 
our own culture or tradition less critically compared to our reading of papers from other traditions or paradigms the psychologist cyril burt is known for his studies on the effect of heredity on intelligence shortly after he died his studies of inheritance and intelligence came into disrepute after evidence emerged indicating he had falsified research data a 1994 paper by william h tucker is illuminative on both how critical reading was performed in the discovery of the falsified data as well as in many famous psychologists noncritical reading of burts papers tucker shows that the recognized experts within the field of intelligence research blindly accepted cyril burts research even though it was without scientific value and probably directly faked they wanted to believe that iq is hereditary and considered uncritically empirical claims supporting this view this paper thus demonstrates how critical reading and the opposite'
|
-| 23 | - 'in biochemistry immunostaining is any use of an antibodybased method to detect a specific protein in a sample the term immunostaining was originally used to refer to the immunohistochemical staining of tissue sections as first described by albert coons in 1941 however immunostaining now encompasses a broad range of techniques used in histology cell biology and molecular biology that use antibodybased staining methods immunohistochemistry or ihc staining of tissue sections or immunocytochemistry which is the staining of cells is perhaps the most commonly applied immunostaining technique while the first cases of ihc staining used fluorescent dyes see immunofluorescence other nonfluorescent methods using enzymes such as peroxidase see immunoperoxidase staining and alkaline phosphatase are now used these enzymes are capable of catalysing reactions that give a coloured product that is easily detectable by light microscopy alternatively radioactive elements can be used as labels and the immunoreaction can be visualized by autoradiographytissue preparation or fixation is essential for the preservation of cell morphology and tissue architecture inappropriate or prolonged fixation may significantly diminish the antibody binding capability many antigens can be successfully demonstrated in formalinfixed paraffinembedded tissue sections however some antigens will not survive even moderate amounts of aldehyde fixation under these conditions tissues should be rapidly fresh frozen in liquid nitrogen and cut with a cryostat the disadvantages of frozen sections include poor morphology poor resolution at higher magnifications difficulty in cutting over paraffin sections and the need for frozen storage alternatively vibratome sections do not require the tissue to be processed through organic solvents or high heat which can destroy the antigenicity or disrupted by freeze thawing the disadvantage of vibratome sections is that the sectioning process is slow and difficult with 
soft and poorly fixed tissues and that chatter marks or vibratome lines are often apparent in the sectionsthe detection of many antigens can be dramatically improved by antigen retrieval methods that act by breaking some of the protein crosslinks formed by fixation to uncover hidden antigenic sites this can be accomplished by heating for varying lengths of times heat induced epitope retrieval or hier or using enzyme digestion proteolytic induced epitope retrieval or pierone of the main difficulties with ihc staining is overcoming specific or nonspecific background optimisation of fixation methods and times pre'
- 'the strategic advisory group of experts sage is the principal advisory group to world health organization who for vaccines and immunization established in 1999 through the merging of two previous committees notably the scientific advisory group of experts which served the program for vaccine development and the global advisory group which served the epi program by directorgeneral of the who gro harlem brundtland it is charged with advising who on overall global policies and strategies ranging from vaccines and biotechnology research and development to delivery of immunization and its linkages with other health interventions sage is concerned not just with childhood vaccines and immunization but all vaccinepreventable diseases sage provide global recommendations on immunization policy and such recommendations will be further translated by advisory committee at the country level the sage has 15 members who are recruited and selected as acknowledged experts from around the world in the fields of epidemiology public health vaccinology paediatrics internal medicine infectious diseases immunology drug regulation programme management immunization delivery healthcare administration health economics and vaccine safety members are appointed by directorgeneral of the who to serve an initial term of 3 years and can only be renewed once sage meets at least twice annually in april and november with working groups established for detailed review of specific topics prior to discussion by the full group priorities of work and meeting agendas are developed by the group in consultation with whounicef the secretariat of the gavi alliance and who regional offices participate as observers in sage meetings and deliberations who also invites other observers to sage meetings including representatives from who regional technical advisory groups nongovernmental organizations international professional organizations technical agencies donor organizations and associations of manufacturers 
of vaccines and immunization technologies additional experts may be invited as appropriate to further contribute to specific agenda itemsas of december 2022 working groups were established for the following vaccines covid19 dengue ebola hpv meningococcal vaccines and vaccination pneumococcal vaccines polio vaccine programme advisory group pag for the malaria vaccine implementation programme smallpox and monkeypox vaccines national immunization technical advisory group countrylevel advisory committee'
- 'rates or body cells that are dying which subsequently cause physiological problems are generally not specifically targeted by the immune system since tumor cells are the patients own cells tumor cells however are highly abnormal and many display unusual antigens some such tumor antigens are inappropriate for the cell type or its environment monoclonal antibodies can target tumor cells or abnormal cells in the body that are recognized as body cells but are debilitating to ones health immunotherapy developed in the 1970s following the discovery of the structure of antibodies and the development of hybridoma technology which provided the first reliable source of monoclonal antibodies these advances allowed for the specific targeting of tumors both in vitro and in vivo initial research on malignant neoplasms found mab therapy of limited and generally shortlived success with blood malignancies treatment also had to be tailored to each individual patient which was impracticable in routine clinical settingsfour major antibody types that have been developed are murine chimeric humanised and human antibodies of each type are distinguished by suffixes on their name initial therapeutic antibodies were murine analogues suffix omab these antibodies have a short halflife in vivo due to immune complex formation limited penetration into tumour sites and inadequately recruit host effector functions chimeric and humanized antibodies have generally replaced them in therapeutic antibody applications understanding of proteomics has proven essential in identifying novel tumour targetsinitially murine antibodies were obtained by hybridoma technology for which jerne kohler and milstein received a nobel prize however the dissimilarity between murine and human immune systems led to the clinical failure of these antibodies except in some specific circumstances major problems associated with murine antibodies included reduced stimulation of cytotoxicity and the formation of complexes after 
repeated administration which resulted in mild allergic reactions and sometimes anaphylactic shock hybridoma technology has been replaced by recombinant dna technology transgenic mice and phage display to reduce murine antibody immunogenicity attacks by the immune system against the antibody murine molecules were engineered to remove immunogenic content and to increase immunologic efficiency this was initially achieved by the production of chimeric suffix ximab and humanized antibodies suffix zumab chimeric antibodies are composed of murine variable regions fused onto human constant regions taking human gene sequences from the kappa light chain and the igg1 heavy chain results in antibodies that are approximately 65 human this reduces immunogenicity and thus increases serum halflifehumanised antibodies are produced by grafting murine hypervariable regions on amino acid domains'
|
-| 12 | - 'of integers rational numbers algebraic numbers real numbers or complex numbers s 0 s 1 s 2 s 3 … displaystyle s0s1s2s3ldots written as s n n 0 ∞ displaystyle snn0infty as a shorthand satisfying a formula of the form for all n ≥ d displaystyle ngeq d where c i displaystyle ci are constants this equation is called a linear recurrence with constant coefficients of order d the order of the constantrecursive sequence is the smallest d ≥ 1 displaystyle dgeq 1 such that the sequence satisfies a formula of the above form or d 0 displaystyle d0 for the everywherezero sequence the d coefficients c 1 c 2 … c d displaystyle c1c2dots cd must be coefficients ranging over the same domain as the sequence integers rational numbers algebraic numbers real numbers or complex numbers for example for a rational constantrecursive sequence s i displaystyle si and c i displaystyle ci must be rational numbers the definition above allows eventuallyperiodic sequences such as 1 0 0 0 … displaystyle 1000ldots and 0 1 0 0 … displaystyle 0100ldots some authors require that c d = 0 displaystyle cdneq 0 which excludes such sequences the sequence 0 1 1 2 3 5 8 13 of fibonacci numbers is constantrecursive of order 2 because it satisfies the recurrence f n f n − 1 f n − 2 displaystyle fnfn1fn2 with f 0 0 f 1 1 displaystyle f00f11 for example f 2 f 1 f 0 1 0 1 displaystyle f2f1f0101 and f 6 f 5 f 4 5 3 8 displaystyle f6f5f4538 the sequence 2 1 3 4 7 11 of lucas numbers satisfies the same recurrence as the fibonacci sequence but with initial conditions l 0 2 displaystyle l02 and l 1 1 displaystyle l11 more generally every lucas sequence is constantrecursive of order 2 for any a displaystyle a and any r = 0 displaystyle rneq 0 the arithmetic progression a a r a 2 r … displaystyle aara2rldots is constantrecursive of order 2 because it satisfies s n 2 s n − 1 − s n − 2 displaystyle sn2sn1sn2 generalizing this see polynomial sequences below for any a = 0 displaystyle aneq 0'
- '##widehat qshgeq varepsilon 2 where r displaystyle r and s displaystyle s are iid samples of size m displaystyle m drawn according to the distribution p displaystyle p one can view r displaystyle r as the original randomly drawn sample of length m displaystyle m while s displaystyle s may be thought as the testing sample which is used to estimate q p h displaystyle qph permutation since r displaystyle r and s displaystyle s are picked identically and independently so swapping elements between them will not change the probability distribution on r displaystyle r and s displaystyle s so we will try to bound the probability of q r h − q s h ≥ ε 2 displaystyle widehat qrhwidehat qshgeq varepsilon 2 for some h ∈ h displaystyle hin h by considering the effect of a specific collection of permutations of the joint sample x r s displaystyle xrs specifically we consider permutations σ x displaystyle sigma x which swap x i displaystyle xi and x m i displaystyle xmi in some subset of 1 2 m displaystyle 12m the symbol r s displaystyle rs means the concatenation of r displaystyle r and s displaystyle s reduction to a finite class we can now restrict the function class h displaystyle h to a fixed joint sample and hence if h displaystyle h has finite vc dimension it reduces to the problem to one involving a finite function classwe present the technical details of the proof lemma let v x ∈ x m q p h − q x h ≥ ε for some h ∈ h displaystyle vxin xmqphwidehat qxhgeq varepsilon text for some hin h and r r s ∈ x m × x m q r h − q s h ≥ ε 2 for some h ∈ h displaystyle rrsin xmtimes xmwidehat qrhwidehat qshgeq varepsilon 2text for some hin h then for m ≥ 2 ε 2 displaystyle mgeq frac 2varepsilon 2 p m v ≤ 2 p 2 m r displaystyle pmvleq 2p2mr proof by the triangle inequality if q p h − q r h ≥ ε displaystyle qphwidehat qrhgeq varepsilon and q p h − q s h ≤ ε 2 displaystyle qphwidehat qshleq varepsilon 2 then q r h − q s h ≥'
- 'x nonempty subsets or counting equivalence relations on n with exactly x classes indeed for any surjective function f n → x the relation of having the same image under f is such an equivalence relation and it does not change when a permutation of x is subsequently applied conversely one can turn such an equivalence relation into a surjective function by assigning the elements of x in some manner to the x equivalence classes the number of such partitions or equivalence relations is by definition the stirling number of the second kind snx also written n x displaystyle textstyle n atop x its value can be described using a recursion relation or using generating functions but unlike binomial coefficients there is no closed formula for these numbers that does not involve a summation surjective functions from n to x for each surjective function f n → x its orbit under permutations of x has x elements since composition on the left with two distinct permutations of x never gives the same function on n the permutations must differ at some element of x which can always be written as fi for some i ∈ n and the compositions will then differ at i it follows that the number for this case is x times the number for the previous case that is x n x displaystyle textstyle xn atop x example x a b n 1 2 3 then displaystyle xabn123text then a a b a b a a b b b a a b a b b b a 2 3 2 2 × 3 6 displaystyle leftvert aababaabbbaababbbarightvert 2left3 atop 2right2times 36 functions from n to x up to a permutation of x this case is like the corresponding one for surjective functions but some elements of x might not correspond to any equivalence class at all since one considers functions up to a permutation of x it does not matter which elements are concerned just how many as a consequence one is counting equivalence relations on n with at most x classes and the result is obtained from the mentioned case by summation over values up to x giving [UNK] k 0 x n k displaystyle textstyle sum k0xn 
atop k in case x ≥ n the size of x poses no restriction at all and one is counting all equivalence relations on a set of n elements equivalently all partitions of such a set therefore [UNK] k 0 n n k displaystyle textstyle sum k0nn atop k gives an expression for the bell number bn surjective functions from n to x'
|
-| 31 | - 'are real but the future is not until einsteins reinterpretation of the physical concepts associated with time and space in 1907 time was considered to be the same everywhere in the universe with all observers measuring the same time interval for any event nonrelativistic classical mechanics is based on this newtonian idea of time einstein in his special theory of relativity postulated the constancy and finiteness of the speed of light for all observers he showed that this postulate together with a reasonable definition for what it means for two events to be simultaneous requires that distances appear compressed and time intervals appear lengthened for events associated with objects in motion relative to an inertial observer the theory of special relativity finds a convenient formulation in minkowski spacetime a mathematical structure that combines three dimensions of space with a single dimension of time in this formalism distances in space can be measured by how long light takes to travel that distance eg a lightyear is a measure of distance and a meter is now defined in terms of how far light travels in a certain amount of time two events in minkowski spacetime are separated by an invariant interval which can be either spacelike lightlike or timelike events that have a timelike separation cannot be simultaneous in any frame of reference there must be a temporal component and possibly a spatial one to their separation events that have a spacelike separation will be simultaneous in some frame of reference and there is no frame of reference in which they do not have a spatial separation different observers may calculate different distances and different time intervals between two events but the invariant interval between the events is independent of the observer and his or her velocity unlike space where an object can travel in the opposite directions and in 3 dimensions time appears to have only one dimension and only one direction – the past lies behind 
fixed and immutable while the future lies ahead and is not necessarily fixed yet most laws of physics allow any process to proceed both forward and in reverse there are only a few physical phenomena that violate the reversibility of time this time directionality is known as the arrow of time acknowledged examples of the arrow of time are radiative arrow of time manifested in waves eg light and sound travelling only expanding rather than focusing in time see light cone entropic arrow of time according to the second law of thermodynamics an isolated system evolves toward a larger disorder rather than orders spontaneously quantum arrow time which is related to irreversibility of measurement in quantum mechanics according to the copenhagen interpretation of quantum mechanics weak arrow of time preference for a certain time direction of weak force in'
- 'presented is as easy to understand as possible although illuminating a branch of mathematics is the purpose of textbooks rather than the mathematical theory they might be written to cover a theory can be either descriptive as in science or prescriptive normative as in philosophy the latter are those whose subject matter consists not of empirical data but rather of ideas at least some of the elementary theorems of a philosophical theory are statements whose truth cannot necessarily be scientifically tested through empirical observation a field of study is sometimes named a theory because its basis is some initial set of assumptions describing the fields approach to the subject these assumptions are the elementary theorems of the particular theory and can be thought of as the axioms of that field some commonly known examples include set theory and number theory however literary theory critical theory and music theory are also of the same form one form of philosophical theory is a metatheory or metatheory a metatheory is a theory whose subject matter is some other theory or set of theories in other words it is a theory about theories statements made in the metatheory about the theory are called metatheorems a political theory is an ethical theory about the law and government often the term political theory refers to a general view or specific ethic political belief or attitude thought about politics in social science jurisprudence is the philosophical theory of law contemporary philosophy of law addresses problems internal to law and legal systems and problems of law as a particular social institution most of the following are scientific theories some are not but rather encompass a body of knowledge or art such as music theory and visual arts theories anthropology carneiros circumscription theory astronomy alpher – bethe – gamow theory — b2fh theory — copernican theory — newtons theory of gravitation — hubbles law — keplers laws of planetary motion ptolemaic theory 
biology cell theory — chemiosmotic theory — evolution — germ theory — symbiogenesis chemistry molecular theory — kinetic theory of gases — molecular orbital theory — valence bond theory — transition state theory — rrkm theory — chemical graph theory — flory – huggins solution theory — marcus theory — lewis theory successor to brønsted – lowry acid – base theory — hsab theory — debye – huckel theory — thermodynamic theory of polymer elasticity — reptation theory — polymer field theory — møller – plesset perturbation theory — density functional theory — frontier molecular orbital theory — polyhedral skeletal electron pair theory — baeyer strain theory — quantum theory of'
- 'largely agreed with parmenidess reasoning on nothing aristotle differs with parmenidess conception of nothing and says although these opinions seem to follow logically in a dialectical discussion yet to believe them seems next door to madness when one considers the factsin modern times albert einsteins concept of spacetime has led many scientists including einstein himself to adopt a position remarkably similar to parmenides on the death of his friend michele besso einstein consoled his widow with the words now he has departed from this strange world a little ahead of me that signifies nothing for those of us that believe in physics the distinction between past present and future is only a stubbornly persistent illusion leucippus leucippus early 5th century bc one of the atomists along with other philosophers of his time made attempts to reconcile this monism with the everyday observation of motion and change he accepted the monist position that there could be no motion without a void the void is the opposite of being it is notbeing on the other hand there exists something known as an absolute plenum a space filled with matter and there can be no motion in a plenum because it is completely full but there is not just one monolithic plenum for existence consists of a multiplicity of plenums these are the invisibly small atoms of greek atomist theory later expanded by democritus c 460 – 370 bc which allows the void to exist between them in this scenario macroscopic objects can comeintobeing move through space and pass into notbeing by means of the coming together and moving apart of their constituent atoms the void must exist to allow this to happen or else the frozen world of parmenides must be accepted bertrand russell points out that this does not exactly defeat the argument of parmenides but rather ignores it by taking the rather modern scientific position of starting with the observed data motion etc and constructing a theory based on the data as opposed to 
parmenides attempts to work from pure logic russell also observes that both sides were mistaken in believing that there can be no motion in a plenum but arguably motion cannot start in a plenum cyril bailey notes that leucippus is the first to say that a thing the void might be real without being a body and points out the irony that this comes from a materialistic atomist leucippus is therefore the first to say that nothing has a reality attached to it aristotle newton descartes aristotle 384 – 322 bc provided the classic escape from the logical problem posed by parmenides by distinguishing things that'
|
-| 38 | - 'in sociolinguistics prestige is the level of regard normally accorded a specific language or dialect within a speech community relative to other languages or dialects prestige varieties are language or dialect families which are generally considered by a society to be the most correct or otherwise superior in many cases they are the standard form of the language though there are exceptions particularly in situations of covert prestige where a nonstandard dialect is highly valued in addition to dialects and languages prestige is also applied to smaller linguistic features such as the pronunciation or usage of words or grammatical constructs which may not be distinctive enough to constitute a separate dialect the concept of prestige provides one explanation for the phenomenon of variation in form among speakers of a language or languagesthe presence of prestige dialects is a result of the relationship between the prestige of a group of people and the language that they use generally the language or variety that is regarded as more prestigious in that community is the one used by the more prestigious group the level of prestige a group has can also influence whether the language that they speak is considered its own language or a dialect implying that it does not have enough prestige to be considered its own language social class has a correlation with the language that is considered more prestigious and studies in different communities have shown that sometimes members of a lower social class attempt to emulate the language of individuals in higher social classes to avoid how their distinct language would otherwise construct their identity the relationship between language and identity construction as a result of prestige influences the language used by different individuals depending on which groups they do belong or want to belong sociolinguistic prestige is especially visible in situations where two or more distinct languages are used and in diverse socially stratified urban areas in which there are likely to be speakers of different languages andor dialects interacting often the result of language contact depends on the power relationship between the languages of the groups that are in contact the prevailing view among contemporary linguists is that regardless of perceptions that a dialect or language is better or worse than its counterparts when dialects and languages are assessed on purely linguistic grounds all languages — and all dialects — have equal meritadditionally which varieties registers or features will be considered more prestigious depends on audience and context there are thus the concepts of overt and covert prestige overt prestige is related to standard and formal language features and expresses power and status covert prestige is related more to vernacular and often patois and expresses solidarity community and group identity more than authority prestige varieties are those that are regarded mostly highly within a society as such the standard language the form promoted by authorities — usually governmental or from those in power — and considered'
- 'english elements engaged in the codeswitching process are mostly of one or two words in length and are usually content words that can fit into the surrounding cantonese phrase fairly easily like nouns verbs adjectives and occasionally adverbs examples include [UNK] canteen 食 [UNK] heoi3 ken6tin1 sik6 faan6 go to the canteen for lunch [UNK] [UNK] [UNK] press [UNK] hou2 do1 je5 pet1 si4 nei5 a lot of things press you 我 [UNK] sure ngo5 m4 su1aa4 im not sure [UNK] 我 check 一 check [UNK] bong1 ngo5 cek1 jat1 cek1 aa1 help me searchcheck for itmeanwhile structure words like determiners conjunctions and auxiliary verbs almost never appear alone in the predominantly cantonese discourse which explains the ungrammaticality of two [UNK] does not make sense but literally means two parts english lexical items on the other hand are frequently assimilated into cantonese grammar for instance [UNK] part loeng5 paat1 two parts part would lose its plural morpheme s as do its counterpart in cantonese equip [UNK] ji6 kwip1 zo2 equipped equip is followed by a cantonese perfective aspect marker a more evident case of the syntactic assimilation would be where a negation marker is inserted into an english compound adjective or verb to form yes – no questions in cantonese [UNK] [UNK] [UNK] [UNK] 愛 [UNK] ? keoi5 ho2 m4 ho2 oi3 aa3 is shehe lovely is pure cantonese while a sentence like [UNK] cu [UNK] cute [UNK] ? keoi5 kiu1 m4 cute aa3 is heshe cute is a typical example of the assimilationfor english elements consisting of two words or more they generally retain english grammar internally without disrupting the surrounding cantonese grammar for example [UNK] [UNK] [UNK] [UNK] parttime job [UNK] m5 sai2 zoi3 wan2 paat1 taam1 zop1 laa3 you dont need to look for a parttime job againexamples are taken from the same source the first major framework dichotomises motivations of codeswitching in hong kong into expedient mixing and orientational mixing for expedient mixing the speaker would turn to english eg form if the correspondent low cantonese expression is not available and the existing high cantonese expression eg [UNK] [UNK] biu2 gaak3 sounds too formal in the case of orientational mixing despite the presence of both high and low expression eg for barbecue there exists both [UNK] [UNK] siu1'
- 'the participants with less dominant participants generally being more attentive to more dominant participants ’ words an opposition between urban and suburban linguistic variables is common to all metropolitan regions of the united states although the particular variables distinguishing urban and suburban styles may differ from place to place the trend is for urban styles to lead in the use of nonstandard forms and negative concord in penny eckerts study of belten high in the detroit suburbs she noted a stylistic difference between two groups that she identified schooloriented jocks and urbanoriented schoolalienated burnouts the variables she analyzed were the usage of negative concord and the mid and low vowels involved in the northern cities shift which consists of the following changes æ ea a æ ə a ʌ ə ay oy and ɛ ʌ y here is equivalent to the ipa symbol j all of these changes are urbanled as is the use of negative concord the older mostly stabilized changes æ ea a æ and ə a were used the most by women while the newer changes ʌ ə ay oy and ɛ ʌ were used the most by burnouts eckert theorizes that by using an urban variant such as foyt they were not associating themselves with urban youth rather they were trying to index traits that were associated with urban youth such as tough and streetsmart this theory is further supported by evidence from a subgroup within the burnout girls which eckert refers to as ‘ burnedout ’ burnout girls she characterizes this group as being even more antiestablishment than the ‘ regular ’ burnout girls this subgroup led overall in the use of negative concord as well as in femaleled changes this is unusual because negative concord is generally used the most by males ‘ burnedout ’ burnout girls were not indexing masculinity — this is shown by their use of femaleled variants and the fact that they were found to express femininity in nonlinguistic ways this shows that linguistic variables may have different meanings in the context of different styles there is some debate about what makes a style gay in stereotypically flamboyant gay speech the phonemes s and l have a greater duration people are also more likely to identify those with higher frequency ranges as gayon the other hand there are many different styles represented within the gay community there is much linguistic variation in the gay community and each subculture appears to have its own distinct features according to podesva et al gay culture encompasses reified categories such as leather daddies clones drag queens circuit boys guppies gay yuppies gay prostitutes and activists'
|
-| 6 | - '##c vec xi vec xi prime sigma vec xi prime vec xi vec xi prime 2d2xi prime as shown in the diagram on the right the difference between the unlensed angular position β → displaystyle vec beta and the observed position θ → displaystyle vec theta is this deflection angle reduced by a ratio of distances described as the lens equation β → θ → − α → θ → θ → − d d s d s α → d d θ → displaystyle vec beta vec theta vec alpha vec theta vec theta frac ddsdsvec hat alpha vec ddtheta where d d s displaystyle dds is the distance from the lens to the source d s displaystyle ds is the distance from the observer to the source and d d displaystyle dd is the distance from the observer to the lens for extragalactic lenses these must be angular diameter distances in strong gravitational lensing this equation can have multiple solutions because a single source at β → displaystyle vec beta can be lensed into multiple images the reduced deflection angle α → θ → displaystyle vec alpha vec theta can be written as α → θ → 1 π [UNK] d 2 θ ′ θ → − θ → ′ κ θ → ′ θ → − θ → ′ 2 displaystyle vec alpha vec theta frac 1pi int d2theta prime frac vec theta vec theta prime kappa vec theta prime vec theta vec theta prime 2 where we define the convergence κ θ → σ θ → σ c r displaystyle kappa vec theta frac sigma vec theta sigma cr and the critical surface density not to be confused with the critical density of the universe σ c r c 2 d s 4 π g d d s d d displaystyle sigma crfrac c2ds4pi gddsdd we can also define the deflection potential ψ θ → 1 π [UNK] d 2 θ ′ κ θ → ′ ln θ → − θ → ′ displaystyle psi vec theta frac 1pi int d2theta prime kappa vec theta prime ln vec theta vec theta prime such that the scaled deflection angle is just the gradient of the potential and the convergence is half the laplacian of the potential θ → − β → α → θ → ∇ → ψ θ → displaystyle vec theta vec beta vec alpha vec theta vec nabla psi vec theta κ θ → 1 2 ∇ 2 ψ'
- 'scattering cils or raman process also exists which is well studied and is in many ways completely analogous to cia and cie cils arises from interactioninduced polarizability increments of molecular complexes the excess polarizability of a complex relative the sum of polarizabilities of the noninteracting molecules molecules interact at close range through intermolecular forces the van der waals forces which cause minute shifts of the electron density distributions relative the distributions of electrons when the molecules are not interacting intermolecular forces are repulsive at near range where electron exchange forces dominate the interaction and attractive at somewhat greater separations where the dispersion forces are active if separations are further increased all intermolecular forces fall off rapidly and may be totally neglected repulsion and attraction are due respectively to the small defects or excesses of electron densities of molecular complexes in the space between the interacting molecules which often result in interactioninduced electric dipole moments that contribute some to interactioninduced emission and absorption intensities the resulting dipoles are referred to as exchange forceinduced dipole and dispersion forceinduced dipoles respectively other dipole induction mechanisms also exist in molecular as opposed to monatomic gases and in mixtures of gases when molecular gases are present molecules have centers of positive charge the nuclei which are surrounded by a cloud of electrons molecules thus may be thought of being surrounded by various electric multipolar fields which will polarize any collisional partner momentarily in a flyby encounter generating the socalled multipoleinduced dipoles in diatomic molecules such as h2 and n2 the lowestorder multipole moment is the quadrupole followed by a hexadecapole etc hence the quadrupoleinduced hexadecapoleinduced dipoles especially the former is often the strongest most significant of the induced dipoles contributing to cia and cie other induced dipole mechanisms exist in collisional systems involving molecules of three or more atoms co2 ch4 collisional frame distortion may be an important induction mechanism collisioninduced emission and absorption by simultaneous collisions of three or more particles generally do involve pairwiseadditive dipole components as well as important irreducible dipole contributions and their spectra collisioninduced absorption was first reported in compressed oxygen gas in 1949 by harry welsch and associates at frequencies of the fundamental band of the o2 molecule note that an unperturbed o2 molecule like all other diatomic homonuclear molecules'
- 'the firehose instability or hosepipe instability is a dynamical instability of thin or elongated galaxies the instability causes the galaxy to buckle or bend in a direction perpendicular to its long axis after the instability has run its course the galaxy is less elongated ie rounder than before any sufficiently thin stellar system in which some component of the internal velocity is in the form of random or counterstreaming motions as opposed to rotation is subject to the instability the firehose instability is probably responsible for the fact that elliptical galaxies and dark matter haloes never have axis ratios more extreme than about 31 since this is roughly the axis ratio at which the instability sets in it may also play a role in the formation of barred spiral galaxies by causing the bar to thicken in the direction perpendicular to the galaxy diskthe firehose instability derives its name from a similar instability in magnetized plasmas however from a dynamical point of view a better analogy is with the kelvin – helmholtz instability or with beads sliding along an oscillating string the firehose instability can be analyzed exactly in the case of an infinitely thin selfgravitating sheet of stars if the sheet experiences a small displacement h x t displaystyle hxt in the z displaystyle z direction the vertical acceleration for stars of x displaystyle x velocity u displaystyle u as they move around the bend is a z ∂ ∂ t u ∂ ∂ x 2 h ∂ 2 h ∂ t 2 2 u ∂ 2 h ∂ t ∂ x u 2 ∂ 2 h ∂ x 2 displaystyle azleftpartial over partial tupartial over partial xright2hpartial 2h over partial t22upartial 2h over partial tpartial xu2partial 2h over partial x2 provided the bend is small enough that the horizontal velocity is unaffected averaged over all stars at x displaystyle x this acceleration must equal the gravitational restoring force per unit mass f x displaystyle fx in a frame chosen such that the mean streaming motions are zero this relation becomes ∂ 2 h ∂ t 2 σ u 2 ∂ 2 h ∂ x 2 − f z x t 0 displaystyle partial 2h over partial t2sigma u2partial 2h over partial x2fzxt0 where σ u displaystyle sigma u is the horizontal velocity dispersion in that frame for a perturbation of the form h x t h exp i k x − ω t displaystyle hxthexp leftmathrm i leftkxomega trightright the gravitational restoring force is f z x'
|
-| 18 | - 'the american institute of graphic arts aiga is a professional organization for design its members practice all forms of communication design including graphic design typography interaction design user experience branding and identity the organizations aim is to be the standard bearer for professional ethics and practices for the design profession there are currently over 25000 members and 72 chapters and more than 200 student groups around the united states in 2005 aiga changed its name to “ aiga the professional association for design ” dropping the american institute of graphic arts to welcome all design disciplines aiga aims to further design disciplines as professions as well as cultural assets as a whole aiga offers opportunities in exchange for creative new ideas scholarly research critical analysis and education advancement in 1911 frederic goudy alfred stieglitz and w a dwiggins came together to discuss the creation of an organization that was committed to individuals passionate about communication design in 1913 president of the national arts club john g agar announced the formation of the american institute of graphic arts during the eighth annual exhibition of “ the books of the year ” the national arts club was instrumental in the formation of aiga in that they helped to form the committee to plan to organize the organization the committee formed included charles dekay and william b howland and officially formed the american institute of graphic arts in 1914 howland publisher and editor of the outlook was elected president the goal of the group was to promote excellence in the graphic design profession through its network of local chapters throughout the countryin 1920 aiga began awarding medals to individuals who have set standards of excellence over a lifetime of work or have made individual contributions to innovation within the practice of design winners have been recognized for design teaching writing or leadership of the profession and may honor individuals posthumouslyin 1982 the new york chapter was formed and the organization began creating local chapters to decentralize leadershiprepresented by washington dc arts advocate and attorney james lorin silverberg esq the washington dc chapter of aiga was organized as the american institute of graphic arts incorporated washington dc on september 6 1984 the aiga in collaboration with the us department of transportation produced 50 standard symbols to be used on signs in airports and other transportation hubs and at large international events the first 34 symbols were published in 1974 receiving a presidential design award the remaining 16 designs were added in 1979 in 2012 aiga replaced all its competitions with a single competition called cased formerly called justified the stated aim of the competition is to demonstrate the collective success and impact of the design profession by celebrating the best in contemporary design through case studies between 1941 and 2011 aiga sponsored a juried contest for the 50 best designed'
- 'a vignette in graphic design is a french loanword meaning a unique form for a frame to an image either illustration or photograph rather than the images edges being rectilinear it is overlaid with decorative artwork featuring a unique outline this is similar to the use of the word in photography where the edges of an image that has been vignetted are nonlinear or sometimes softened with a mask – often a darkroom process of introducing a screen an oval vignette is probably the most common example originally a vignette was a design of vineleaves and tendrils vignette small vine in french the term was also used for a small embellishment without border in what otherwise would have been a blank space such as that found on a titlepage a headpiece or tailpiece the use in modern graphic design is derived from book publishing techniques dating back to the middle ages analytical bibliography ca 1450 to 1800 when a vignette referred to an engraved design printed using a copperplate press on a page that has already been printed on using a letter press printing press vignettes are sometimes distinguished from other intext illustrations printed on a copperplate press by the fact that they do not have a border such designs usually appear on titlepages only woodcuts which are printed on a letterpress and are also used to separate sections or chapters are identified as a headpiece tailpiece or printers ornament depending on shape and position calligraphy another conjunction of text and decoration curlicues flourishes in the arts usually composed of concentric circles often used in calligraphy scrollwork general name for scrolling abstract decoration used in many areas of the visual arts'
- 'archibald winterbottom was a british cotton cloth merchant who is best known for becoming the largest producer of bookcloth and tracing cloth in the world bookcloth became the dominant bookbinding material in the early 19th century which was much cheaper and easier to work with than leather revolutionising the manufacture and distribution of books winterbottom was born in linthwaite in the heart of the west riding of yorkshire the son of a third generation wool cloth merchant william whitehead winterbottom 1771 – 1842 and isabella nee dickson 1784 – 1849 not long after the family moved to the civil parish of saddleworth where winterbottom at the age of 15 left home in search of his fortune he reportedly promised his father that when he obtained a position he would “ do his utmost to succeed ” in 1829 winterbottom is said to have walked the 12 miles to manchester presumably seeking an apprenticeship beginning his working life as a clerk with the largest cotton merchants in manchester henry bannerman sons he remained with bannermans for the next twentythree years where he learned how to refine cloth to the highest degree and developed different finishes that could be applied to plain cloth at the age of nineteen he was appointed to manage their bradford accounts and to run their silesia department patenting a silvery finish lining which became known as dacians winterbottom was made a partner at bannermans aged thirty which he held for the next nine years manchester was at the heart of the cotton industry in britain during the 19th century which was a labourintensive sector at a time when half of the workforce were children in 1845 winterbottom married helen woolley whose family came from a unitarian tradition at the same time he became actively involved in the lancashire public school association lpsa founded in 1847 which was dominated by unitarians by 1852 winterbottom formed part of a delegation of the national public school association npa to present a draft bill to lord john russell at 10 downing street for the establishment of nondenominational free schools in england and wales ” he remained active within the npa listed as secretary to the general committee on education in 1857 but by 1862 the npa had achieved some of what it had set out to achieve and was dissolved winterbottom went on to work with the newly formed manchester educational aid society campaigning for compulsory primary education he spent the rest of his life actively involved in improving child welfare creating new schools and changing legislation to protect children by 1851 winterbottom had a successful career working at henry bannerman sons living in a prosperous neighbourhood in the northwest of manchester he had been gaining experience in working the machinery needed to'
|
-| 14 | - 'general anesthesia were enough to anesthetise the fetus all fetuses would be born sleepy after a cesarean section performed in general anesthesia which is not the case dr carlo v bellieni also agrees that the anesthesia that women receive for fetal surgery is not sufficient to anesthetize the fetus in 1985 questions about fetal pain were raised during congressional hearings concerning the silent screamin 2013 during the 113th congress representative trent franks introduced a bill called the paincapable unborn child protection act hr 1797 it passed in the house on june 18 2013 and was received in the us senate read twice and referred to the judiciary committeein 2004 during the 108th congress senator sam brownback introduced a bill called the unborn child pain awareness act for the stated purpose of ensuring that women seeking an abortion are fully informed regarding the pain experienced by their unborn child which was read twice and referred to committee subsequently 25 states have examined similar legislation related to fetal pain andor fetal anesthesia and in 2010 nebraska banned abortions after 20 weeks on the basis of fetal pain eight states – arkansas georgia louisiana minnesota oklahoma alaska south dakota and texas – have passed laws which introduced information on fetal pain in their stateissued abortioncounseling literature which one opponent of these laws the guttmacher institute founded by planned parenthood has called generally irrelevant and not in line with the current medical literature arthur caplan director of the center for bioethics at the university of pennsylvania said laws such as these reduce the process of informed consent to the reading of a fixed script created and mandated by politicians not doctors pain in babies prenatal development texas senate bill 5'
- 'somitogenesis is the process by which somites form somites are bilaterally paired blocks of paraxial mesoderm that form along the anteriorposterior axis of the developing embryo in segmented animals in vertebrates somites give rise to skeletal muscle cartilage tendons endothelium and dermis in somitogenesis somites form from the paraxial mesoderm a particular region of mesoderm in the neurulating embryo this tissue undergoes convergent extension as the primitive streak regresses or as the embryo gastrulates the notochord extends from the base of the head to the tail with it extend thick bands of paraxial mesodermas the primitive streak continues to regress somites form from the paraxial mesoderm by budding off rostrally as somitomeres or whorls of paraxial mesoderm cells compact and separate into discrete bodies the periodic nature of these splitting events has led many to say to that somitogenesis occurs via a clockwavefront model in which waves of developmental signals cause the periodic formation of new somites these immature somites then are compacted into an outer layer the epithelium and an inner mass the mesenchyme the somites themselves are specified according to their location as the segmental paraxial mesoderm from which they form it itself determined by position along the anteriorposterior axis before somitogenesis the cells within each somite are specified based on their location within the somite in addition they retain the ability to become any kind of somitederived structure until relatively late in the process of somitogenesis once the cells of the presomitic mesoderm are in place following cell migration during gastrulation oscillatory expression of many genes begins in these cells as if regulated by a developmental clock as mentioned previously this has led many to conclude that somitogenesis is coordinated by a clock and wave mechanism in technical terms this means that somitogenesis occurs due to the largely cellautonomous oscillations of a network of genes and gene products which causes cells to oscillate between a permissive and a nonpermissive state in a consistently timedfashion like a clock these genes include members of the fgf family wnt and notch pathway as well as targets of these pathways the wavefront progress slowly in a posteriortoanterior direction as the wavefront'
- 'the myometrium once these cells penetrate through the first few layers of cells of the decidua they lose their ability to proliferate and become invasive this departure from the cell cycle seems to be due to factors such as tgfβ and decorin although these invasive interstitial cytotrophoblasts can no longer divide they retain their ability to form syncytia multinucleated giant cells small syncytia are found in the placental bed and myometrium as a result of the fusion of interstitial cytotrophoblastsinterstitial cytotrophoblasts may also transform into endovascular cytotrophoblasts the primary function of the endovascular cytotrophoblast is to penetrate maternal spiral arteries and route the blood flow through the placenta for the growing embryo to use they arise from interstitial cytotrophoblasts from the process of phenocopying this changes the phenotype of these cells from epithelial to endothelial endovascular cytotrophoblasts like their interstitial predecessor are nonproliferating and invasive proper cytotrophoblast function is essential in the implantation of a blastocyst after hatching the embryonic pole of the blastocyst faces the uterine endometrium once they make contact the trophoblast begins to rapidly proliferate the cytotrophoblast secretes proteolytic enzymes to break down the extracellular matrix between the endometrial cells to allow fingerlike projections of trophoblast to penetrate through projections of cytotrophoblast and syncytiotrophoblast pull the embryo into the endometrium until it is fully covered by endometrial epithelium save for the coagulation plug the most common associated disorder is preeclampsia affecting approximately 7 of all births it is characterized by a failure of the cytotrophoblast to invade the uterus and its vasculature specifically the spiral arteries that the endovascular cytotrophoblast should invade the result of this is decreased blood flow to the fetus which may cause intrauterine growth restriction clinical symptoms of preeclampsia in the mother are most commonly high blood pressure proteinuria and edema conversely if there is too much invasion of uterine tissue by the trophoblast then'
|
-| 11 | - 'the chest wall this is a noninvasive highly accurate and quick assessment of the overall function of the heart tte utilizes several windows to image the heart from different perspectives each window has advantages and disadvantages for viewing specific structures within the heart and typically numerous windows are utilized within the same study to fully assess the heart parasternal long and parasternal short axis windows are taken next to the sternum the apical twothreefour chamber windows are taken from the apex of the heart lower left side and the subcostal window is taken from underneath the edge of the last rib tte utilizes one m mode two and threedimensional ultrasound time is implicit and not included from the different windows these can be combined with pulse wave or continuous wave doppler to visualize the velocity of blood flow and structure movements images can be enhanced with contrast that are typically some sort of micro bubble suspension that reflect the ultrasound waves a transesophageal echocardiogram is an alternative way to perform an echocardiogram a specialized probe containing an ultrasound transducer at its tip is passed into the patients esophagus via the mouth allowing image and doppler evaluation from a location directly behind the heart it is most often used when transthoracic images are suboptimal and when a clearer and more precise image is needed for assessment this test is performed in the presence of a cardiologist anesthesiologist registered nurse and ultrasound technologist conscious sedation andor localized numbing medication may be used to make the patient more comfortable during the procedure tee unlike tte does not have discrete windows to view the heart the entire esophagus and stomach can be utilized and the probe advanced or removed along this dimension to alter the perspective on the heart most probes include the ability to deflect the tip of the probe in one or two dimensions to further refine the perspective of the heart additionally the ultrasound crystal is often a twodimension crystal and the ultrasound plane being used can be rotated electronically to permit an additional dimension to optimize views of the heart structures often movement in all of these dimensions is needed tee can be used as standalone procedures or incorporated into catheter or surgicalbased procedures for example during a valve replacement surgery the tee can be used to assess the valve function immediately before repairreplacement and immediately after this permits revising the valve midsurgery if needed to improve outcomes of the surgery a stress echocardiogram also known as a stress echo uses ultrasound imaging of the heart to'
- 'and arms within the cranium the two vertebral arteries fuse into the basilar artery posterior inferior cerebellar artery pica basilar artery supplies the midbrain cerebellum and usually branches into the posterior cerebral artery anterior inferior cerebellar artery aica pontine branches superior cerebellar artery sca posterior cerebral artery pca posterior communicating artery the venous drainage of the cerebrum can be separated into two subdivisions superficial and deep the superficial systemthe superficial system is composed of dural venous sinuses sinuses channels within the dura mater the dural sinuses are therefore located on the surface of the cerebrum the most prominent of these sinuses is the superior sagittal sinus which is located in the sagittal plane under the midline of the cerebral vault posteriorly and inferiorly to the confluence of sinuses where the superficial drainage joins with the sinus that primarily drains the deep venous system from here two transverse sinuses bifurcate and travel laterally and inferiorly in an sshaped curve that forms the sigmoid sinuses which go on to form the two jugular veins in the neck the jugular veins parallel the upward course of the carotid arteries and drain blood into the superior vena cava the veins puncture the relevant dural sinus piercing the arachnoid and dura mater as bridging veins that drain their contents into the sinus the deep venous systemthe deep venous system is primarily composed of traditional veins inside the deep structures of the brain which join behind the midbrain to form the great cerebral vein vein of galen this vein merges with the inferior sagittal sinus to form the straight sinus which then joins the superficial venous system mentioned above at the confluence of sinuses cerebral blood flow cbf is the blood supply to the brain in a given period of time in an adult cbf is typically 750 millilitres per minute or 15 of the cardiac output this equates to an average perfusion of 50 to 54 millilitres of blood per 100 grams of brain tissue per minute cbf is tightly regulated to meet the brains metabolic demands too much blood a clinical condition of a normal homeostatic response of hyperemia can raise intracranial pressure icp which can compress and damage delicate brain tissue too little blood flow ischemia results if blood flow to the brain is below 18 to 20 ml per 100 g per minute and tissue death occurs if flow dips below 8 to'
- '##ie b infection it is mostly unnecessary for treatment purposes to diagnose which virus is causing the symptoms in question though it may be epidemiologically useful coxsackie b infections usually do not cause serious disease although for newborns in the first 1 – 2 weeks of life coxsackie b infections can easily be fatal the pancreas is a frequent target which can cause pancreatitiscoxsackie b3 cb3 infections are the most common enterovirus cause of myocarditis and sudden cardiac death cb3 infection causes ion channel pathology in the heart leading to ventricular arrhythmia studies in mice suggest that cb3 enters cells by means of tolllike receptor 4 both cb3 and cb4 exploit cellular autophagy to promote replication the b4 coxsackie viruses cb4 serotype was suggested to be a possible cause of diabetes mellitus type 1 t1d an autoimmune response to coxsackie virus b infection upon the islets of langerhans may be a cause of t1dother research implicates strains b1 a4 a2 and a16 in the destruction of beta cells with some suggestion that strains b3 and b6 may have protective effects via immunological crossprotection as of 2008 there is no wellaccepted treatment for the coxsackie b group of viruses palliative care is available however and patients with chest pain or stiffness of the neck should be examined for signs of cardiac or central nervous system involvement respectively some measure of prevention can usually be achieved by basic sanitation on the part of foodservice workers though the viruses are highly contagious care should be taken in washing ones hands and in cleaning the body after swimming in the event of coxsackieinduced myocarditis or pericarditis antiinflammatories can be given to reduce damage to the heart muscle enteroviruses are usually only capable of acute infections that are rapidly cleared by the adaptive immune response however mutations which enterovirus b serotypes such as coxsackievirus b and echovirus acquire in the host during the acute 
phase can transform these viruses into the noncytolytic form also known as noncytopathic or defective enterovirus this form is a mutated quasispecies of enterovirus which is capable of causing persistent infection in human tissues and such infections have been found in the pancreas in type 1 diabetes in chronic myocarditis and dilated cardiomyopathy in valvular'
|
-| 41 | - 'survey placename datathe ons has produced census results from urban areas since 1951 since 1981 based upon the extent of irreversible urban development indicated on ordnance survey maps the definition is an extent of at least 20 ha and at least 1500 census residents separate areas are linked if less than 200 m 220 yd apart included are transportation features the uk has five urban areas with a population over a million and a further sixty nine with a population over one hundred thousand australia the australian bureau of statistics refers to urban areas as urban centres which it generally defines as population clusters of 1000 or more people australia is one of the most urbanised countries in the world with more than 50 of the population residing in australias three biggest urban centres new zealand statistics new zealand defines urban areas in new zealand which are independent of any administrative subdivisions and have no legal basis there are four classes of urban area major urban areas population 100000 large urban areas population 30000 – 99999 medium urban areas population 10000 – 29999 and small urban areas population 1000 – 9999 as of 2021 there are 7 major urban areas 13 large urban areas 22 medium urban areas and 136 small urban areas urban areas are reclassified after each new zealand census so population changes between censuses does not change an urban areas classification canada according to statistics canada an urban area in canada is an area with a population of at least 1000 people where the density is no fewer than 400 persons per square kilometre 1000sq mi if two or more urban areas are within 2 km 12 mi of each other by road they are merged into a single urban area provided they do not cross census metropolitan area or census agglomeration boundariesin the canada 2011 census statistics canada redesignated urban areas with the new term population centre the new term was chosen in order to better reflect the fact that urban vs rural is 
not a strict division but rather a continuum within which several distinct settlement patterns may exist for example a community may fit a strictly statistical definition of an urban area but may not be commonly thought of as urban because it has a smaller population or functions socially and economically as a suburb of another urban area rather than as a selfcontained urban entity or is geographically remote from other urban communities accordingly the new definition set out three distinct types of population centres small population 1000 to 29999 medium population 30000 to 99999 and large population 100000 or greater despite the change in terminology however the demographic definition of a population centre remains unchanged from that of an urban area a population of at least 1000 people where the density is no fewer than 400 persons per km2 mexico mexico'
- 'neighbourhoods green is an english partnership initiative which works with social landlords and housing associations to highlight the importance of open and green space for residents and raise the overall quality of design and management with these groups the partnership was established in 2003 when peabody trust and notting hill housing group held a conference which identified the need to raise the profile of the green and open spaces owned and managed by social landlords the scheme attracted praise from the then minister for parks and green spaces yvette coopersince 2003 the partnership has expanded to include national housing federation groundwork the wildlife trusts landscape institute green flag award royal horticultural society natural england and cabe it is overseen by a steering group which includes representatives from circle housing group great places housing group helena homes london borough of hammersmith fulham medina housing new charter housing trust notting hill housing peabody trust places for people regenda group and wakefield district housing neighbourhoods green has three main areas of emphasis it produces best practice guidance highlighting the contribution parks gardens and play areas make to the quality of life for residents – including the mitigation of climate change promotion of biodiversity and aesthetic qualities it also generates a number of case studies from housing associations and community groups and offers training for landlords residents and partners on areas such as playspace green infrastructure and growing foodin 2011 working in conjunction with university of sheffield and the national housing federation neighbourhoods green produced greener neighbourhoods a best practice guide to managing green space for social housing its ten principles for housing green space were commit to quality involve residents know the bigger picture make the best use of funding design for local people develop training and skills maintain high 
standards make places feel safe promote healthy living prepare for climate changeduring 201314 neighbourhoods green will be working with keep britain tidy to support the expansion of the green flag award into the social housing sector'
- 'matrix planning methodology was set in place the ct method principles are the foundation of the design implementation and management of this metropolitan plan'
|
-| 22 | - 'time of concentration is a concept used in hydrology to measure the response of a watershed to a rain event it is defined as the time needed for water to flow from the most remote point in a watershed to the watershed outlet it is a function of the topography geology and land use within the watershed a number of methods can be used to calculate time of concentration including the kirpich 1940 and nrcs 1997 methods time of concentration is useful in predicting flow rates that would result from hypothetical storms which are based on statistically derived return periods through idf curves for many often economic reasons it is important for engineers and hydrologists to be able to accurately predict the response of a watershed to a given rain event this can be important for infrastructure development design of bridges culverts etc and management as well as to assess flood risk such as the arkstormscenario this image shows the basic principle which leads to determination of the time of concentration much like a topographic map showing lines of equal elevation a map with isolines can be constructed to show locations with the same travel time to the watershed outlet in this simplified example the watershed outlet is located at the bottom of the picture with a stream flowing through it moving up the map we can say that rainfall which lands on all of the places along the first yellow line will reach the watershed outlet at exactly the same time this is true for every yellow line with each line further away from the outlet corresponding to a greater travel time for runoff traveling to the outlet furthermore as this image shows the spatial representation of travel time can be transformed into a cumulative distribution plot detailing how travel times are distributed throughout the area of the watershed'
- 'equation d s t d t displaystyle dstdt describes how the soil saturation changes over time the terms on the right hand side describe the rates of rainfall r displaystyle r interception i displaystyle i runoff q displaystyle q evapotranspiration e displaystyle e and leakage l displaystyle l these are typically given in millimeters per day mmd runoff evaporation and leakage are all highly dependent on the soil saturation at a given time in order to solve the equation the rate of evapotranspiration as a function of soil moisture must be known the model generally used to describe it states that above a certain saturation evaporation will only be dependent on climate factors such as available sunlight once below this point soil moisture imposes controls on evapotranspiration and it decreases until the soil reaches the point where the vegetation can no longer extract any more water this soil level is generally referred to as the permanent wilting point use of this term can lead to confusion because many plant species do not actually wilt the damkohler number is a unitless ratio that predicts whether the duration in which a particular nutrient or solute is in specific pool or flux of water will be sufficient time for a specific reaction to occur d a f r a c t t r a n s p o r t t r e a c t i o n displaystyle dafracttransporttreaction where t is the time of either the transport or the reaction transport time can be substituted for t exposure to determine if a reaction can realistically occur depending on during how much of the transport time the reactant will be exposed to the correct conditions to react a damkohler number greater than 1 signifies that the reaction has time to react completely whereas the opposite is true for a damkohler number less than 1 darcys law is an equation that describes the flow of a fluid through a porous medium the law was formulated by henry darcy in the early 1800s when he was charged with the task to bring water through an aquifer to the 
town of dijon france henry conducted various experiments on the flow of water through beds of sand to derive the equation q − k a x f r a c h l displaystyle qkaxfrachl where q is discharge measured in m3sec k is hydraulic conductivity ms a is cross sectional area that the water travels m2 where h is change in height over the gradual distance of the aquifer m where l is the length of the aquifer or distance the water'
- '##s power extended even to the high water mark and into the main streamsin the united states the high water mark is also significant because the united states constitution gives congress the authority to legislate for waterways and the high water mark is used to determine the geographic extent of that authority federal regulations 33 cfr 3283e define the ordinary high water mark ohwm as that line on the shore established by the fluctuations of water and indicated by physical characteristics such as a clear natural line impressed on the bank shelving changes in the character of soil destruction of terrestrial vegetation the presence of litter and debris or other appropriate means that consider the characteristics of the surrounding areas for the purposes of section 404 of the clean water act the ohwm defines the lateral limits of federal jurisdiction over nontidal water bodies in the absence of adjacent wetlands for the purposes of sections 9 and 10 of the rivers and harbors act of 1899 the ohwm defines the lateral limits of federal jurisdiction over traditional navigable waters of the us the ohwm is used by the united states army corps of engineers the united states environmental protection agency and other federal agencies to determine the geographical extent of their regulatory programs likewise many states use similar definitions of the ohwm for the purposes of their own regulatory programs in 2016 the court of appeals of indiana ruled that land below the ohwm as defined by common law along lake michigan is held by the state in trust for public use chart datum mean high water measuring storm surge terrace geology benches left by lakes wash margin'
|
-| 35 | - 'field would be elevated levels of bicarbonate hco−3 sodium and silica ions in the water runoff the breakdown of carbonate minerals caco 3 h 2 co 3 [UNK] − − [UNK] ca 2 2 hco 3 − displaystyle ce caco3 h2co3 ca2 2 hco3 caco 3 [UNK] − − [UNK] ca 2 co 3 2 − displaystyle ce caco3 ca2 co32 the further dissolution of carbonic acid h2co3 and bicarbonate hco−3 produces co2 gas oxidization is also a major contributor to the breakdown of many silicate minerals and formation of secondary minerals diagenesis in the early soil profile oxidation of olivine femgsio4 releases fe mg and si ions the mg is soluble in water and is carried in the runoff but the fe often reacts with oxygen to precipitate fe2o3 hematite the oxidized state of iron oxide sulfur a byproduct of decaying organic material will also react with iron to form pyrite fes2 in reducing environments pyrite dissolution leads to low ph levels due to elevated h ions and further precipitation of fe2o3 ultimately changing the redox conditions of the environment inputs from the biosphere may begin with lichen and other microorganisms that secrete oxalic acid these microorganisms associated with the lichen community or independently inhabiting rocks include a number of bluegreen algae green algae various fungi and numerous bacteria lichen has long been viewed as the pioneers of soil development as the following 1997 isozaki statement suggests the initial conversion of rock into soil is carried on by the pioneer lichens and their successors the mosses in which the hairlike rhizoids assume the role of roots in breaking down the surface into fine dust however lichens are not necessarily the only pioneering organisms nor the earliest form of soil formation as it has been documented that seedbearing plants may occupy an area and colonize quicker than lichen also eolian sedimentation wind generated can produce high rates of sediment accumulation nonetheless lichen can certainly withstand harsher conditions than most 
vascular plants and although they have slower colonization rates do form the dominant group in alpine regions organic acids released from plant roots include acetic acid and citric acid during the decay of organic matter phenolic acids are released from plant matter and humic acid and fulvic acid are released by soil microbes these organic acids speed up chemical weathering by combining with some of the weathering products in a process known'
- 'parent material is the underlying geological material generally bedrock or a superficial or drift deposit in which soil horizons form soils typically inherit a great deal of structure and minerals from their parent material and as such are often classified based upon their contents of consolidated or unconsolidated mineral material that has undergone some degree of physical or chemical weathering and the mode by which the materials were most recently transported parent materials that are predominantly composed of consolidated rock are termed residual parent material the consolidated rocks consist of igneous sedimentary and metamorphic rock etc soil developed in residual parent material is that which forms in consolidated geologic material this parent material is loosely arranged particles are not cemented together and not stratified this parent material is classified by its last means of transport for example material that was transported to a location by glacier then deposited elsewhere by streams is classified as streamtransported parent material or glacial fluvial parent material glacial till morrainal the material dragged with a moving ice sheet because it is not transported with liquid water the material is not sorted by size there are two kinds of glacial till basal till carried at the base of the glacier and laid underneath it this till is typically very compacted and does not allow for quick water infiltration ablation till carried on or in the glacier and is laid down as the glacier melts this till is typically less compacted than basal till glaciolacustrine parent material that is created from the sediments coming into lakes that come from glaciers the lakes are typically ice margin lakes or other types formed from glacial erosion or deposition the bedload of the rivers containing the larger rocks and stones is deposited near the lake edge while the suspended sediments are settle out all over the lake bed glaciofluvial consist of boulders gravel sand 
silt and clay from ice sheets or glaciers they are transported sorted and deposited by streams of water the deposits are formed beside below or downstream from the ice glaciomarine these sediments are created when sediments have been transported to the oceans by glaciers or icebergs they may contain large boulders transported by and dropped from icebergs in the midst of finegrained sediments within water transported parent material there are several important types alluvium parent material transported by streams of which there are three main types floodplains are the parts of river valleys that are covered with water during floods due to their seasonal nature floods create stratified layers in which larger particles tend to settle nearer the channel and smaller particles settle nearer the edges of the flooding area alluvial fans are sedimentary areas formed by narrow valley streams that suddenly drop to lowlands'
- 'uses the physics of ice formation to develop a layeredhybrid material specifically ceramic suspensions are directionally frozen under conditions designed to promote the formation of lamellar ice crystals which expel the ceramic particles as they grow after sublimation of the water this results in a layered homogeneous ceramic scaffold that architecturally is a negative replica of the ice the scaffold can then be filled with a second soft phase so as to create a hard – soft layered composite this strategy is also widely applied to build other kinds of bioinspired materials like extremely strong and tough hydrogels metalceramic and polymerceramic hybrid biomimetic materials with fine lamellar or brickandmortar architectures the brick layer is extremely strong but brittle and the soft mortar layer between the bricks generates limited deformation thereby allowing for the relief of locally high stresses while also providing ductility without too much loss in strength additive manufacturing encompasses a family of technologies that draw on computer designs to build structures layer by layer recently a lot of bioinspired materials with elegant hierarchical motifs have been built with features ranging in size from tens of micrometers to one submicrometer therefore the crack of materials only can happen and propagate on the microscopic scale which wouldnt lead to the fracture of the whole structure however the timeconsuming of manufacturing the hierarchical mechanical materials especially on the nano and microscale limited the further application of this technique in largescale manufacturing layerbylayer deposition is a technique that as suggested by its name consists of a layerbylayer assembly to make multilayered composites like nacre some examples of efforts in this direction include alternating layers of hard and soft components of tinpt with an ion beam system the composites made by this sequential deposition technique do not have a segmented layered microstructure 
thus sequential adsorption has been proposed to overcome this limitation and consists of repeatedly adsorbing electrolytes and rinsing the tablets which results in multilayers thin film deposition focuses on reproducing the crosslamellar microstructure of conch instead of mimicking the layered structure of nacre using microelectro mechanical systems mems among mollusk shells the conch shell has the highest degree of structural organization the mineral aragonite and organic matrix are replaced by polysilicon and photoresist the mems technology repeatedly deposits a thin silicon film the interfaces are etched by reactive ion etching and then filled with photoresist there are three films deposited consecutively although the mems technology is expensive and more timeconsum'
|
-| 1 | - 'aerodynamics is a branch of dynamics concerned with the study of the motion of air it is a subfield of fluid and gas dynamics and the term aerodynamics is often used when referring to fluid dynamics early records of fundamental aerodynamic concepts date back to the work of aristotle and archimedes in the 2nd and 3rd centuries bc but efforts to develop a quantitative theory of airflow did not begin until the 18th century in 1726 isaac newton became one of the first aerodynamicists in the modern sense when he developed a theory of air resistance which was later verified for low flow speeds air resistance experiments were performed by investigators throughout the 18th and 19th centuries aided by the construction of the first wind tunnel in 1871 in his 1738 publication hydrodynamica daniel bernoulli described a fundamental relationship between pressure velocity and density now termed bernoullis principle which provides one method of explaining lift aerodynamics work throughout the 19th century sought to achieve heavierthanair flight george cayley developed the concept of the modern fixedwing aircraft in 1799 and in doing so identified the four fundamental forces of flight lift thrust drag and weight the development of reasonable predictions of the thrust needed to power flight in conjunction with the development of highlift lowdrag airfoils paved the way for the first powered flight on december 17 1903 wilbur and orville wright flew the first successful powered aircraft the flight and the publicity it received led to more organized collaboration between aviators and aerodynamicists leading the way to modern aerodynamics theoretical advances in aerodynamics were made parallel to practical ones the relationship described by bernoulli was found to be valid only for incompressible inviscid flow in 1757 leonhard euler published the euler equations extending bernoullis principle to the compressible flow regime in the early 19th century the development of the 
navierstokes equations extended the euler equations to account for viscous effects during the time of the first flights several investigators developed independent theories connecting flow circulation to lift ludwig prandtl became one of the first people to investigate boundary layers during this time although the modern theory of aerodynamic science did not emerge until the 18th century its foundations began to emerge in ancient times the fundamental aerodynamics continuity assumption has its origins in aristotles treatise on the heavens although archimedes working in the 3rd century bc was the first person to formally assert that a fluid could be treated as a continuum archimedes also introduced the concept that fluid flow was driven by a pressure gradient within the fluid this idea would later prove fundamental to the understanding of fluid flow in 1687 newtons principia presented newtons laws'
- 'the yaw drive is an important component of the horizontal axis wind turbines yaw system to ensure the wind turbine is producing the maximal amount of electric energy at all times the yaw drive is used to keep the rotor facing into the wind as the wind direction changes this only applies for wind turbines with a horizontal axis rotor the wind turbine is said to have a yaw error if the rotor is not aligned to the wind a yaw error implies that a lower share of the energy in the wind will be running through the rotor area the generated energy will be approximately proportional to the cosine of the yaw error when the windmills of the 18th century included the feature of rotor orientation via the rotation of the nacelle an actuation mechanism able to provide that turning moment was necessary initially the windmills used ropes or chains extending from the nacelle to the ground in order to allow the rotation of the nacelle by means of human or animal power another historical innovation was the fantail this device was actually an auxiliary rotor equipped with plurality of blades and located downwind of the main rotor behind the nacelle in a 90° approximately orientation to the main rotor sweep plane in the event of change in wind direction the fantail would rotate thus transmitting its mechanical power through a gearbox and via a gearrimtopinion mesh to the tower of the windmill the effect of the aforementioned transmission was the rotation of the nacelle towards the direction of the wind where the fantail would not face the wind thus stop turning ie the nacelle would stop to its new positionthe modern yaw drives even though electronically controlled and equipped with large electric motors and planetary gearboxes have great similarities to the old windmill concept the main categories of yaw drives are the electric yaw drives commonly used in almost all modern turbines the hydraulic yaw drive hardly ever used anymore on modern wind turbines the gearbox of the yaw drive is 
a very crucial component since it is required to handle very large moments while requiring the minimal amount of maintenance and perform reliably for the whole lifespan of the wind turbine approx 20 years most of the yaw drive gearboxes have input to output ratios in the range of 20001 in order to produce the enormous turning moments required for the rotation of the wind turbine nacelle the gearrim and the pinions of the yaw drives are the components that finally transmit the turning moment from the yaw drives to the tower in order to turn the nacelle of the wind turbine around the tower axis z axis the main characteristics of the gearrim are its'
- 'the development of aerodynamics such as theodore von karman and max munk compressibility is an important factor in aerodynamics at low speeds the compressibility of air is not significant in relation to aircraft design but as the airflow nears and exceeds the speed of sound a host of new aerodynamic effects become important in the design of aircraft these effects often several of them at a time made it very difficult for world war ii era aircraft to reach speeds much beyond 800 kmh 500 mph some of the minor effects include changes to the airflow that lead to problems in control for instance the p38 lightning with its thick highlift wing had a particular problem in highspeed dives that led to a nosedown condition pilots would enter dives and then find that they could no longer control the plane which continued to nose over until it crashed the problem was remedied by adding a dive flap beneath the wing which altered the center of pressure distribution so that the wing would not lose its lifta similar problem affected some models of the supermarine spitfire at high speeds the ailerons could apply more torque than the spitfires thin wings could handle and the entire wing would twist in the opposite direction this meant that the plane would roll in the direction opposite to that which the pilot intended and led to a number of accidents earlier models werent fast enough for this to be a problem and so it wasnt noticed until later model spitfires like the mkix started to appear this was mitigated by adding considerable torsional rigidity to the wings and was wholly cured when the mkxiv was introduced the messerschmitt bf 109 and mitsubishi zero had the exact opposite problem in which the controls became ineffective at higher speeds the pilot simply couldnt move the controls because there was too much airflow over the control surfaces the planes would become difficult to maneuver and at high enough speeds aircraft without this problem could outturn them these problems 
were eventually solved as jet aircraft reached transonic and supersonic speeds german scientists in wwii experimented with swept wings their research was applied on the mig15 and f86 sabre and bombers such as the b47 stratojet used swept wings which delay the onset of shock waves and reduce drag in order to maintain control near and above the speed of sound it is often necessary to use either poweroperated allflying tailplanes stabilators or delta wings fitted with poweroperated elevons power operation prevents aerodynamic forces overriding the pilots control inputs finally another common problem that fits into this category is flutter at some speeds the airflow over the control'
|
+| Label | Examples |
+|:------|:---------|
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| 20 | - '##les approach which combined geography history and the sociological approaches of the annee sociologique many members of which were their colleagues at strasbourg to produce an approach which rejected the predominant emphasis on politics diplomacy and war of many 19th and early 20thcentury historians as spearheaded by historians whom febvre called les sorbonnistes instead they pioneered an approach to a study of longterm historical structures la longue duree over events and political transformations geography material culture and what later annalistes called mentalites or the psychology of the epoch are also characteristic areas of study the goal of the annales was to undo the work of the sorbonnistes to turn french historians away from the narrowly political and diplomatic toward the new vistas in social and economic historycofounder marc bloch 1886 – 1944 was a quintessential modernist who studied at the elite ecole normale superieure and in germany serving as a professor at the university of strasbourg until he was called to the sorbonne in paris in 1936 as professor of economic history blochs interests were highly interdisciplinary influenced by the geography of paul vidal de la blache 1845 – 1918 and the sociology of emile durkheim 1858 – 1917 his own ideas especially those expressed in his masterworks french rural history les caracteres originaux de lhistoire rurale francaise 1931 and feudal society were incorporated by the secondgeneration annalistes led by fernand braudel georges duby a leader of the school wrote that the history he taught relegated the sensational to the sidelines and was reluctant to give a simple accounting of events but strove on the contrary to pose and solve problems and neglecting surface disturbances to observe the long and mediumterm evolution of economy society and civilisationthe annalistes especially lucien febvre advocated a histoire totale or histoire tout court a complete study of a historic problem bloch was shot by the gestapo during the german occupation of france in world war ii for his active membership of the french resistance and febvre carried on the annales approach in the 1940s and 1950s it was during this time that he mentored braudel who would become one of the bestknown exponents of this school braudels work came to define a second era of annales historiography and was very influential throughout the 1960s and 1970s especially for his work on the mediterranean region in the era of philip ii of spain braudel developed the idea often associated with annalistes of different modes of historical time lhistoire quasi immobile the quasi motionless history of historical'<br>- 'is important because the persuasiveness of a source usually depends upon its history primary sources may include cases constitutions statutes administrative regulations and other sources of binding legal authority while secondary legal sources may include books the headnotes of case reports articles and encyclopedias legal writers usually prefer to cite primary sources because only primary sources are authoritative and precedential while secondary sources are only persuasive at best family history a secondary source is a record or statement of an event or circumstance made by a noneyewitness or by someone not closely connected with the event or circumstances recorded or stated verbally either at or sometime after the event or by an eyewitness at a time after the event when the fallibility of memory is an important factor consequently according to this definition a firsthand account written long after the event when the fallibility of memory is an important factor is a secondary source even though it may be the first published description of that event autobiographies an autobiography can be a secondary source in history or the humanities when used for information about topics other than its subject for example many firsthand accounts of events in world war i written in the postwar years were influenced by the then prevailing perception of the war which was significantly different from contemporary opinion original research jules r benjamin a students guide to history 2013 isbn 9781457621444 edward h carr what is history basingstoke palgrave 2001 isbn 9780333977019 wood gray historians handbook a key to the study and writing of history prospect heights il waveland press 1991 ©1964 isbn 9780881336269 derek harland a basic course in genealogy volume two research procedure and evaluation of evidence bookcraft inc 1958 worldcat record richard holmes tommy harpercollins 2004 isbn 9780007137510 martha c howell and walter prevenier from reliable sources an introduction to historical methods 2001 isbn 9780801435737 richard a marius and melvin e page a short guide to writing about history 8th edition 2012 isbn 9780205118601 hayden white metahistory the historical imagination in nineteenthcentury europe baltimore johns hopkins university press 1973 isbn 9780801814693'<br>- 'have a meticulous approach to reconstructing the costumes or material culture of past eras but who are perceived to lack much understanding of the cultural values and historical contexts of the periods in question a college or society of antiquaries was founded in london in c 1586 to debate matters of antiquarian interest members included william camden sir robert cotton john stow william lambarde richard carew and others this body existed until 1604 when it fell under suspicion of being political in its aims and was abolished by king james i papers read at their meetings are preserved in cottons collections and were printed by thomas hearne in 1720 under the title a collection of curious discourses a second edition appearing in 1771 in 1707 a number of english antiquaries began to hold regular meetings for the discussion of their hobby and in 1717 the society of antiquaries was formally reconstituted finally receiving a charter from king george ii in 1751 in 1780 king george iii granted the society apartments in somerset house and in 1874 it moved into its present accommodation in burlington house piccadilly the society was governed by a council of twenty and a president who is ex officio a trustee of the british museum the society of antiquaries of scotland was founded in 1780 and had the management of a large national antiquarian museum in edinburgh the society of antiquaries of newcastle upon tyne the oldest provincial antiquarian society in england was founded in 1813 in ireland a society was founded in 1849 called the kilkenny archaeological society holding its meetings at kilkenny in 1869 its name was changed to the royal historical and archaeological association of ireland and in 1890 to the royal society of antiquaries of ireland its office being transferred to dublin in france the societe des antiquaires de france was formed in 1813 by the reconstruction of the academie celtique which had existed since 1804 the american antiquarian society was founded in 1812 with its headquarters at worcester massachusetts in modern times its library has grown to over 4 million items and as an institution it is internationally recognized as a repository and research library for early pre1876 american printed materials in denmark the kongelige nordiske oldskriftselskab also known as la societe royale des antiquaires du nord or the royal society of northern antiquaries was founded at copenhagen in 1825 in germany the gesamtverein der deutschen geschichts und altertumsvereine was founded in 1852in addition a number of local historical and archaeological societies have adopted the word antiquarian in their titles these have included the cambridge antiquarian society' |
+| 42 | - 'been described as the worlds largest repository of covid19 sequences and by far the worlds largest database of sarscov2 sequences by midapril 2021 gisaids sarscov2 database reached over 1200000 submissions a testament to the hard work of researchers in over 170 different countries only three months later the number of uploaded sarscov2 sequences had doubled again to over 24 million by late 2021 the database contained over 5 million genome sequences as of december 2021 over 6 million sequences had been submitted by april 2022 there were 10 million sequences accumulated and in january 2023 the number had reached 144 millionin january 2020 the sarscov2 genetic sequence data was shared through gisaid throughout the first year of the covid19 pandemic most of the sarscov2 wholegenome sequences that were generated and shared globally were submitted through gisaid when the sarscov2 omicron variant was detected in south africa by quickly uploading the sequence to gisaid the national institute for communicable diseases there was able to learn that botswana and hong kong had also reported cases possessing the same gene sequencein march 2023 gisaid temporarily suspended database access for some scientists removing raw data relevant to investigations of the origins of sarscov2 gisaid stated that they do not delete records from their database but data may become temporarily invisible during updates or corrections availability of the data was restored with an additional restriction that any analysis based thereon would not be shared with the public the board of friends of gisaid consists of peter bogner and two german lawyers who are not involved in the daytoday operations of the organisation scientific advice to the organization is provided by its scientific advisory council including directors of leading public health laboratories such as who collaborating centres for influenza in 2023 gisaids lack of transparency was criticized by some gisaid funders including the european commission and the rockefeller foundation with longterm funding being denied from international federation of pharmaceutical manufacturers and associations ifpma in june 2023 it was reported in vanity fair that bogner had said that gisaid will soon launch an independent compliance board responsible for addressing a wide range of governance matters the telegraph similarly reported that gisaids inhouse counsel was developing new governance processes intended to be transparent and allow for the resolution of scientific disputes without the involvement of bogner the creation of the gisaid database was motivated in part by concerns raised by researchers from developing countries with scientific american noting in 2009 that that a previous datasharing system run by who forced them to give up intellectual'<br>- 'viruses can be named based on the antibodies they react with the use of the antibodies which were once exclusively derived from the serum blood fluid of animals is called serology once an antibody – reaction has taken place in a test other methods are needed to confirm this older methods included complement fixation tests hemagglutination inhibition and virus neutralisation newer methods use enzyme immunoassays eiain the years before pcr was invented immunofluorescence was used to quickly confirm viral infections it is an infectivity assay that is virus species specific because antibodies are used the antibodies are tagged with a dye that is luminescencent and when using an optical microscope with a modified light source infected cells glow in the dark pcr is a mainstay method for detecting viruses in all species including plants and animals it works by detecting traces of virus specific rna or dna it is very sensitive and specific but can be easily compromised by contamination most of the tests used in veterinary virology and medical virology are based on pcr or similar methods such as transcription mediated amplification when a novel virus emerges such as the covid coronavirus a specific test can be devised quickly so long as the viral genome has been sequenced and unique regions of the viral dna or rna identified the invention of microfluidic tests as allowed for most of these tests to be automated despite its specificity and sensitivity pcr has a disadvantage in that it does not differentiate infectious and noninfectious viruses and tests of cure have to be delayed for up to 21 days to allow for residual viral nucleic acid to clear from the site of the infection in laboratories many of the diagnostic test for detecting viruses are nucleic acid amplification methods such as pcr some tests detect the viruses or their components as these include electron microscopy and enzymeimmunoassays the socalled home or selftesting gadgets are usually lateral flow tests which detect the virus using a tagged monoclonal antibody these are also used in agriculture food and environmental sciences counting viruses quantitation has always had an important role in virology and has become central to the control of some infections of humans where the viral load is measured there are two basic methods those that count the fully infective virus particles which are called infectivity assays and those that count all the particles including the defective ones infectivity assays measure the amount concentration of infective viruses in a sample of known volume for host cells plants or cultures of bacterial or animal cells are used laboratory animals such as mice'<br>- 'vpx is a virionassociated protein encoded by human immunodeficiency virus type 2 hiv2 and most simian immunodeficiency virus siv strains but that is absent from hiv1 it is similar in structure to the protein vpr that is carried by siv and hiv2 as well as hiv1 vpx is one of five accessory proteins vif vpx vpr vpu and nef carried by lentiviruses that enhances viral replication by inhibiting host antiviral factorsvpx enhances hiv2 replication in humans by counteracting the host factor samhd1 samhd1 is a host factor found in human myeloid cells such as dendritic cells and macrophages that restricts hiv1 replication by depleting the cytoplasmic pool of deoxynucleoside triphosphates needed for viral dna production samhd1 does not however restrict hiv2 replication in myeloid cells due to the presence of viral vpx vpx counteracts restriction by inducing the ubiquitinproteasomedependent degradation of samhd1 vpxmediated degradation of samhd1 therefore decreases deoxynucleoside triphosphate hydrolysis thereby increasing the availability of dntps for viral reverse transcription in the cytoplasm it has been postulated that samhd1 degradation is required for hiv2 replication because the hiv2 reverse transcriptase rt is less active than the hiv1 rt which would be the reason for the absence of vpx from hiv1 because vpx is required for hiv2 reverse transcription and the early stages of the viral life cycle it is packaged into virions in significant amountsvpx is also involved in the nuclear import of the hiv2siv genomes and associated proteins but the specific mechanisms and interactions are currently unknown although vpr and vpx are similar in size both are 100 amino acids with 2025 sequence similarity and structure both are predicted to have similar tertiary structure with three major helices they serve very different roles in viral replication vpx targets a host restriction factor for proteasomal degradation while vpr arrests the host cell cycle in the g2 phase however they are both involved in the import of the viral preintegration complex into the host nucleus' |
+| 19 | - '##es insulin blood glucose from the portal vein enters liver cells hepatocytes insulin acts on the hepatocytes to stimulate the action of several enzymes including glycogen synthase glucose molecules are added to the chains of glycogen as long as both insulin and glucose remain plentiful in this postprandial or fed state the liver takes in more glucose from the blood than it releases after a meal has been digested and glucose levels begin to fall insulin secretion is reduced and glycogen synthesis stops when it is needed for energy glycogen is broken down and converted again to glucose glycogen phosphorylase is the primary enzyme of glycogen breakdown for the next 8 – 12 hours glucose derived from liver glycogen is the primary source of blood glucose used by the rest of the body for fuel glucagon another hormone produced by the pancreas in many respects serves as a countersignal to insulin in response to insulin levels being below normal when blood levels of glucose begin to fall below the normal range glucagon is secreted in increasing amounts and stimulates both glycogenolysis the breakdown of glycogen and gluconeogenesis the production of glucose from other sources muscle glycogen appears to function as an immediate reserve source of available phosphorylated glucose in the form of glucose1phosphate for muscle cells glycogen contained within skeletal muscle cells are primarily in the form of β particles other cells that contain small amounts use it locally as well as muscle cells lack glucose6phosphatase which is required to pass glucose into the blood the glycogen they store is available solely for internal use and is not shared with other cells this is in contrast to liver cells which on demand readily do break down their stored glycogen into glucose and send it through the blood stream as fuel for other organsskeletal muscle needs atp provides energy for muscle contraction and relaxation in what is known as the sliding filament theory skeletal muscle relies predominantly on glycogenolysis for the first few minutes as it transitions from rest to activity as well as throughout highintensity aerobic activity and all anaerobic activity during anaerobic activity such as weightlifting and isometric exercise the phosphagen system atppcr and muscle glycogen are the only substrates used as they do not require oxygen nor blood flowdifferent bioenergetic systems produce atp at different speeds with atp produced'<br>- 'glycogen storage disease type i gsd i is an inherited disease that prevents the liver from properly breaking down stored glycogen which is necessary to maintain adequate blood sugar levels gsd i is divided into two main types gsd ia and gsd ib which differ in cause presentation and treatment there are also possibly rarer subtypes the translocases for inorganic phosphate gsd ic or glucose gsd id however a recent study suggests that the biochemical assays used to differentiate gsd ic and gsd id from gsd ib are not reliable and are therefore gsd ibgsd ia is caused by a deficiency in the enzyme glucose6phosphatase gsd ib a deficiency in the transport protein glucose6phosphate translocase because glycogenolysis is the principal metabolic mechanism by which the liver supplies glucose to the body during fasting both deficiencies cause severe hypoglycemia and over time excess glycogen storage in the liver and in some cases in the kidneys because of the glycogen buildup gsd i patients typically present with enlarged livers from nonalcoholic fatty liver disease other functions of the liver and kidneys are initially intact in gsd i but are susceptible to other problems without proper treatment gsd i causes chronic low blood sugar which can lead to excessive lactic acid and abnormally high lipids in the blood and other problems frequent feedings of cornstarch or other carbohydrates are the principal treatment for all forms of gsd i gsd ib also features chronic neutropenia due to a dysfunction in the production of neutrophils in the bone marrow this immunodeficiency if untreated makes gsd ib patients susceptible to infection the principal treatment for this feature of gsd ib is filgrastim however patients often still require treatment for frequent infections and a chronically enlarged spleen is a common side effect gsd ib patients often present with inflammatory bowel diseaseit is the most common of the glycogen storage diseases gsd i has an incidence of approximately 1 in 100000 births in the american population and approximately 1 in 20000 births among ashkenazi jews the disease was named after german doctor edgar von gierke who first described it in 1929 early research into gsd i identified numerous clinical manifestations falsely thought to be primary features of the genetic disorder however continuing research has revealed that these clinical features are the consequences of only one in gsd ia or two in gsd ib'<br>- '##patic arteries and threaded through the gastroduodenal mostly or celiac artery the catheter is fixed in this position and the pump is placed in a subcutaneous pocket finally to confirm adequate placement and hepatic perfusion and to rule out extrahepatic perfusion a dye fluorescein or methylene blue is injected into the pump after the procedure and before starting the hai based treatment a technetium 99mlabeled macroaggregated albumin scan is performed to again confirm adequate hepatic perfusion and no misperfusion outside of the liver the complications of hai therapy can be divided into those related to the surgical placement of the pump technical catheterrelated complications and those related to the chemotherapeutic agents usedrelating to the surgical hai pump placement early postoperative complications consist of arterial injury leading to hepatic artery thrombosis inadequate perfusion of the entire liver due to the inability to identify an accessory hepatic artery extrahepatic perfusion to the stomach or duodenum or hematoma formation in the subcutaneous pump pocket late complications are more common and include inflammation or ulceration of the stomach or duodenum and pump pocket infectionthe most common catheter related complications include displacement of the catheter occlusion of the hepatic artery because of the catheter and catheter thrombosis these catheter related complications dont occur as frequently with increased surgical experience and with improvements in pump designthe most common toxicities caused by the chemotherapeutic agents were gastrointestinal symptoms chemical hepatitis and bone marrow inhibition it is important to note that the most serious and dose limiting complication of hai is hepatobiliary toxicity this occurs more commonly with fudr than any other chemotherapeutic agent patients undergoing hai therapy therefore have regular liver function tests to monitor any damage to the liver as previously mentioned studies have been carried out to come up with treatment algorithms to minimize this serious side effect it has been shown that adding leucovorin and fudr for infusion through the pump not only reduces the biliary toxicity of the drug but also increases the response rate however biliary sclerosis is not seen with hai using 5fu 5fu is associated with an increased risk of myelosuppression logically it would make sense to therefore consider alternating between hai fudr and hai 5fu' |
+| 11 | - 'and arms within the cranium the two vertebral arteries fuse into the basilar artery posterior inferior cerebellar artery pica basilar artery supplies the midbrain cerebellum and usually branches into the posterior cerebral artery anterior inferior cerebellar artery aica pontine branches superior cerebellar artery sca posterior cerebral artery pca posterior communicating artery the venous drainage of the cerebrum can be separated into two subdivisions superficial and deep the superficial systemthe superficial system is composed of dural venous sinuses sinuses channels within the dura mater the dural sinuses are therefore located on the surface of the cerebrum the most prominent of these sinuses is the superior sagittal sinus which is located in the sagittal plane under the midline of the cerebral vault posteriorly and inferiorly to the confluence of sinuses where the superficial drainage joins with the sinus that primarily drains the deep venous system from here two transverse sinuses bifurcate and travel laterally and inferiorly in an sshaped curve that forms the sigmoid sinuses which go on to form the two jugular veins in the neck the jugular veins parallel the upward course of the carotid arteries and drain blood into the superior vena cava the veins puncture the relevant dural sinus piercing the arachnoid and dura mater as bridging veins that drain their contents into the sinus the deep venous systemthe deep venous system is primarily composed of traditional veins inside the deep structures of the brain which join behind the midbrain to form the great cerebral vein vein of galen this vein merges with the inferior sagittal sinus to form the straight sinus which then joins the superficial venous system mentioned above at the confluence of sinuses cerebral blood flow cbf is the blood supply to the brain in a given period of time in an adult cbf is typically 750 millilitres per minute or 15 of the cardiac output this equates to an average perfusion of 50 to 54 millilitres of blood per 100 grams of brain tissue per minute cbf is tightly regulated to meet the brains metabolic demands too much blood a clinical condition of a normal homeostatic response of hyperemia can raise intracranial pressure icp which can compress and damage delicate brain tissue too little blood flow ischemia results if blood flow to the brain is below 18 to 20 ml per 100 g per minute and tissue death occurs if flow dips below 8 to'<br>- '##ie b infection it is mostly unnecessary for treatment purposes to diagnose which virus is causing the symptoms in question though it may be epidemiologically useful coxsackie b infections usually do not cause serious disease although for newborns in the first 1 – 2 weeks of life coxsackie b infections can easily be fatal the pancreas is a frequent target which can cause pancreatitiscoxsackie b3 cb3 infections are the most common enterovirus cause of myocarditis and sudden cardiac death cb3 infection causes ion channel pathology in the heart leading to ventricular arrhythmia studies in mice suggest that cb3 enters cells by means of tolllike receptor 4 both cb3 and cb4 exploit cellular autophagy to promote replication the b4 coxsackie viruses cb4 serotype was suggested to be a possible cause of diabetes mellitus type 1 t1d an autoimmune response to coxsackie virus b infection upon the islets of langerhans may be a cause of t1dother research implicates strains b1 a4 a2 and a16 in the destruction of beta cells with some suggestion that strains b3 and b6 may have protective effects via immunological crossprotection as of 2008 there is no wellaccepted treatment for the coxsackie b group of viruses palliative care is available however and patients with chest pain or stiffness of the neck should be examined for signs of cardiac or central nervous system involvement respectively some measure of prevention can usually be achieved by basic sanitation on the part of foodservice workers though the viruses are highly contagious care should be taken in washing ones hands and in cleaning the body after swimming in the event of coxsackieinduced myocarditis or pericarditis antiinflammatories can be given to reduce damage to the heart muscle enteroviruses are usually only capable of acute infections that are rapidly cleared by the adaptive immune response however mutations which enterovirus b serotypes such as coxsackievirus b and echovirus acquire in the host during the acute phase can transform these viruses into the noncytolytic form also known as noncytopathic or defective enterovirus this form is a mutated quasispecies of enterovirus which is capable of causing persistent infection in human tissues and such infections have been found in the pancreas in type 1 diabetes in chronic myocarditis and dilated cardiomyopathy in valvular'<br>- 'the biomedical research center brc is a research center at qatar university focusing on biomedical research brc was founded in 2014 and partners with the ministry of public health qatar and hamad medical corporation hmc the incidence of genetic disorders in qatar is high with the top three causes of death in the country cancer heart diseases and diabetes the government saw the creation of brc as a strategy for proactively preventing diseases to foster public healthbrc labs received the isoiec 17025 accreditation from the american association for laboratory accreditation a2la the centres research activities focus on the domains of infectious diseases virology and microbiology metabolic disorders and biomedical omics since its inauguration in 2014 brc researchers have published research papers with more than 530 publicationsthe centres research projects include antibiotic profiling of antibiotics resistant microbes in humans and animals one health approach identified for the first time the reason of why some obese people gets type2 diabetes while others do not conducted six research on covid19 to assist in fighting and recovery provided a study on protection against the omicron variant in qatar decoded the genetic code of qatari falcons and various endangered animal species dna sequence of the dugong sea cow study a nanomedicinebased preventative strategy to controlling diseases and improve health brc introduced the use of zebrafish as an animal model in biomedical research at qu and established a facility for it in 2015 the facility is used as a research unit to study many genetic diseases therefore ministry of public health qatar clearly articulated an institutional research policy irp on human use of zebrafish in research and qu circulated it to qu community for implementation the brc facilities include biosafety level 3 bsl3 built by certek usa it is equipped for viral and bacterial research on risk group 3 pathogens sequencing unit to conduct stateoftheart research in genomics mariam al maadeed sidra medical and research center
|
| 17 | <ul><li>'and rainfall there are many ways to date a core once dated it gives valuable information about changes of climate and terrain for example cores in the ocean floor soil and ice have altered the view of the geologic history of the pleistocene entirely reverse circulation drilling is a method in which rock cuttings are continuously extracted through the hollow drill rod and can be sampled for analysis the method may be faster and use less water than core drilling but does not produce cores of relatively undisturbed material so less information on the rock structure can be derived from analysis if compressed air is used for cutting extraction the sample remains uncontaminated is available almost immediately and the method has a low environmental impact core drill ice core integrated ocean drilling program scientific drilling'</li><li>'##cial environments tend to be found in higher latitudes since there is more land at these latitudes in the north most of this effect is seen in the northern hemisphere however in lower latitudes the direct effect of the suns radiation is greater so the freezethaw effect is seen but permafrost is much less widespread altitude – air temperature drops by approximately 1 °c for every 100 m rise above sea level this means that on mountain ranges modern periglacial conditions are found nearer the equator than they are lower down ocean currents – cold surface currents from polar regions reduce mean average temperatures in places where they exert their effect so that ice caps and periglacial conditions will show nearer to the equator as in labrador for example conversely warm surface currents from tropical seas increases mean temperatures the cold conditions are then found only in more northerly places this is apparent in western north america which is affected by the north pacific current in the same way but more markedly the gulf stream affects western europe continentality – away from the moderating influence of the ocean seasonal temperature variation is more extreme and freezethaw goes deeper in the centres of canada and siberia the permafrost typical of periglaciation goes deeper and extends further towards the equator similarly solifluction associated with freezethaw extends into somewhat lower latitudes than on western coasts periglaciation results in a variety of ground conditions but especially those involving irregular mixed deposits created by ice wedges solifluction gelifluction frost creep and rockfalls periglacial environments trend towards stable geomorphologies coombe and head deposits – coombe deposits are chalk deposits found below chalk escarpments in southern england head deposits are more common below outcrops of granite on dartmoor patterned ground – patterned ground occurs where stones form circles polygons and stripes local topography affects which of these are expressed a process called frost heaving is responsible for these features solifluction lobes – solifluction lobes are formed when waterlogged soil slips down a slope due to gravity forming u shaped lobes blockfields or felsenmeer – blockfields are areas covered by large angular blocks traditionally believed to have been created by freezethaw action a good example of a blockfield can be found in the snowdonia national park wales blockfields are common in the unglaciated parts of the appalachian mountains in the northeastern united states such as at the river of rocks or hickory run boulder field lehigh county pennsylvaniaother landforms include bratschen palsa periglacial lake pingo'</li><li>'climate was cooler during the overarching little ice age than it is today ice cores scientists have studied the chemical composition of ice cores long tubes of ice that are drilled from glaciers and ice sheets to learn of past climate conditions tree rings the width of tree rings can be used to reconstruct past climate conditions as trees grow more slowly in cooler temperatures tree ring data from the little ice age seems to prove a reduction in solar activityoverall the evidence suggests that the amount of solar radiation reaching the earths surface was slightly lower during the grindelwald fluctuation and this reduction in solar radiation is thought to have contributed to the expansion of the glaciers human activities such as deforestation and land use changes are known to negatively affect local climate patterns william ruddiman a palaeoclimatologist proposed the hypothesis that human activity has been affecting the earths climate for much longer than previously thought in particular ruddiman has argued that the early adoption of agriculture and landuse practices by human societies beginning around 8000 years ago led to the release of significant amounts of greenhouse gases into the atmosphere which may have contributed to the warming of the earths climateit is difficult to accurately assess the extent of depopulation that occurred during both the 1500s and 1600s as reliable population data from this period is limited however it is known that this period was one of significant upheaval and change with many regions experiencing significant population drops due to wars plagues famines and natural disasters the bubonic plague for instance killed between 75 and 200 million people in europe alone it is also believed that an onset of disease during the little ice age may have led to further depopulationthis decline in population meant that cultivated lands became unkempt allowing for the regrowth of wild plants this is perceived to be the cause for the drop in atmospheric carbon dioxide in the sixteenth century thus exacerbating the extreme cooling period however of the causes depopulation is the least significant in historical records the grindelwald fluctuation is characterised by a further drop in temperatures and more frequent cold spells throughout many parts of the world the more notable records written by a jacobean weather enthusiast in bristol chronicle some of the effects the weather fluctuation had on agriculture and society they specifically discuss food shortages and crop failures taking precedence throughout the area'</li></ul> |
| 14 | <ul><li>'needle aspiration fna biopsy can be fast and least painful a very thin hollow needle and slight suction will be used to remove a small sample from under the nipple using a local anesthetic to numb the skin may not be necessary since a thin needle is used for the biopsy receiving an injection to prevent pain from the biopsy may be more painful than the biopsy itselfsome men develop a condition known as gynecomastia in which the breast tissue under the nipple develops and grows discharge from the nipple can occur the nipple may swell in some men possibly due to increased levels of estrogen changes in appearance may be normal or related to disease inverted nipples – this is normal if the nipples have always been indented inward and can easily point out when touched if the nipples are pointing in and this is new this is an unexpected change skin puckering of the nipple – this can be caused by scar tissue from surgery or an infection often scar tissue forms for no reason most of the time this issue does not need treatment this is an unexpected change this change can be of concern since puckering or retraction of the nipple can indicate an underlying change in breast tissue that may be cancerous the nipple is warm to the touch red or painful – this can be an infection it is rarely due to breast cancer scaly flaking or itchy nipple – this is most often due to eczema or a bacterial or fungal infection this change is not expected flaking scaly or itchy nipples can be a sign of pagets disease thickened skin with large pores – this is called peau dorange because the skin looks like an orange peel an infection in the breast or inflammatory breast cancer can cause this problem this is not an expected change retracted nipples – the nipple was raised above the surface but changes begins to pull inward and does not come out when stimulatedthe average projection and size of human female nipples is slightly more than 3⁄8 inch 95 mm symptoms of breast cancer can often be seen first by changes of the nipple and areola although not all women have the same symptoms and some people do not have any signs or symptoms at all a person may find out they have breast cancer after a routine mammogram warning signs can include new lump in the nipple or breast or armpit thickening or swelling of part of the breast areola or nipple irritation or dimpling of breast skin redness or flaky skin in the nipple area or the breast pulling in of the nipple or pain in the nipple area nipple discharge other than breast milk including blood any change'</li><li>'the mother over the chorion frondosum this part of the endometrium is called the decidua basalis forms the decidual plate the decidual plate is tightly attached to the chorion frondosum and goes on to form the actual placenta endometrium on the opposite side to the decidua basalis is the decidua parietalis this fuses with the chorion laevae thus filling up the uterine cavityin the case of twins dichorionic placentation refers to the presence of two placentas in all dizygotic and some monozygotic twins monochorionic placentation occurs when monozygotic twins develop with only one placenta and bears a higher risk of complications during pregnancy abnormal placentation can lead to an early termination of pregnancy for example in preeclampsia as placentation often results during the evolution of live birth the more than 100 origins of live birth in lizards and snakes squamata have seen close to an equal number of independent origins of placentation this means that the occurrence of placentation in squamata is more frequent than in all other vertebrates combined making them ideal for research on the evolution of placentation and viviparity itself in most squamates two separate placentae form utilising separate embryonic tissue the chorioallantoic and yolksac placentae in species with more complex placentation we see regional specialisation for gas amino acid and lipid transport placentae form following implantation into uterine tissue as seen in mammals and formation is likely facilitated by a plasma membrane transformationmost reptiles exhibit strict epitheliochorial placentation eg pseudemoia entrecasteauxii however at least two examples of endotheliochorial placentation have been identified mabuya sp and trachylepis ivensi unlike eutherian mammals epitheliochorial placentation is not maintained by maternal tissue as embryos do not readily invade tissues outside of the uterus the placenta is an organ that has evolved multiple times independently evolved relatively recently in some lineages and exists in intermediate forms in living species for these reasons it is an outstanding model to study the evolution of complex organs in animals research into the genetic mechanisms that underpin the evolution of the placenta have been conducted in a diversity of animals including reptiles seahorses and mammalsthe genetic processes that support the evolution of the placenta can be best understood by separating those that result'</li><li>'the myometrium once these cells penetrate through the first few layers of cells of the decidua they lose their ability to proliferate and become invasive this departure from the cell cycle seems to be due to factors such as tgfβ and decorin although these invasive interstitial cytotrophoblasts can no longer divide they retain their ability to form syncytia multinucleated giant cells small syncytia are found in the placental bed and myometrium as a result of the fusion of interstitial cytotrophoblastsinterstitial cytotrophoblasts may also transform into endovascular cytotrophoblasts the primary function of the endovascular cytotrophoblast is to penetrate maternal spiral arteries and route the blood flow through the placenta for the growing embryo to use they arise from interstitial cytotrophoblasts from the process of phenocopying this changes the phenotype of these cells from epithelial to endothelial endovascular cytotrophoblasts like their interstitial predecessor are nonproliferating and invasive proper cytotrophoblast function is essential in the implantation of a blastocyst after hatching the embryonic pole of the blastocyst faces the uterine endometrium once they make contact the trophoblast begins to rapidly proliferate the cytotrophoblast secretes proteolytic enzymes to break down the extracellular matrix between the endometrial cells to allow fingerlike projections of trophoblast to penetrate through projections of cytotrophoblast and syncytiotrophoblast pull the embryo into the endometrium until it is fully covered by endometrial epithelium save for the coagulation plug the most common associated disorder is preeclampsia affecting approximately 7 of all births it is characterized by a failure of the cytotrophoblast to invade the uterus and its vasculature specifically the spiral arteries that the endovascular cytotrophoblast should invade the result of this is decreased blood flow to the fetus which may cause intrauterine growth restriction clinical symptoms of preeclampsia in the mother are most commonly high blood pressure proteinuria and edema conversely if there is too much invasion of uterine tissue by the trophoblast then'</li></ul> |
| 36 | <ul><li>'to some decision or course of action socrates great myth illustrates this motif most clearly when the soul is depicted as a charioteer and its horses being led around a heavenly circuit this is the occasion for the first appearance in platos dialogues of the prominent platonic doctrine that life is motion the soul being the principle or source of life is that which moves itself as opposed to inanimate objects that require an external source of motion to move them the view that life is selfmotion and that the soul is a selfmover is used by plato to guarantee the immortality of the soul making this a novel argument for the souls immortality not found in the phaedo plato relies further on the view that the soul is a mind in order to explain how its motions are possible plato combines the view that the soul is a selfmover with the view that the soul is a mind in order to explain how the soul can move things in the first place eg how it can move the body to which it is attached in life souls move things by means of their thoughts in thomas manns novella death in venice the narrators young love tadzio is associated with phaedrus in mary renaults 1953 novel the charioteer a text of phaedrus is passed among the characters gay men during world war ii and the image of the charioteer and his white and black horses recurs as the protagonist struggles to choose between consummated and unconsummated love in a key scene from the film adaptation of maurice students including maurice attend dean cornwalliss translation class in which two undergraduates orally translate into english the text based on phaedrus stephanus 251a 255a – e during which the dean instructs one to omit the reference to the unspeakable vice of the greeks the 2016 film knight of cups by terrence malick is inspired in part by phaedrus in robert m pirsigs fictionalized autobiographical novel zen and the art of motorcycle maintenance pirsig refers to his past self from before undergoing electroconvulsive therapy in the third person and using the name phaedrus intended to reflect his opposition to certain educational and philosophical ideas the character reappears in the followup lila an inquiry into morals in virginia woolfs 1922 novel jacobs room jacob reads phaedrus alone in his room after a visit to the enormous mind as woolf characterizes the british museum jowett translation at standardebooks greek text at perseus plato nichols j h tr and ed phaedrus cornell university press'</li><li>'other lacks so much the betterthe first two of young becker and pikes four phases of written rogerian argument are based on the first two of rapoports three principles of ethical debate the third of rapoports principles — increasing the perceived similarity between self and other — is a principle that young becker and pike considered to be equally as important as the other two but they said it should be an attitude assumed throughout the discourse and is not a phase of writingmaxine hairston in a section on rogerian or nonthreatening argument in her textbook a contemporary rhetoric advised that one shouldnt start writing with a detailed plan in mind but might start by making four lists the others concerns ones own key points anticipated problems and points of agreement or common ground she gave a different version of young becker and pikes four phases which she expanded to five and called elements of the nonthreatening argument a brief and objective statement of the issue a neutrally worded analysis of the others position a neutrally worded analysis of ones own position a statement of the common aspects goals and values that the positions share and a proposal for resolving the issue that shows how both sides may gain she said that the rogerian approach requires calm patience and effort and will work if one is more concerned about increasing understanding and communication than about scoring a triumph in a related article she noted the similarity between rogerian argument and john stuart mills wellknown phrase from on liberty he who knows only his own side of the case knows little of thatrobert keith millers textbook the informed argument first published in 1986 presented five phases adapted from an earlier textbook by richard coe millers phases were an introduction to the problem a summary of views that oppose the writers position a statement of understanding of the region of validity of the opposing views a statement of the writers position a statement of the situations in which the writers position has merit and a statement of the benefits of accepting the writers positionin 1992 rebecca stephens built on the vague and abstract rogerian principles of other rhetoricians to create a set of 23 concrete and detailed questions that she called a rogerianbased heuristic for rhetorical invention intended to help people think in a rogerian way while discovering ideas and arguments for example the first two of her 23 questions are what is the nature of the issue in general terms and she recommended that the answer should itself be stated as a question and whose lives are affected by the issue the last two questions are what would have to happen to eliminate the disagreement among the opposing groups and what are the chances that this will occur lisa'</li><li>'reestablishes equilibrium and health in the collective imaginary which are jeopardized by the repressive aspects of societythe state of political satire in a given society reflects the tolerance or intolerance that characterizes it and the state of civil liberties and human rights under totalitarian regimes any criticism of a political system and especially satire is suppressed a typical example is the soviet union where the dissidents such as aleksandr solzhenitsyn and andrei sakharov were under strong pressure from the government while satire of everyday life in the ussr was allowed the most prominent satirist being arkady raikin political satire existed in the form of anecdotes that made fun of soviet political leaders especially brezhnev famous for his narrowmindedness and love for awards and decorations satire is a diverse genre which is complex to classify and define with a wide range of satiric modes satirical literature can commonly be categorized as either horatian juvenalian or menippean horatian horatian satire named for the roman satirist horace 65 – 8 bce playfully criticizes some social vice through gentle mild and lighthearted humour horace quintus horatius flaccus wrote satires to gently ridicule the dominant opinions and philosophical beliefs of ancient rome and greece rather than writing in harsh or accusing tones he addressed issues with humor and clever mockery horatian satire follows this same pattern of gently ridiculing the absurdities and follies of human beingsit directs wit exaggeration and selfdeprecating humour toward what it identifies as folly rather than evil horatian satires sympathetic tone is common in modern society a horatian satirists goal is to heal the situation with smiles rather than by anger horatian satire is a gentle reminder to take life less seriously and evokes a wry smile juvenalian juvenalian satire named for the writings of the roman satirist juvenal late first century – early second century ad is more contemptuous and abrasive than the horatian juvenal disagreed with the opinions of the public figures and institutions of the republic and actively attacked them through his literature he utilized the satirical tools of exaggeration and parody to make his targets appear monstrous and incompetent juvenals satire follows this same pattern of abrasively ridiculing societal structures juvenal also unlike horace attacked public officials and governmental organizations through his satires regarding their opinions as not just wrong but evil following in this tradition juvenalia'</li></ul> |
| 27 | <ul><li>'rod is so small newtons third law of physics applies for any action there is a reaction when the electrons are pulled across the surface of the rod so too is the rod pulled in the opposite direction the first recorded success of a nanosubmarine was performed by a team of students led by dan peer from tel aviv university in israel this was a continuation to peers work at harvard on nanosubmarines and targeted drug delivery tests have proven successful in delivering drugs to heal mice with ulcerative colitis tests will continue and the team plans to experiment on the human body soon fantastic voyage novel and movie based on the nanosubmarine theme'</li><li>'electronbeaminduced deposition ebid is a process of decomposing gaseous molecules by an electron beam leading to deposition of nonvolatile fragments onto a nearby substrate the electron beam is usually provided by a scanning electron microscope which results in high spatial accuracy potentially below one nanometer and the possibility to produce freestanding threedimensional structures the focused electron beam of a scanning electron microscope sem or scanning transmission electron microscope stem is commonly used another method is ionbeaminduced deposition ibid where a focused ion beam is applied instead precursor materials are typically liquid or solid and gasified prior to deposition usually through vaporization or sublimation and introduced at accurately controlled rate into the highvacuum chamber of the electron microscope alternatively solid precursors can be sublimated by the electron beam itself when deposition occurs at a high temperature or involves corrosive gases a specially designed deposition chamber is used it is isolated from the microscope and the beam is introduced into it through a micrometresized orifice the small orifice size maintains differential pressure in the microscope vacuum and deposition chamber no vacuum such deposition mode has been used for ebid of diamondin the presence of the precursor gas the electron beam is scanned over the substrate resulting in deposition of material the scanning is usually computercontrolled the deposition rate depends on a variety of processing parameters such as the partial precursor pressure substrate temperature electron beam parameters applied current density etc it usually is in the order of 10 nms primary electron energies in sems or stems are usually between 10 and 300 kev where reactions induced by electron impact ie precursor dissociation have a relatively low cross section the majority of decomposition occurs via low energy electron impact either by low energy secondary electrons which cross the substratevacuum interface and contribute to the total current density or inelastically scattered backscattered electrons primary stem electrons can be focused into spots as small as 0045 nm while the smallest structures deposited so far by ebid are point deposits of 07 nm diameter deposits usually have a larger lateral size than the beam spot size the reason are the socalled proximity effects meaning that secondary backscattered and forward scattered if the beam dwells on already deposited material electrons contribute to the deposition as these electrons can leave the substrate up to several microns away from the point of impact of the electron beam depending on its energy material deposition is not necessarily confined to the irradiated spot to overcome this problem compensation algorithms can be applied which is typical for electron beam lithography as of 2008 the range of materials deposited by ebid included al au amor'</li><li>'##onment this presents a challenge in maintaining protein arrays in a stable condition over extended periods of time in situ methods — invented and published by mingyue he and michael taussig in 2001 — involve onchip synthesis of proteins as and when required directly from the dna using cellfree protein expression systems since dna is a highly stable molecule it does not deteriorate over time and is therefore suited to longterm storage this approach is also advantageous in that it circumvents the laborious and often costly processes of separate protein purification and dna cloning since proteins are made and immobilised simultaneously in a single step on the chip surface examples of in situ techniques are pisa protein in situ array nappa nucleic acid programmable protein array and dapa dna array to protein array there are three types of protein microarrays that are currently used to study the biochemical activities of proteins analytical microarrays are also known as capture arrays in this technique a library of antibodies aptamers or affibodies is arrayed on the support surface these are used as capture molecules since each binds specifically to a particular protein the array is probed with a complex protein solution such as a cell lysate analysis of the resulting binding reactions using various detection systems can provide information about expression levels of particular proteins in the sample as well as measurements of binding affinities and specificities this type of microarray is especially useful in comparing protein expression in different solutions for instance the response of the cells to a particular factor can be identified by comparing the lysates of cells treated with specific substances or grown under certain conditions with the lysates of control cells another application is in the identification and profiling of diseased tissues reverse phase protein microarray rppa involve complex samples such as tissue lysates cells are isolated from various tissues of interest and are lysed the lysate is arrayed onto the microarray and probed with antibodies against the target protein of interest these antibodies are typically detected with chemiluminescent fluorescent or colorimetric assays reference peptides are printed on the slides to allow for protein quantification of the sample lysates rpas allow for the determination of the presence of altered proteins or other agents that may be the result of disease specifically posttranslational modifications which are typically altered as a result of disease can be detected using rpas functional protein microarrays also known as target protein arrays are constructed by immobilising large numbers of purified proteins and are used to'</li></ul> |
| 9 | - 'a circular chromosome is a chromosome in bacteria archaea mitochondria and chloroplasts in the form of a molecule of circular dna unlike the linear chromosome of most eukaryotes most prokaryote chromosomes contain a circular dna molecule – there are no free ends to the dna free ends would otherwise create significant challenges to cells with respect to dna replication and stability cells that do contain chromosomes with dna ends or telomeres most eukaryotes have acquired elaborate mechanisms to overcome these challenges however a circular chromosome can provide other challenges for cells after replication the two progeny circular chromosomes can sometimes remain interlinked or tangled and they must be resolved so that each cell inherits one complete copy of the chromosome during cell division the circular bacteria chromosome replication is best understood in the wellstudied bacteria escherichia coli and bacillus subtilis chromosome replication proceeds in three major stages initiation elongation and termination the initiation stage starts with the ordered assembly of initiator proteins at the origin region of the chromosome called oric these assembly stages are regulated to ensure that chromosome replication occurs only once in each cell cycle during the elongation phase of replication the enzymes that were assembled at oric during initiation proceed along each arm replichore of the chromosome in opposite directions away from the oric replicating the dna to create two identical copies this process is known as bidirectional replication the entire assembly of molecules involved in dna replication on each arm is called a replisome at the forefront of the replisome is a dna helicase that unwinds the two strands of dna creating a moving replication fork the two unwound single strands of dna serve as templates for dna polymerase which moves with the helicase together with other proteins to synthesise a complementary copy of each strand in this way two identical copies of the original dna are created eventually the two replication forks moving around the circular chromosome meet in a specific zone of the chromosome approximately opposite oric called the terminus region the elongation enzymes then disassemble and the two daughter chromosomes are resolved before cell division is completed the e coli origin of replication called oric consists of dna sequences that are recognised by the dnaa protein which is highly conserved amongst different bacterial species dnaa binding to the origin initiates the regulated recruitment of other enzymes and proteins that will eventually lead to the establishment of two complete replisomes for bidirectional replicationdna sequence elements within oric that are important for its function include dnaa boxes a 9mer repeat with a highly'
- 'the second step of this process has recently fallen into question for the past few decades the common view was that a trimeric multiheme ctype hao converts hydroxylamine into nitrite in the periplasm with production of four electrons 12 the stream of four electrons is channeled through cytochrome c554 to a membranebound cytochrome c552 two of the electrons are routed back to amo where they are used for the oxidation of ammonia quinol pool the remaining two electrons are used to generate a proton motive force and reduce nadp through reverse electron transportrecent results however show that hao does not produce nitrite as a direct product of catalysis this enzyme instead produces nitric oxide and three electrons nitric oxide can then be oxidized by other enzymes or oxygen to nitrite in this paradigm the electron balance for overall metabolism needs to be reconsidered nitrite produced in the first step of autotrophic nitrification is oxidized to nitrate by nitrite oxidoreductase nxr 2 it is a membraneassociated ironsulfur molybdo protein and is part of an electron transfer chain which channels electrons from nitrite to molecular oxygen the enzymatic mechanisms involved in nitriteoxidizing bacteria are less described than that of ammonium oxidation recent research eg woznica a et al 2013 proposes a new hypothetical model of nob electron transport chain and nxr mechanisms here in contrast to earlier models the nxr would act on the outside of the plasma membrane and directly contribute to a mechanism of proton gradient generation as postulated by spieck and coworkers nevertheless the molecular mechanism of nitrite oxidation is an open question the twostep conversion of ammonia to nitrate observed in ammoniaoxidizing bacteria ammoniaoxidizing archaea and nitriteoxidizing bacteria such as nitrobacter is puzzling to researchers complete nitrification the conversion of ammonia to nitrate in a single step known as comammox has an energy yield ∆g° ′ of −349 kj mol−1 nh3 while the energy yields for the ammoniaoxidation and nitriteoxidation steps of the observed twostep reaction are −275 kj mol−1 nh3 and −74 kj mol−1 no2− respectively these values indicate that it would be energetically favourable for an organism to carry out complete nitrification from ammonia to nitrate comammox rather
- 'young animals and nonnative breeds the clinical signs of disease are caused by an increased vascular permeability and consequent oedema and hypovolemia the symptoms include neurological signs such as tremors and head pressing respiratory signs such as coughing and nasal discharge and systemic signs such as fever and loss of appetite physical examination may reveal petechiae of the mucous membranes tachycardia and muffled heart sounds heartwater can also cause reproductive and gastrointestinal disease it is frequently fatal on post mortem examination a light yellow transudate that coagulates on exposure to air is often found within the thorax pericardium and abdomen most fatal cases have the hydropericardium that gives the disease its common name pulmonary oedema and mucosal congestion are regularly seen along with frothy fluid in the airways and cut surfaces of the lungs to definitively diagnose the disease c ruminantium must be demonstrated either in preparations of the hippocampus under giemsa staining or by histopathology of brain or kidney during the early stages of disease animals may be treated with sulfonamides and tetracyclines in advanced disease prognosis is poor tetracyclines can also be used prophylactically when animals are introduced into an area endemic with heartwater ectoparasiticides used as dips can be used to reduce exposure the animals exposure to bont ticks in areas endemic for heartwater the use of dips against other ticks of domestic animals such as rhipicephalus boophilus and hyalomma species is likely and this will usually contribute to control of vectors of e ruminantium a live blood vaccine is available for protection of young stock but animals may require treatment for the disease after vaccination several experimental vaccines are currently being developed examples include attenuated recombinant and multiepitope dna vaccines depending on the species of the animal the mortality rate of the disease may vary from 5 to 90 mortality 
rates appear to be the highest within the various sheep and goat species but this is not always the case as some sheep species such as the afrikaner have mortality rates only reaching as high as 6 heartwater is notifiable to the world organization for animal health the us department of agriculture believes that an outbreak in the us could cost the livestock industry up to 762 million in losses annually the tick that carries the disease is thought to be capable of being transported by migratory birds from the caribbean to at least florida the'
|
+| 29 | - 'fixed circle of latitude or zonal region if the coriolis parameter is large the effect of the earths rotation on the body is significant since it will need a larger angular frequency to stay in equilibrium with the coriolis forces alternatively if the coriolis parameter is small the effect of the earths rotation is small since only a small fraction of the centripetal force on the body is canceled by the coriolis force thus the magnitude of f displaystyle f strongly affects the relevant dynamics contributing to the bodys motion these considerations are captured in the nondimensionalized rossby number in stability calculations the rate of change of f displaystyle f along the meridional direction becomes significant this is called the rossby parameter and is usually denoted β ∂ f ∂ y displaystyle beta frac partial fpartial y where y displaystyle y is the in the local direction of increasing meridian this parameter becomes important for example in calculations involving rossby waves beta plane earths rotation rossbygravity waves'
- 'of silicic acid to nitrate because larger diatoms that require silicic acid to make their opal silica shells are less prevalent unlike the southern ocean and the north pacific the equatorial pacific experiences temporal silicate availability which leads to large seasonal diatom bloomsthe distribution of trace metals and relative abundance of macronutrients are reflected in the plankton community structure for example the selection of phytoplankton with a high surface area to volume ratio results in hnlc regions being dominated by nano and picoplankton this ratio allows for optimal utilization of available dissolved nutrients larger phytoplankton such as diatoms cannot energetically sustain themselves in these regions common picoplankton within these regions include genera such as prochlorococcus not generally found in the north pacific synechococcus and various eukaryotes grazing protists likely control the abundance and distribution of these small phytoplanktonthe generally lower net primary production in hnlc zones results in lower biological drawdown of atmospheric carbon dioxide and thus these regions are generally considered a net source of carbon dioxide to the atmosphere hnlc areas are of interest to geoengineers and some in the scientific community who believe fertilizing large patches of these waters with iron could potentially lower dissolved carbon dioxide and offset increased anthropogenic carbon emissions analysis of antarctic ice core data over the last million years shows correlation between high levels of dust and low temperature indicating that addition of diffuse ironrich dust to the sea has been a natural amplifier of climate cooling the discovery and naming of the first hnlc region the north pacific was formalized in a seminal paper published in 1988 the study concluded that surface waters of the eastern north pacific are generally dominated by picoplankton despite the relative abundance of macronutrients in other words larger phytoplankton 
such as diatoms which thrive in nutrientrich waters were not found instead the surface waters were replete with smaller pico and nanoplankton based on laboratory nutrient experiments iron was hypothesized to be a key limiting micronutrientthe pacific ocean is the largest and oldest body of water on earth the north pacific is characterized by the general clockwise rotation of the north pacific gyre which is driven by trade winds spatial variations in tradewinds result in cooler air temperatures in the western north pacific and milder air temperatures in the eastern north pacific ie subarctic pacific iron is supplied to the north pacific by dust storms that occur in asia'
- 'atmospheric pressure 101325 pa whereas water has a density of 09998 – 0999863 gcm3 at the same temperature and pressure liquid water is densest essentially 100 gcm3 at 4 °c and begins to lose its density as the water molecules begin to form the hexagonal crystals of ice as the freezing point is reached this is due to hydrogen bonding dominating the intermolecular forces which results in a packing of molecules less compact in the solid density of ice increases slightly with decreasing temperature and has a value of 09340 gcm3 at −180 °c 93 kwhen water freezes it increases in volume about 9 for fresh water the effect of expansion during freezing can be dramatic and ice expansion is a basic cause of freezethaw weathering of rock in nature and damage to building foundations and roadways from frost heaving it is also a common cause of the flooding of houses when water pipes burst due to the pressure of expanding water when it freezes the result of this process is that ice in its most common form floats on liquid water which is an important feature in earths biosphere it has been argued that without this property natural bodies of water would freeze in some cases permanently from the bottom up resulting in a loss of bottomdependent animal and plant life in fresh and sea water sufficiently thin ice sheets allow light to pass through while protecting the underside from shortterm weather extremes such as wind chill this creates a sheltered environment for bacterial and algal colonies when sea water freezes the ice is riddled with brinefilled channels which sustain sympagic organisms such as bacteria algae copepods and annelids which in turn provide food for animals such as krill and specialised fish like the bald notothen fed upon in turn by larger animals such as emperor penguins and minke whaleswhen ice melts it absorbs as much energy as it would take to heat an equivalent mass of water by 80 °c during the melting process the temperature remains constant at 0 °c while 
melting any energy added breaks the hydrogen bonds between ice water molecules energy becomes available to increase the thermal energy temperature only after enough hydrogen bonds are broken that the ice can be considered liquid water the amount of energy consumed in breaking hydrogen bonds in the transition from ice to water is known as the heat of fusion as with water ice absorbs light at the red end of the spectrum preferentially as the result of an overtone of an oxygen – hydrogen o – h bond stretch compared with water this absorption is shifted toward slightly lower energies thus ice appears blue with'
|
+| 13 | - 'has offered artworks in the form of graphics downloadable to the home personal computer – for example by peter halley the thing has enabled a diverse group of artists critics curators and activists to use the internet in its early stages at its core the thing is a social network made up of individuals from diverse backgrounds with a wide range of expert knowledge from this social hub the thing has built an array of programs and initiatives in both technological and cultural networks during its first five years tt became widely recognized as one of the founding and leading online centers for new media culture its activities include hosting artists projects and mailing lists as well as publishing cultural criticism the thing has also organized many public events and symposia on such topics as the state of new media arts the preservation of online privacy artistic innovations in robotics and the possibilities of community empowerment through wireless technologies in 1997 thingnet communications llc an internet service provider isp was incorporated by wolfgang staehle gisela ehrenfried and max kossatz the isp was to provide a financial backbone for the thing inc a 501 c 3 non profit organization thingnet has hosted arts and activist groups and publications including ps1 contemporary art center artforum mabou mines willoughby sharp gallery zingmagazine journal of contemporary art rtmark and tenantnet among many others artists and projects associated with thingnet have included sawad brooks heath bunting cercle ramo nash vuk cosic ricardo dominguez ursula endlicher etoy gh hovagimyan jerome joy john klima jenny marketou mariko mori olivier mosset prema murty mark napier joseph nechvatal phil niblock daniel pflumm francesca da rimini beat streuli and beth stryker the thing amsterdam was founded by walter van der cruijsen the thing basel was founded by barbara strebel and rik gelles the thing berlin was founded by ulf schleth the thing cologne was founded by 
michael krome the thing dusseldorf was founded by jorg sasse the thing frankfurt was founded by andreas kallfelz the thing hamburg 1993 – 94 was founded by hansjoachim lenger the thing hamburg 2006 – 2009 was founded by the local art association the thing hamburg the thing london was founded by andreas ruethi the thing new york was founded by wolfgang staehle the thing stockholm was founded by magnus borg the thing vienna was founded by helmut mark and max kossatz the thing roma was founded by marco deseriis and giuseppe marano'
- 'of using locative media to better understand and connect in their environmentsyzygryd is a collaboration with three other arts organizations interpretive arson false profit labs ardent heavy industries to create a large scale interactive art piece to be unveiled at the 2010 burning man event the first five resident artists alphonzo solorzano gabriel dunne ryan alexander miles stemper and daniel massey moved into the space in july 2009 in 2010 three of these resident artists remained gabriel dunne ryan alexander and daniel massey in 2021 gray area partnered with the human rights foundation to launch the art in protest residency program the program s an opportunity for artists whose art is dedicated to promoting democracy and human rights globally to explore and expand their digital practices the gray area incubator is a peerdriven community of creators developing work at the intersection of art and technology membership is a 6month commitment though many have continued on much longer to develop their works in the incubator artists work in the disciplines of visual media arts creative code virtual augmented reality civic engagement digital activism social entrepreneurship data science sound audio and software hardware gray areas josette melchor was selected as one of the five innovators showcased on fords the edge of progress tourafter the 2016 oakland ghostship warehouse fire gray area raised approximately 13 million from over 12000 donors which it distributed to 390 applicants ranging from deceased victims next of kin displaced residents people injured in the fire as well as people who would not be acknowledged by traditional disaster relief organizations including chosen family within marginalized communities'
- 'nfts being used in the filmindustry include a collection of nftartworks for godzilla vs kong the release of both kevin smiths horrormovie killroy was here and the 2021 film zero contact as nfts in 2021 in april 2021 an nft was released for the score of the movie triumph composed by gregg leonard in november 2021 film director quentin tarantino released seven nfts based on uncut scenes of pulp fiction miramax subsequently filed a lawsuit claiming that their film rights were violated and that the original 1993 contract with tarantino gave them the right to mint nfts in relation to pulp fiction in august 2022 muse released album will of the people as 1000 nfts and it became the first album for which nft sales would qualify for the uk and australian chartsby february 2021 nfts accounted for us25 million of revenue generated through the sale of artwork and songs as nfts on february 28 2021 electronic dance musician 3lau sold a collection of 33 nfts for a total of us117 million to commemorate the threeyear anniversary of his ultraviolet album on march 3 2021 an nft was made to promote the kings of leon album when you see yourself other musicians who have used nfts include american rapper lil pump grimes visual artist shepard fairey in collaboration with record producer mike dean and rapper eminema paper presented at the 40th international conference on information systems in munich in 2019 suggested using nfts as tickets for different types of events this would enable organizers of the respective events or artists performing there to receive royalties on the resale of each ticket other associated files a number of internet memes have been associated with nfts which were minted and sold by their creators or by their subjects examples include doge an image of a shiba inu dog as well as charlie bit my finger nyan cat and disaster girl some virtual worlds often marketed as metaverses have incorporated nfts as a means of trading virtual items and virtual real estate some 
pornographic works have been sold as nfts though hostility from nft marketplaces towards pornographic material has presented significant drawbacks for creators by using nfts people engaged in this area of the entertainmentindustry are able to publish their works without thirdparty platforms being able to delete them the first credited political protest nft destruction of nazi monument symbolizing contemporary lithuania was a video filmed by professor stanislovas tomas on april 8 2019 and minted on march 29 2021 in the video tomas uses a sledgehammer to destroy a statesponsored'
|
+| 7 | - 'lot of solutions available for people with hearing impairments some examples of solutions would be blinking lights on different things like their phones alarms and things that are important to alert them cochlear implants are an option too cochlear implants are surgically placed devices that stimulate the cochlear nerve in order to help the person hear a cochlear implant is used instead of hearing aids in order to help when someone has difficulties understanding speech in a cultural context deaf culture refers to a tightknit cultural group of people whose primary language is signed and who practice social and cultural norms which are distinct from those of the surrounding hearing community this community does not automatically include all those who are clinically or legally deaf nor does it exclude every hearing person according to baker and padden it includes any person who identifies himherself as a member of the deaf community and other members accept that person as a part of the community an example being children of deaf adults with normal hearing ability it includes the set of social beliefs behaviors art literary traditions history values and shared institutions of communities that are influenced by deafness and which use sign languages as the main means of communication members of the deaf community tend to view deafness as a difference in human experience rather than a disability or diseasemany nondisabled people continue to assume that deaf people have no autonomy and fail to provide people with support beyond hearing aids which is something that must be addressed different nongovernmental organizations around the world have created programs towards closing the gap between deaf and nondisabled people in developing countries the quota international organization with headquarters in the united states provided immense educational support in the philippines where it started providing free education to deaf children in the leganes resource center for 
the deaf the sounds seekers british organization also provided support by offering audiology maintenance technology to better assist those who are deaf in hardtoreach places the nippon foundation also supports deaf students at gallaudet university and the national technical institute for the deaf through sponsoring international scholarships programs to encourage students to become future leaders in the deaf community the more aid these organizations give to the deaf people the more opportunities and resources disabled people must speak up about their struggles and goals that they aim to achieve when more people understand how to leverage their privilege for the marginalized groups in the community then we can build a more inclusive and tolerant environment for the generations that are yet to come the first known record of sign language in history comes from platos cratylus written in the fifth century bce in a dialogue on the correctness of names socrates says suppose'
- 'the ear canal external acoustic meatus external auditory meatus eam is a pathway running from the outer ear to the middle ear the adult human ear canal extends from the pinna to the eardrum and is about 25 centimetres 1 in in length and 07 centimetres 03 in in diameter the human ear canal is divided into two parts the elastic cartilage part forms the outer third of the canal its anterior and lower wall are cartilaginous whereas its superior and back wall are fibrous the cartilage is the continuation of the cartilage framework of pinna the cartilaginous portion of the ear canal contains small hairs and specialized sweat glands called apocrine glands which produce cerumen ear wax the bony part forms the inner two thirds the bony part is much shorter in children and is only a ring annulus tympanicus in the newborn the layer of epithelium encompassing the bony portion of the ear canal is much thinner and therefore more sensitive in comparison to the cartilaginous portion size and shape of the canal vary among individuals the canal is approximately 25 centimetres 1 in long and 07 centimetres 028 in in diameter it has a sigmoid form and runs from behind and above downward and forward on the crosssection it is of oval shape these are important factors to consider when fitting earplugs due to its relative exposure to the outside world the ear canal is susceptible to diseases and other disorders some disorders include atresia of the ear canal cerumen impaction bone exposure caused by the wearing away of skin in the canal auditory canal osteoma bony outgrowths of the temporal bone cholesteatoma contact dermatitis of the ear canal fungal infection otomycosis ear mites in animals ear myiasis an extremely rare infestation of maggots foreign body in ear granuloma a scar usually caused by tympanostomy tubes otitis externa swimmers ear bacteriacaused inflammation of the ear canal stenosis a gradual closing of the canal earwax also known as cerumen is a yellowish waxy substance 
secreted in the ear canals it plays an important role in the human ear canal assisting in cleaning and lubrication and also provides some protection from bacteria fungi and insects excess or impacted cerumen can press against the eardrum andor occlude the external auditory canal and impair hearing causing conductive hearing loss if left untreated cerumen impaction can also increase the risk of developing an infection within the ear canal list of specialized glands within the'
- '##anometry and speech audiometry may be helpful testing is performed by an audiologist there is no proven or recommended treatment or cure for snhl management of hearing loss is usually by hearing strategies and hearing aids in cases of profound or total deafness a cochlear implant is a specialised hearing aid that may restore a functional level of hearing snhl is at least partially preventable by avoiding environmental noise ototoxic chemicals and drugs and head trauma and treating or inoculating against certain triggering diseases and conditions like meningitis since the inner ear is not directly accessible to instruments identification is by patient report of the symptoms and audiometric testing of those who present to their doctor with sensorineural hearing loss 90 report having diminished hearing 57 report having a plugged feeling in ear and 49 report having ringing in ear tinnitus about half report vestibular vertigo problemsfor a detailed exposition of symptoms useful for screening a selfassessment questionnaire was developed by the american academy of otolaryngology called the hearing handicap inventory for adults hhia it is a 25question survey of subjective symptoms sensorineural hearing loss may be genetic or acquired ie as a consequence of disease noise trauma etc people may have a hearing loss from birth congenital or the hearing loss may come on later many cases are related to old age agerelated hearing loss can be inherited more than 40 genes have been implicated in the cause of deafness there are 300 syndromes with related hearing loss and each syndrome may have causative genesrecessive dominant xlinked or mitochondrial genetic mutations can affect the structure or metabolism of the inner ear some may be single point mutations whereas others are due to chromosomal abnormalities some genetic causes give rise to a late onset hearing loss mitochondrial mutations can cause snhl ie m1555ag which makes the individual sensitive to the ototoxic effects of 
aminoglycoside antibiotics the most common cause of recessive genetic congenital hearing impairment in developed countries is dfnb1 also known as connexin 26 deafness or gjb2related deafness the most common syndromic forms of hearing impairment include dominant stickler syndrome and waardenburg syndrome and recessive pendred syndrome and usher syndrome mitochondrial mutations causing deafness are rare mttl1 mutations cause midd maternally inherited deafness and diabetes and other conditions which may include deafness as part of the picture tmprss3 gene was identified by its association with both congenital and childhood onset autosomal recessive deafness this gene is expressed in fetal co'
|
+| 23 | - 'tolerogenic dendritic cells a k a toldcs tdcs or dcregs are heterogenous pool of dendritic cells with immunosuppressive properties priming immune system into tolerogenic state against various antigens these tolerogenic effects are mostly mediated through regulation of t cells such as inducing t cell anergy t cell apoptosis and induction of tregs toldcs also affect local microenvironment toward tolerogenic state by producing antiinflammatory cytokines toldcs are not lineage specific and their immunesuppressive functions is due to their state of activation andor differentiation generally properties of all types of dendritic cells can be highly affected by local microenvironment such as presence of pro or antiinflammatory cytokines therefore tolerogenic properties of toldcs are often context dependant and can be even eventually overridden into proinflammatory phenotypetolerogenic dcs present a potential strategy for treatment of autoimmune diseases allergic diseases and transplant rejections moreover agspecific tolerance in humans can be induced in vivo via vaccination with agpulsed ex vivo generated tolerogenic dcs for that reason tolerogenic dcs are an important promising therapeutic tool dendritic cells dcs were first discovered and described in 1973 by ralph m steinman they represent a bridge between innate and adaptive immunity and play a key role in the regulation of initiation of immune responses dcs populate almost all body surfaces and they do not kill the pathogens directly they utilize and subsequently degrade antigens to peptides by their proteolytic activity after that they present these peptides in complexes together with their mhc molecules on their cell surface dcs are also the only cell type which can activate naive t cells and induce antigenspecific immune responsestherefore their role is crucially important in balance between tolerance and immune response tolerogenic dcs are essential in maintenance of central and peripheral tolerance 
through induction of t cell clonal deletion t cell anergy and generation and activation of regulatory t treg cells for that reason tolerogenic dcs are possible candidates for specific cellular therapy for treatment of allergic diseases autoimmune diseases eg type 1 diabetes multiple sclerosis rheumatoid arthritis or transplant rejectionstolerogenic dcs often display an immature or semimature phenotype with characteristically low expression of costimulatory eg cd80 cd86 and mhc molecules'
- 'distribution of il2 receptors cd25 cd122 cd132 on different cell populations resulting in different cells that are activated by high and low dose il2 in general high doses are immune suppressive while low doses can stimulate type 1 immunity lowdose il2 has been reported to reduce hepatitis c and b infectionil2 has been used in clinical trials for the treatment of chronic viral infections and as a booster adjuvant for vaccines the use of large doses of il2 given every 6 – 8 weeks in hiv therapy similar to its use in cancer therapy was found to be ineffective in preventing progression to an aids diagnosis in two large clinical trials published in 2009more recently low dose il2 has shown early success in modulating the immune system in disease like type 1 diabetes and vasculitis there are also promising studies looking to use low dose il2 in ischaemic heart disease il2 cannot accomplish its role as a promising immunotherapeutic agent due to significant drawbacks which are listed above some of the issues can be overcome using il2 ic they are composed of il2 and some of its monoclonal antibody mab and can potentiate biologic activity of il2 in vivo the main mechanism of this phenomenon in vivo is due to the prolongation of the cytokine halflife in circulation depending on the clone of il2 mab il2 ic can selectively stimulate either cd25high il2jes61 complexes or cd122high cells il2s4b6 il2s4b6 immune complexes have high stimulatory activity for nk cells and memory cd8 t cells and they could thus replace the conventional il2 in cancer immunotherapy on the other hand il2jes61 highly selectively stimulate regulatory t cells and they could be potentially useful for transplantations and in treatment of autoimmune diseases according to an immunology textbook il2 is particularly important historically as it is the first type i cytokine that was cloned the first type i cytokine for which a receptor component was cloned and was the first shortchain type i cytokine whose 
receptor structure was solved many general principles have been derived from studies of this cytokine including its being the first cytokine demonstrated to act in a growth factor – like fashion through specific highaffinity receptors analogous to the growth factors being studied by endocrinologists and biochemists 712 in the mid1960s studies reported activities in leukocyteconditioned media'
- 'the immune system during puberty and postpuberty than during the rest of a males adult life physical changes during puberty such as thymic involution also affect immunological response ecoimmunology or ecological immunology explores the relationship between the immune system of an organism and its social biotic and abiotic environment more recent ecoimmunological research has focused on host pathogen defences traditionally considered nonimmunological such as pathogen avoidance selfmedication symbiontmediated defenses and fecundity tradeoffs behavioural immunity a phrase coined by mark schaller specifically refers to psychological pathogen avoidance drivers such as disgust aroused by stimuli encountered around pathogeninfected individuals such as the smell of vomit more broadly behavioural ecological immunity has been demonstrated in multiple species for example the monarch butterfly often lays its eggs on certain toxic milkweed species when infected with parasites these toxins reduce parasite growth in the offspring of the infected monarch however when uninfected monarch butterflies are forced to feed only on these toxic plants they suffer a fitness cost as reduced lifespan relative to other uninfected monarch butterflies this indicates that laying eggs on toxic plants is a costly behaviour in monarchs which has probably evolved to reduce the severity of parasite infectionsymbiontmediated defenses are also heritable across host generations despite a nongenetic direct basis for the transmission aphids for example rely on several different symbionts for defense from key parasites and can vertically transmit their symbionts from parent to offspring therefore a symbiont that successfully confers protection from a parasite is more likely to be passed to the host offspring allowing coevolution with parasites attacking the host in a way similar to traditional immunity the preserved immune tissues of extinct species such as the thylacine thylacine cynocephalus can also 
provide insights into their biology the study of the interaction of the immune system with cancer cells can lead to diagnostic tests and therapies with which to find and fight cancer the immunology concerned with physiological reaction characteristic of the immune state this area of the immunology is devoted to the study of immunological aspects of the reproductive process including fetus acceptance the term has also been used by fertility clinics to address fertility problems recurrent miscarriages premature deliveries and dangerous complications such as preeclampsia list of immunologists immunomics international reviews of immunology outline of immunology history of immunology osteoimmunology'
|
+| 25 | - 'then convergence to i − a − 1 b displaystyle ia1b occurs if the magnitudes of all eigenvalues of a displaystyle a are less than 1 every bounded sequence in r n displaystyle mathbb r n has a convergent subsequence by the bolzano – weierstrass theorem if these all have the same limit then the original sequence converges to that limit if it can be shown that all of the subsequences of f displaystyle f have the same limit such as by showing that there is a unique fixed point of the transformation t displaystyle t then the initial sequence must also converge to that limit every bounded monotonic sequence in r n displaystyle mathbb r n converges to a limit this approach can also be applied to sequences that are not monotonic instead it is possible to define a function v r n → r displaystyle vmathbb r nrightarrow mathbb r such that v f n displaystyle vfn is monotonic in n displaystyle n if the v displaystyle v satisfies the conditions to be a lyapunov function then f displaystyle f is convergent lyapunovs theorem is normally stated for ordinary differential equations but can also be applied to sequences of iterates by replacing derivatives with discrete differences the basic requirements on v displaystyle v are that v f n 1 − v f n 0 displaystyle vfn1vfn0 for f n = 0 displaystyle fnneq 0 and v 0 0 displaystyle v00 or v [UNK] x 0 displaystyle dot vx0 for x = 0 displaystyle xneq 0 v x 0 displaystyle vx0 for all x = 0 displaystyle xneq 0 and v 0 0 displaystyle v00 v displaystyle v be radially unbounded so that v x displaystyle vx goes to infinity for any sequence with ‖ x ‖ displaystyle x that tends to infinityin many cases a lyapunov function of the form v x x t a x displaystyle vxxtax can be found although more complex forms are also used for delay differential equations a similar approach applies with lyapunov functions replaced by lyapunov functionals also called lyapunovkrasovskii functionals if the inequality in the condition 1 is weak lasalles invariance principle may be used to consider the convergence of sequences of functions it is necessary to define a distance between functions to replace the euclidean norm these often include convergence in the'<br>- 'this is a list of convexity topics by wikipedia page alpha blending the process of combining a translucent foreground color with a background color thereby producing a new blended color this is a convex combination of two colors allowing for transparency effects in computer graphics barycentric coordinates a coordinate system in which the location of a point of a simplex a triangle tetrahedron etc is specified as the center of mass or barycenter of masses placed at its vertices the coordinates are nonnegative for points in the convex hull borsuks conjecture a conjecture about the number of pieces required to cover a body with a larger diameter solved by hadwiger for the case of smooth convex bodies bond convexity a measure of the nonlinear relationship between price and yield duration of a bond to changes in interest rates the second derivative of the price of the bond with respect to interest rates a basic form of convexity in finance caratheodorys theorem convex hull if a point x of rd lies in the convex hull of a set p there is a subset of p with d1 or fewer points such that x lies in its convex hull choquet theory an area of functional analysis and convex analysis concerned with measures with support on the extreme points of a convex set c roughly speaking all vectors of c should appear as averages of extreme points complex convexity — extends the notion of convexity to complex numbers convex analysis the branch of mathematics devoted to the study of properties of convex functions and convex sets often with applications in convex minimization convex combination a linear combination of points where all coefficients are nonnegative and sum to 1 all convex combinations are within the convex hull of the given points convex and concave a print by escher in which many of the structures features can be seen as both convex shapes and concave impressions convex body a compact convex set in a euclidean space whose interior is nonempty convex conjugate a dual of a real functional in a vector space can be interpreted as an encoding of the convex hull of the functions epigraph in terms of its supporting hyperplanes convex curve a plane curve that lies entirely on one side of each of its supporting lines the interior of a closed convex curve is a convex set convex function a function in which the line segment between any two points on the graph of the function lies above the graph closed convex function a convex function all of whose sublevel sets are closed sets proper convex function a convex function whose effective domain is nonempty and it never attains minus infinity concave function the negative of a convex function convex geometry the branch of geometry studying'<br>- '##regularization is useful as it can often be used in a way such that the various symmetries of the physical system are preserved zetafunction regularization is used in conformal field theory renormalization and in fixing the critical spacetime dimension of string theory zeta function regularization is equivalent to dimensional regularization see4 however the main advantage of the zeta regularization is that it can be used whenever the dimensional regularization fails for example if there are matrices or tensors inside the calculations [UNK] i j k displaystyle epsilon ijk zetafunction regularization gives an analytic structure to any sums over an arithmetic function fn such sums are known as dirichlet series the regularized form f s [UNK] n 1 ∞ f n n − s displaystyle tilde fssum n1infty fnns converts divergences of the sum into simple poles on the complex splane in numerical calculations the zetafunction regularization is inappropriate as it is extremely slow to converge for numerical purposes a more rapidly converging sum is the exponential regularization given by f t [UNK] n 1 ∞ f n e − t n displaystyle ftsum n1infty fnetn this is sometimes called the ztransform of f where z exp−t the analytic structure of the exponential and zetaregularizations are related by expanding the exponential sum as a laurent series f t a n t n a n − 1 t n − 1 [UNK] displaystyle ftfrac antnfrac an1tn1cdots one finds that the zetaseries has the structure f s a n s − n [UNK] displaystyle tilde fsfrac ansncdots the structure of the exponential and zetaregulators are related by means of the mellin transform the one may be converted to the other by making use of the integral representation of the gamma function γ s [UNK] 0 ∞ t s − 1 e − t d t displaystyle gamma sint 0infty ts1etdt which leads to the identity γ s f s [UNK] 0 ∞ t s − 1 f t d t displaystyle gamma stilde fsint 0infty ts1ftdt relating the exponential and zetaregulators and converting poles in the splane to divergent terms in the laurent series the sum f s [UNK] n a n e − s ω n displaystyle fssum nanesomega n is sometimes called a heat kernel or a heatkernel regularized sum this name stems from the idea that the ω n' |
+| 37 | - '##dicative adjective must also be connected by a copula some theories of syntax adopt a subjectpredicate distinction for instance a textbook phrase structure grammar typically divides an english declarative sentence s into a noun phrase np and verb phrase vp the subject np is shown in green and the predicate vp in blue languages with more flexible word order often called nonconfigurational languages are often also treated differently in phrase structure approaches on the other hand dependency grammar rejects the binary subjectpredicate division and places the finite verb as the root of the sentence the matrix predicate is marked in blue and its two arguments are in green while the predicate cannot be construed as a constituent in the formal sense it is a catena barring a discontinuity predicates and their arguments are always catenae in dependency structures some theories of grammar accept both a binary division of sentences into subject and predicate while also giving the head of the predicate a special status in such contexts the term predicator is used to refer to that head there are cases in which the semantic predicand has a syntactic function other than subject this happens in raising constructions such as the following here you is the object of the make verb phrase the head of the main clause but it is also the predicand of the subordinate think clause which has no subject 329 – 335 the term predicate is also used to refer to properties and to words or phrases which denote them this usage of the term comes from the concept of a predicate in logic in logic predicates are symbols which are interpreted as relations or functions over arguments in semantics the denotations of some linguistic expressions are analyzed along similar lines expressions which denote predicates in the semantic sense are sometimes themselves referred to as predication the seminal work of greg carlson distinguishes between types of predicates based on carlsons work predicates have been divided into the following subclasses which roughly pertain to how a predicate relates to its subject stagelevel predicates a stagelevel predicate is true of a temporal stage of its subject for example if john is hungry then he typically will eat some food his state of being hungry therefore lasts a certain amount of time and not his entire lifespan stagelevel predicates can occur in a wide range of grammatical constructions and are probably the most versatile kind of predicate individuallevel predicates an individuallevel predicate is true throughout the existence of an individual for example if john is smart this is a property that he has regardless of which particular point'<br>- 'that there can be exactly the same relation between two completely different objects greek philosophers such as plato and aristotle used a wider notion of analogy they saw analogy as a shared abstraction analogous objects did not share necessarily a relation but also an idea a pattern a regularity an attribute an effect or a philosophy these authors also accepted that comparisons metaphors and images allegories could be used as arguments and sometimes they called them analogies analogies should also make those abstractions easier to understand and give confidence to those who use them james francis ross in portraying analogy 1982 the first substantive examination of the topic since cajetans de nominum analogia demonstrated that analogy is a systematic and universal feature of natural languages with identifiable and lawlike characteristics which explain how the meanings of words in a sentence are interdependent on the contrary ibn taymiyya francis bacon and later john stuart mill argued that analogy is simply a special case of induction in their view analogy is an inductive inference from common known attributes to another probable common attribute which is known about only in the source of the analogy in the following form premises a is c d e f g b is c d e f conclusion b is probably g contemporary cognitive scientists use a wide notion of analogy extensionally close to that of plato and aristotle but framed by gentners 1983 structure mapping theory the same idea of mapping between source and target is used by conceptual metaphor and conceptual blending theorists structure mapping theory concerns both psychology and computer science according to this view analogy depends on the mapping or alignment of the elements of source and target the mapping takes place not only between objects but also between relations of objects and between relations of relations the whole mapping yields the assignment of a predicate or a relation to the target structure mapping theory has been applied and has found considerable confirmation in psychology it has had reasonable success in computer science and artificial intelligence see below some studies extended the approach to specific subjects such as metaphor and similarity logicians analyze how analogical reasoning is used in arguments from analogy an analogy can be stated using is to and as when representing the analogous relationship between two pairs of expressions for example smile is to mouth as wink is to eye in the field of mathematics and logic this can be formalized with colon notation to represent the relationships using single colon for ratio and double colon for equalityin the field of testing the colon notation of ratios and equality is often borrowed so that the example above might be rendered smile mouth wink eye and pronounced the same way an analogy can be the linguistic process that reduces word forms thought to break rules to more common forms that follow these rules for example'<br>- 'this approach can be used to cover a wide variety of semantic phenomena a lambek grammar is an elaboration of this idea that has a concatenation operator for types and several other inference rules mati pentus has shown that these still have the generative capacity of contextfree grammars for the lambek calculus there is a type concatenation operator [UNK] displaystyle star so that prim ⊆ tp prim displaystyle textprimsubseteq texttptextprim and if x y ∈ tp prim displaystyle xyin texttptextprim then x y x [UNK] y x [UNK] y ∈ tp prim displaystyle xyxbackslash yxstar yin texttptextprim the lambek calculus consists of several deduction rules which specify how type inclusion assertions can be derived in the following rules upper case roman letters stand for types upper case greek letters stand for sequences of types a sequent of the form x ← γ displaystyle xleftarrow gamma can be read a string is of type x if it consists of the concatenation of strings of each of the types in γ if a type is interpreted as a set of strings then the ← may be interpreted as [UNK] that is includes as a subset a horizontal line means that the inclusion above the line implies the one below the line the process is begun by the axiom rule which has no antecedents and just says that any type includes itself axiom x ← x displaystyle textaxiomquad over xleftarrow x the cut rule says that inclusions can be composed cut z ← δ x δ ′ x ← γ z ← δ γ δ ′ displaystyle textcutquad zleftarrow delta xdelta qquad xleftarrow gamma over zleftarrow delta gamma delta the other rules come in pairs one pair for each type construction operator each pair consisting of one rule for the operator in the target one in the source of the arrow the name of a rule consists of the operator and an arrow with the operator on the side of the arrow on which it occurs in the conclusion for an example here is a derivation of type raising which says that b a [UNK] b ← a displaystyle babackslash bleftarrow a the names of rules and the substitutions used are to the right b ← b a ← a b ← b a a b a [UNK] b ← a axioms ← z y b x a γ a δ δ ′ [UNK] ← y b x b a γ a displaystyle dfra' |
+| 30 | - 'on february 5 2005 for its operations of a vermiculite mine in libby montana the indictment accused grace of wire fraud knowing endangerment of residents by concealing air monitoring results obstruction of justice by interfering with an environmental protection agency epa investigation violation of the clean air act providing asbestos materials to schools and local residents and conspiracy to release asbestos and cover up health problems from asbestos contamination the department of justice said 1200 residents had developed asbestosrelated diseases and some had died and there could be many more injuries and deathson june 8 2006 a federal judge dismissed the conspiracy charge of knowing endangerment because some of the defendant officials had left the company before the fiveyear statute of limitations had begun to run the wire fraud charge was dropped by prosecutors in march other prosecutions on april 2 1998 three men were indicted in a conspiracy to use homeless men for illegal asbestos removal from an aging wisconsin manufacturing plant thenus attorney general janet reno said knowingly removing asbestos improperly is criminal exploiting the homeless to do this work is cruelon december 12 2004 owners of new york asbestos abatement companies were sentenced to the longest federal jail sentences for environmental crimes in us history after they were convicted on 18 counts of conspiracy to violate the clean air act and the toxic substances control act and actual violations of the clean air act and racketeerinfluenced and corrupt organizations act the crimes involved a 10year scheme to illegally remove asbestos the rico counts included obstruction of justice money laundering mail fraud and bid rigging all related to the asbestos cleanupon january 11 2006 san diego gas electric co two of its employees and a contractor were indicted by a federal grand jury on charges that they violated safety standards while removing asbestos from pipes in lemon grove california the defendants were charged with five counts of conspiracy violating asbestos work practice standards and making false statements'<br>- 'is standard in medicalbilling terminology especially when billing for a growth whose pathology has yet to be determined epidemiology of cancer list of biological development disorders pleomorphism somatic evolution in cancer'<br>- 'atm these epigenetic defects occurred in various cancers including breast ovarian colorectal and head and neck cancers two or three deficiencies in expression of ercc1 xpf or pms2 occur simultaneously in the majority of the 49 colon cancers evaluated by facista et al epigenetic alterations causing reduced expression of dna repair genes is shown in a central box at the third level from the top of the figure in this section and the consequent dna repair deficiency is shown at the fourth level when expression of dna repair genes is reduced dna damages accumulate in cells at a higher than normal level and these excess damages cause increased frequencies of mutation or epimutation mutation rates strongly increase in cells defective in dna mismatch repair or in homologous recombinational repair hrrduring repair of dna double strand breaks or repair of other dna damages incompletely cleared sites of repair can cause epigenetic gene silencing dna repair deficiencies level 4 in the figure cause increased dna damages level 5 in the figure which result in increased somatic mutations and epigenetic alterations level 6 in the figure field defects normalappearing tissue with multiple alterations and discussed in the section below are common precursors to development of the disordered and improperly proliferating clone of tissue in a malignant neoplasm such field defects second level from bottom of figure may have multiple mutations and epigenetic alterations once a cancer is formed it usually has genome instability this instability is likely due to reduced dna repair or excessive dna damage because of such instability the cancer continues to evolve and to produce sub clones for example a renal cancer sampled in 9 areas had 40 ubiquitous mutations demonstrating tumor heterogeneity ie present in all areas of the cancer 59 mutations shared by some but not all areas and 29 private mutations only present in one of the areas of the cancer various other terms have been used to describe this phenomenon including field effect field cancerization and field carcinogenesis the term field cancerization was first used in 1953 to describe an area or field of epithelium that has been preconditioned by at that time largely unknown processes so as to predispose it towards development of cancer since then the terms field cancerization and field defect have been used to describe premalignant tissue in which new cancers are likely to arisefield defects are important in progression to cancer however in most cancer research as pointed out by rubin the vast majority of studies in cancer research has been done on welldefined tumors in vivo or on discrete neoplastic foci in vitro' |
+| 2 | - 'in algebra a resolvent cubic is one of several distinct although related cubic polynomials defined from a monic polynomial of degree four p x x 4 a 3 x 3 a 2 x 2 a 1 x a 0 displaystyle pxx4a3x3a2x2a1xa0 in each case the coefficients of the resolvent cubic can be obtained from the coefficients of px using only sums subtractions and multiplications knowing the roots of the resolvent cubic of px is useful for finding the roots of px itself hence the name “ resolvent cubic ” the polynomial px has a multiple root if and only if its resolvent cubic has a multiple root suppose that the coefficients of px belong to a field k whose characteristic is different from 2 in other words we are working in a field in which 1 1 = 0 whenever roots of px are mentioned they belong to some extension k of k such that px factors into linear factors in kx if k is the field q of rational numbers then k can be the field c of complex numbers or the field q of algebraic numbers in some cases the concept of resolvent cubic is defined only when px is a quartic in depressed form — that is when a3 0 note that the fourth and fifth definitions below also make sense and that the relationship between these resolvent cubics and px are still valid if the characteristic of k is equal to 2 suppose that px is a depressed quartic — that is that a3 0 a possible definition of the resolvent cubic of px is r 1 y 8 y 3 8 a 2 y 2 2 a 2 2 − 8 a 0 y − a 1 2 displaystyle r1y8y38a2y22a228a0ya12 the origin of this definition lies in applying ferraris method to find the roots of px to be more precise p x 0 [UNK] x 4 a 2 x 2 − a 1 x − a 0 [UNK] x 2 a 2 2 2 − a 1 x − a 0 a 2 2 4 displaystyle beginalignedpx0longleftrightarrow x4a2x2a1xa0longleftrightarrow leftx2frac a22right2a1xa0frac a224endaligned add a new unknown y to x2 a22 now you have x 2 a 2 2 y 2 − a 1 x − a 0 a 2 2 4 2 x 2 y a 2 y y 2 2 y x 2 − a 1 x − a'<br>- 'in particular in characteristic zero all complex solutions are sought searching for the real or rational solutions are much more difficult problems that are not considered in this article the set of solutions is not always finite for example the solutions of the system x x − 1 0 x y − 1 0 displaystyle beginalignedxx10xy10endaligned are a point xy 11 and a line x 0 even when the solution set is finite there is in general no closedform expression of the solutions in the case of a single equation this is abel – ruffini theorem the barth surface shown in the figure is the geometric representation of the solutions of a polynomial system reduced to a single equation of degree 6 in 3 variables some of its numerous singular points are visible on the image they are the solutions of a system of 4 equations of degree 5 in 3 variables such an overdetermined system has no solution in general that is if the coefficients are not specific if it has a finite number of solutions this number is at most 53 125 by bezouts theorem however it has been shown that for the case of the singular points of a surface of degree 6 the maximum number of solutions is 65 and is reached by the barth surface a system is overdetermined if the number of equations is higher than the number of variables a system is inconsistent if it has no complex solution or if the coefficients are not complex numbers no solution in an algebraically closed field containing the coefficients by hilberts nullstellensatz this means that 1 is a linear combination with polynomials as coefficients of the first members of the equations most but not all overdetermined systems when constructed with random coefficients are inconsistent for example the system x3 – 1 0 x2 – 1 0 is overdetermined having two equations but only one unknown but it is not inconsistent since it has the solution x 1 a system is underdetermined if the number of equations is lower than the number of the variables an underdetermined system is either inconsistent or has infinitely many complex solutions or solutions in an algebraically closed field that contains the coefficients of the equations this is a nontrivial result of commutative algebra that involves in particular hilberts nullstellensatz and krulls principal ideal theorem a system is zerodimensional if it has a finite number of complex solutions or solutions in an algebraically closed field this terminology comes from the fact that the algebraic variety of the solutions has dimension zero a system with infinitely many solutions is said to be positivedimensional a zerodimensional system with as'<br>- '##gu endif endwhile return factors the correctness of this algorithm relies on the fact that the ring fqxf is a direct product of the fields fqxfi where fi runs on the irreducible factors of f as all these fields have qd elements the component of g in any of these fields is zero with probability q d − 1 2 q d [UNK] 1 2 displaystyle frac qd12qdsim tfrac 12 this implies that the polynomial gcdg u is the product of the factors of g for which the component of g is zero it has been shown that the average number of iterations of the while loop of the algorithm is less than 25 log 2 r displaystyle 25log 2r giving an average number of arithmetic operations in fq which is o d n 2 log r log q displaystyle odn2logrlogq in the typical case where dlogq n this complexity may be reduced to o n 2 log r log q n displaystyle on2logrlogqn by choosing h in the kernel of the linear map v → v q − v mod f displaystyle vto vqvpmod f and replacing the instruction g h q d − 1 2 − 1 mod f displaystyle ghfrac qd121pmod f by g h q − 1 2 − 1 mod f displaystyle ghfrac q121pmod f the proof of validity is the same as above replacing the direct product of the fields fqxfi by the direct product of their subfields with q elements the complexity is decomposed in o n 2 log r log q displaystyle on2logrlogq for the algorithm itself o n 2 log q n displaystyle on2logqn for the computation of the matrix of the linear map which may be already computed in the squarefree factorization and on3 for computing its kernel it may be noted that this algorithm works also if the factors have not the same degree in this case the number r of factors needed for stopping the while loop is found as the dimension of the kernel nevertheless the complexity is slightly better if squarefree factorization is done before using this algorithm as n may decrease with squarefree factorization this reduces the complexity of the critical steps victor shoups algorithm like the algorithms of the preceding section victor shoups algorithm is an equaldegree factorization algorithm unlike them it is a deterministic algorithm however it is less efficient in practice than the algorithms of preceding section for shoups algorithm the input is restricted' |
+| 0 | - 'occupational noise is the amount of acoustic energy received by an employees auditory system when they are working in the industry occupational noise or industrial noise is often a term used in occupational safety and health as sustained exposure can cause permanent hearing damage occupational noise is considered an occupational hazard traditionally linked to loud industries such as shipbuilding mining railroad work welding and construction but can be present in any workplace where hazardous noise is present in the us the national institute for occupational safety and health niosh and the occupational safety and health administration osha work together to provide standards and regulations for noise in the workplacenational institute for occupational safety and health niosh occupational safety and health administration osha mine safety and health administration msha federal railroad administration fra have all set standards on hazardous occupational noise in their respective industries each industry is different as workers tasks and equipment differ but most regulations agree that noise becomes hazardous when it exceeds 85 decibels for an 8hour time exposure typical work shift this relationship between allotted noise level and exposure time is known as an exposure action value eav or permissible exposure limit pel the eav or pel can be seen as equations which manipulate the allotted exposure time according to the intensity of the industrial noise this equation works as an inverse exponential relationship as the industrial noise intensity increases the allotted exposure time to still remain safe decreases thus a worker exposed to a noise level of 100 decibels for 15 minutes would be at the same risk level as a worker exposed to 85 decibels for 8 hours using this mathematical relationship an employer can calculate whether or not their employees are being overexposed to noise when it is suspected that an employee will reach or exceed the pel a monitoring 
program for that employee should be implemented by the employerthe above calculations of pel and eav are based on measurements taken to determine the intensity of that particular industrial noise aweighted measurements are commonly used to determine noise levels that can cause harm to the human ear there are also special exposure meters available that integrate noise over a period of time to give an leq value equivalent sound pressure level defined by standards these numerical values do not fully reflect the real situation for example the osha standard sets the action level 85 dba and the pel 90 dba but in practice the compliance safety and health officer must record the excess of these values with a margin in order to take into account the potential measurement error and instead of pel 90 dba it turns out 92 dba and instead of al 85 dba its 87 dba occupational noise if experienced repeatedly at high intensity for an extended period of time can cause noiseinduce'
- 'the lowest frequency which can be localized depends on the ear distance animals with a greater ear distance can localize lower frequencies than humans can for animals with a smaller ear distance the lowest localizable frequency is higher than for humans if the ears are located at the side of the head interaural level differences appear for higher frequencies and can be evaluated for localization tasks for animals with ears at the top of the head no shadowing by the head will appear and therefore there will be much less interaural level differences which could be evaluated many of these animals can move their ears and these ear movements can be used as a lateral localization cue for many mammals there are also pronounced structures in the pinna near the entry of the ear canal as a consequence directiondependent resonances can appear which could be used as an additional localization cue similar to the localization in the median plane in the human auditory system there are additional localization cues which are also used by animals for sound localization in the median plane elevation of the sound also two detectors can be used which are positioned at different heights in animals however rough elevation information is gained simply by tilting the head provided that the sound lasts long enough to complete the movement this explains the innate behavior of cocking the head to one side when trying to localize a sound precisely to get instantaneous localization in more than two dimensions from timedifference or amplitudedifference cues requires more than two detectors the tiny parasitic fly ormia ochracea has become a model organism in sound localization experiments because of its unique ear the animal is too small for the time difference of sound arriving at the two ears to be calculated in the usual way yet it can determine the direction of sound sources with exquisite precision the tympanic membranes of opposite ears are directly connected mechanically allowing 
resolution of submicrosecond time differences and requiring a new neural coding strategy ho showed that the coupledeardrum system in frogs can produce increased interaural vibration disparities when only small arrival time and sound level differences were available to the animals head efforts to build directional microphones based on the coupledeardrum structure are underway most owls are nocturnal or crepuscular birds of prey because they hunt at night they must rely on nonvisual senses experiments by roger payne have shown that owls are sensitive to the sounds made by their prey not the heat or the smell in fact the sound cues are both necessary and sufficient for localization of mice from a distant location where they are perched for this to work the owls must be able to accurately localize both'
- '##benmelodie in rock music from the late 1960s to the 2000s the timbre of specific sounds is important to a song for example in heavy metal music the sonic impact of the heavily amplified heavily distorted power chord played on electric guitar through very loud guitar amplifiers and rows of speaker cabinets is an essential part of the styles musical identity often listeners can identify an instrument even at different pitches and loudness in different environments and with different players in the case of the clarinet acoustic analysis shows waveforms irregular enough to suggest three instruments rather than one david luce suggests that this implies that certain strong regularities in the acoustic waveform of the above instruments must exist which are invariant with respect to the above variables however robert erickson argues that there are few regularities and they do not explain our powers of recognition and identification he suggests borrowing the concept of subjective constancy from studies of vision and visual perceptionpsychoacoustic experiments from the 1960s onwards tried to elucidate the nature of timbre one method involves playing pairs of sounds to listeners then using a multidimensional scaling algorithm to aggregate their dissimilarity judgments into a timbre space the most consistent outcomes from such experiments are that brightness or spectral energy distribution and the bite or rate and synchronicity and rise time of the attack are important factors the concept of tristimulus originates in the world of color describing the way three primary colors can be mixed together to create a given color by analogy the musical tristimulus measures the mixture of harmonics in a given sound grouped into three sections it is basically a proposal of reducing a huge number of sound partials that can amount to dozens or hundreds in some cases down to only three values the first tristimulus measures the relative weight of the first harmonic the second tristimulus 
measures the relative weight of the second third and fourth harmonics taken together and the third tristimulus measures the relative weight of all the remaining harmonics t 1 a 1 [UNK] h 1 h a h t 2 a 2 a 3 a 4 [UNK] h 1 h a h t 3 [UNK] h 5 h a h [UNK] h 1 h a h displaystyle t1frac a1sum h1hahqquad t2frac a2a3a4sum h1hahqquad t3frac sum h5hahsum h1hah however more evidence studies and applications would be needed regarding this type of representation in order to validate it the term brightness is also used in discussions of sound timbres in a rough analogy'
|
+| 39 | - 'waste heat is heat that is produced by a machine or other process that uses energy as a byproduct of doing work all such processes give off some waste heat as a fundamental result of the laws of thermodynamics waste heat has lower utility or in thermodynamics lexicon a lower exergy or higher entropy than the original energy source sources of waste heat include all manner of human activities natural systems and all organisms for example incandescent light bulbs get hot a refrigerator warms the room air a building gets hot during peak hours an internal combustion engine generates hightemperature exhaust gases and electronic components get warm when in operation instead of being wasted by release into the ambient environment sometimes waste heat or cold can be used by another process such as using hot engine coolant to heat a vehicle or a portion of heat that would otherwise be wasted can be reused in the same process if makeup heat is added to the system as with heat recovery ventilation in a building thermal energy storage which includes technologies both for short and longterm retention of heat or cold can create or improve the utility of waste heat or cold one example is waste heat from air conditioning machinery stored in a buffer tank to aid in night time heating another is seasonal thermal energy storage stes at a foundry in sweden the heat is stored in the bedrock surrounding a cluster of heat exchanger equipped boreholes and is used for space heating in an adjacent factory as needed even months later an example of using stes to use natural waste heat is the drake landing solar community in alberta canada which by using a cluster of boreholes in bedrock for interseasonal heat storage obtains 97 percent of its yearround heat from solar thermal collectors on the garage roofs another stes application is storing winter cold underground for summer air conditioningon a biological scale all organisms reject waste heat as part of their metabolic processes 
and will die if the ambient temperature is too high to allow this anthropogenic waste heat can contribute to the urban heat island effect the biggest point sources of waste heat originate from machines such as electrical generators or industrial processes such as steel or glass production and heat loss through building envelopes the burning of transport fuels is a major contribution to waste heat machines converting energy contained in fuels to mechanical work or electric energy produce heat as a byproduct in the majority of energy applications energy is required in multiple forms these energy forms typically include some combination of heating ventilation and air conditioning mechanical energy and electric power often these additional forms of energy are produced by a heat engine running on a source of hightemperat'
- 'boundaries at the flow extremes for a particular speed which are caused by different phenomena the steepness of the high flow part of a constant speed line is due to the effects of compressibility the position of the other end of the line is located by blade or passage flow separation there is a welldefined lowflow boundary marked on the map as a stall or surge line at which blade stall occurs due to positive incidence separation not marked as such on maps for turbochargers and gas turbine engines is a more gradually approached highflow boundary at which passages choke when the gas velocity reaches the speed of sound this boundary is identified for industrial compressors as overload choke sonic or stonewall the approach to this flow limit is indicated by the speed lines becoming more vertical other areas of the map are regions where fluctuating vane stalling may interact with blade structural modes leading to failure ie rotating stall causing metal fatigue different applications move over their particular map along different paths an example map with no operating lines is shown as a pictorial reference with the stallsurge line on the left and the steepening speed lines towards choke and overload on the right maps have similar features and general shape because they all apply to machines with spinning vanes which use similar principles for pumping a compressible fluid not all machines have stationary vanes centrifugal compressors may have either vaned or vaneless diffusers however a compressor operating as part of a gas turbine or turbocharged engine behaves differently to an industrial compressor because its flow and pressure characteristics have to match those of its driving turbine and other engine components such as power turbine or jet nozzle for a gas turbine and for a turbocharger the engine airflow which depends on engine speed and charge pressure a link between a gas turbine compressor and its engine can be shown with lines of constant engine temperature 
ratio ie the effect of fuellingincreased turbine temperature which raises the running line as the temperature ratio increases one manifestation of different behaviour appears in the choke region on the righthand side of a map it is a noload condition in a gas turbine turbocharger or industrial axial compressor but overload in an industrial centrifugal compressor hiereth et al shows a turbocharger compressor fullload or maximum fuelling curve runs up close to the surge line a gas turbine compressor fullload line also runs close to the surge line the industrial compressor overload is a capacity limit and requires high power levels to pass the high flow rates required excess power is available to inadvertently take the compressor beyond the overload limit to a hazardous condition'
- 'quantity thus it is useful to derive relationships between μ j t displaystyle mu mathrm jt and other more conveniently measured quantities as described below the first step in obtaining these results is to note that the joule – thomson coefficient involves the three variables t p and h a useful result is immediately obtained by applying the cyclic rule in terms of these three variables that rule may be written ∂ t ∂ p h ∂ h ∂ t p ∂ p ∂ h t − 1 displaystyle leftfrac partial tpartial prighthleftfrac partial hpartial trightpleftfrac partial ppartial hrightt1 each of the three partial derivatives in this expression has a specific meaning the first is μ j t displaystyle mu mathrm jt the second is the constant pressure heat capacity c p displaystyle cmathrm p defined by c p ∂ h ∂ t p displaystyle cmathrm p leftfrac partial hpartial trightp and the third is the inverse of the isothermal joule – thomson coefficient μ t displaystyle mu mathrm t defined by μ t ∂ h ∂ p t displaystyle mu mathrm t leftfrac partial hpartial prightt this last quantity is more easily measured than μ j t displaystyle mu mathrm jt thus the expression from the cyclic rule becomes μ j t − μ t c p displaystyle mu mathrm jt frac mu mathrm t cp this equation can be used to obtain joule – thomson coefficients from the more easily measured isothermal joule – thomson coefficient it is used in the following to obtain a mathematical expression for the joule – thomson coefficient in terms of the volumetric properties of a fluid to proceed further the starting point is the fundamental equation of thermodynamics in terms of enthalpy this is d h t d s v d p displaystyle mathrm d htmathrm d svmathrm d p now dividing through by dp while holding temperature constant yields ∂ h ∂ p t t ∂ s ∂ p t v displaystyle leftfrac partial hpartial prightttleftfrac partial spartial prighttv the partial derivative on the left is the isothermal joule – thomson coefficient μ t displaystyle mu mathrm t and the one on the right can 
be expressed in terms of the coefficient of thermal expansion via a maxwell relation the appropriate relation is ∂ s ∂ p t − ∂ v ∂ t p − v α displaystyle leftfrac partial spartial prighttleftfrac partial'
|
+| 21 | - '##agate this type of plant this means that the characteristics of a determined cultivar remain unalteredbulbs can reproduce vegetatively in a number of ways depending on the type of storage organ the plant has bulbs can be evergreen such as clivia agapanthus and some species and varieties of iris and hemerocallis however the majority are deciduous dying down to the storage organ for part of the year this characteristic has been taken advantage of in the commercialization of these plants at the beginning of the rest period the bulbs can be dug out of the ground and prepared for sale as if they remain dry they do not need any nutrition for weeks or monthsbulbous plants are produced on an industrial scale for two main markets cut flowers and dried bulbs the bulbs are produced to satisfy the demand for bulbs for parks gardens and as house plants in addition to providing the bulbs necessary for the production of cut flowers the international trade in cut flowers has a worldwide value of approximately 11000 million euros which gives an idea of the economic importance of this activity the netherlands has been the leader in commercial production since the start of the 16th century both for the dried bulb market and for cut flowers in fact with approximately 30000 hectares dedicated to this activity the production of bulbs in the netherlands represents 65 of global production the netherlands also produces 95 of the international market in bulbs dedicated to the production of cut flowers the united states is the second largest producer followed by france japan italy united kingdom israel brazil and spain international bulb society httpwwwbulbsocietyorgestablished in 1933 this society is an international educational and scientific organization it is a charity dedicated to the dissemination of information regarding the cultivation conservation and botany of all types of bulbous plants their website contains an excellent gallery of high quality photographs of bulbous 
plantsthe pacific bulb society httpwwwpacificbulbsocietyorgorganized in 2002 this society disseminates information and shares experiences regarding the cultivation of ornamental bulbous plants their website contains an exceptional educational resource pacific bulb society wiki with images and information regarding numerous species of bulbous plantsaustralian bulb association httpswebarchiveorgweb20090518011847httpwwwausbulbsorgindexhtmorganized in 2001 it possessed an excellent collection of photographs of bulbous plants on its website list of flower bulbs hessayon dg 1999 the bulb expert london transworld publishers mathew brian 1978 the larger bulbs london bt batsford in association with the royal horticultural society isbn 9780'
- 'soil conservation is the prevention of loss of the topmost layer of the soil from erosion or prevention of reduced fertility caused by over usage acidification salinization or other chemical soil contamination slashandburn and other unsustainable methods of subsistence farming are practiced in some lesser developed areas a consequence of deforestation is typically largescale erosion loss of soil nutrients and sometimes total desertification techniques for improved soil conservation include crop rotation cover crops conservation tillage and planted windbreaks affect both erosion and fertility when plants die they decay and become part of the soil code 330 defines standard methods recommended by the us natural resources conservation service farmers have practiced soil conservation for millennia in europe policies such as the common agricultural policy are targeting the application of best management practices such as reduced tillage winter cover crops plant residues and grass margins in order to better address soil conservation political and economic action is further required to solve the erosion problem a simple governance hurdle concerns how we value the land and this can be changed by cultural adaptation soil carbon is a carbon sink playing a role in climate change mitigation contour ploughing orients furrows following the contour lines of the farmed area furrows move left and right to maintain a constant altitude which reduces runoff contour plowing was practiced by the ancient phoenicians for slopes between two and ten percent contour plowing can increase crop yields from 10 to 50 percent partially as a result of greater soil retention terracing is the practice of creating nearly level areas in a hillside area the terraces form a series of steps each at a higher level than the previous terraces are protected from erosion by other soil barriers terraced farming is more common on small farms keyline design is the enhancement of contour farming where the total 
watershed properties are taken into account in forming the contour lines tree shrubs and groundcover are effective perimeter treatment for soil erosion prevention by impeding surface flows a special form of this perimeter or interrow treatment is the use of a grass way that both channels and dissipates runoff through surface friction impeding surface runoff and encouraging infiltration of the slowed surface water windbreaks are sufficiently dense rows of trees at the windward exposure of an agricultural field subject to wind erosion evergreen species provide yearround protection however as long as foliage is present in the seasons of bare soil surfaces the effect of deciduous trees may be adequate cover crops such as nitrogenfixing legumes white turnips radishes and other species are rotated with cash crops to blanket the soil yearround and act as green manure that rep'
- 'blackberries are also cultivated in the same way in a tropical climate temperatures are prone to soar above all normal levels in such cases foggersmisters are used to reduce the temperature this does not increase the humidity levels in the poly house as the evaporated droplets are almost immediately ventilated to open air hightech poly houses even have spaceheating systems as well as soilheating systems to purify the soil of unwanted viruses bacteria and other organisms the recent indoisrael collaboration at gharunda near karnal is an excellent example of polyhouse farming taking place in a developing country if developing countries were to develop a special incentive program solely for fruitandvegetable farmers especially in demographically large nations like india then the migration rate from rural to urban areas as well as the loss of horticultural and fruitvegetable farmers to urban areas may be reduced this brings a huge potential to improve the farming sector which is key to longterm economic stability the small polytunnels used by each farmer in each village promote the cultivation of vegetables both onseason and offseason and would actually help to moderate the market rate for fruit and vegetables in long run on a yearround basis and would help to satisfy local market needs for example in india the inability to grow tomatoes generates price spikes during the monsoon season this is seen as an ideal time to grow tomatoes in polytunnels since they provide the ideal climate for the crop in india the abhinav farmers club grows flowers and organic vegetables in polytunnels hoophouses have existed at least since the 1940s but they are much more commonly used with each passing decade and their design continues to evolve because of the wide variety of constantly changing designs in reality there is an entirely continuous spectrum from high tunnels through low tunnels to the simplest row covers although they are often thought about as discrete steps major themes 
of continuing development are 1 achieving the same results with lighter construction and less cost and 2 making hoophouses easily movable the advantages of mobile hoophouses include greater return on investment with the same unit of investment getting greater use per year across different crops in different months and more flexibility on crop rotation without ever having to bother to dig the soil out of a stationary house or use soil steam sterilization to cure greenhouse soil sickness a us department of agriculture program is helping farmers install polytunnels the program was announced at the us white house garden in december 2009farmers in iraq are building these in increasing number and adding drip irrigation to grow tomatoes'
|
+| 18 | - 'the first postage stamps those of the united kingdom had no name in 1874 the universal postal union exempted the united kingdom from its rule which stated that a countrys name had to appear on their postage stamps so a profile of the reigning monarch was all that was required for identification of the uks stamps to this day the uk remains the only country not required to name itself on its stamps for all other upu members the name must appear in latin letters many countries using nonlatin alphabets used only those on their early stamps and they remain difficult for most collectors to identify today the name chosen is typically the countrys own name for itself with a modern trend towards using simpler and shorter forms or abbreviations for instance the republic of south africa inscribes with rsa while jordan originally used the hashemite kingdom of jordan and now just jordan some countries have multiple allowed forms from which the designer may choose the most suitable the name may appear in an adjectival form as in posta romana romanian post for romania dependent territories may or may not include the name of the parent country the graphic element of a stamp design falls into one of four major categories portrait bust profile or fullface emblem coat of arms flag national symbol posthorn etc numeric a design built around the numeral of value pictorialthe use of portrait busts of the ruler or other significant person or emblems was typical of the first stamps by extension from currency which was the closest model available to the early stamp designers usage pattern has varied considerably for 60 years from 1840 to 1900 all british stamps used exactly the same portrait bust of victoria enclosed in a dizzying variety of frames while spain periodically updated the image of alfonso xiii as he grew from child to adult norway has issued stamps with the same posthorn motif for over a century changing only the details from time to time as printing technology 
improves while the us has placed the flag of the united states into a wide variety of settings since first using it on a stamp in the 1950s while numeral designs are eminently practical in that they emphasize the most important element of the stamp they are the exception rather than the rule by far the greatest variety of stamp design seen today is in pictorial issues the choice of image is nearly unlimited ranging from plants and animals to figures from history to landscapes to original artwork images may represent realworld objects or be allegories or abstract designs the choice of pictorial designs is governed by a combination of anniversaries required annual issues such as christmas stamps postal rate changes exhaustion of existing stamp stocks and popular demand since postal administrations are either a branch'
- '##ionism in both cases reflecting the influence of french impressionism which had spread internationally they are also known for their conceptual art as well as an internal split in the group which led to the formation of a new secession 1910 – 1914 key figures included walter leistikow franz skarbina max liebermann hermann struck and the norwegian painter edvard munch cologne 1909 – 1916 — also known as the sonderbund or the separate league of west german art lovers and artists the sonderbund westdeutscher kunstfreunde und kunstler was known for its landmark exhibitions introducing french impressionism postimpressionism and modernism to germany its 1912 show aimed to organize the most disputed paintings of our time and was later credited for helping develop a german version of expressionism while also presenting the most significant exhibition of european modernism prior to world war i the following year in fact it inspired a similar show in new york artists associated with the group included julius bretz max clarenbach august deusser walter ophey ernst osthaus egon schiele wilhelm schmurr alfred sohnrethel karli sohnrethel and otto sohnrethel along with collectors and curators of art dresden 1919 – 1925 — formed in reaction to the oppression of post world war i and the rise of the weimar republic otto schubert conrad felixmuller and otto dix are considered key figures in the dresden secession they are known for a highly accomplished form of german expressionism that was later labeled degenerate by the nazis selection was limited by availability academic art – style of painting and sculpture preraphaelite – group of english painters poets and critics founded in 1848pages displaying short descriptions of redirect targets salon des refuses art exhibition in paris first held in 1863 of works rejected by the academie des beauxarts simon hansulrich sezessionismus kunstgewerbe in literarischer und bildender kunst j b metzlersche verlagsbuchhandlung stuttgart 1976 
isbn 3476002896'
- 'then still known as the vienna method was the monumental collection of 100 statistical charts gesellschaft und wirtschaft 1930 the first rule of isotype is that greater quantities are not represented by an enlarged pictogram but by a greater number of the samesized pictogram in neurath ’ s view variation in size does not allow accurate comparison what is to be compared – heightlength or area whereas repeated pictograms which always represent a fixed value within a certain chart can be counted if necessary isotype pictograms almost never depicted things in perspective in order to preserve this clarity and there were other guidelines for graphic configuration and use of colour the best exposition of isotype technique remains otto neurath ’ s book international picture language 1936 visual education was always the prime motive behind isotype which was worked out in exhibitions and books designed to inform ordinary citizens including schoolchildren about their place in the world it was never intended to replace verbal language it was a helping language always accompanied by verbal elements otto neurath realized that it could never be a fully developed language so instead he called it a “ languagelike technique ” as more requests came to the vienna museum from abroad a partner institute called mundaneum a name adopted from an abortive collaboration with paul otlet was established in 19312 to promote international work it formed branches containing small exhibitions in berlin the hague london and new york city members of the vienna team travelled periodically to the soviet union during the early 1930s in order to help set up the allunion institute of pictorial statistics of soviet construction and economy всесоюзныи институт изобразительнои статистики советского строительства и хозяиства commonly abbreviated to izostat изостат which produced statistical graphics about the five year plans among other things after the closure of the gesellschafts und wirtschaftsmuseum 
in 1934 neurath reidemeister and arntz fled to the netherlands where they set up the international foundation for visual education in the hague during the 1930s significant commissions were received from the us including a series of massproduced charts for the national tuberculosis association and otto neurath ’ s book modern man in the making 1939 a high point of isotype on which he reidemeister and arntz worked in close'
|
+| 5 | - 'giant stars and white and red dwarf stars could support a timeintegrated biota up to 1046 kgyears in the galaxy and 1057 kgyears in the universesuch astroecology considerations quantify the immense potentials of future life in space with commensurate biodiversity and possibly intelligence chemical analysis of carbonaceous chondrite meteorites show that they contain extractable bioavailable water organic carbon and essential phosphate nitrate and potassium nutrients the results allow assessing the soil fertilities of the parent asteroids and planets and the amounts of biomass that they can sustainlaboratory experiments showed that material from the murchison meteorite when ground into a fine powder and combined with earths water and air can provide the nutrients to support a variety of organisms including bacteria nocardia asteroides algae and plant cultures such as potato and asparagus the microorganisms used organics in the carbonaceous meteorites as the carbon source algae and plant cultures grew well also on mars meteorites because of their high bioavailable phosphate contents the martian materials achieved soil fertility ratings comparable to productive agricultural soils this offers some data relating to terraforming of marsterrestrial analogues of planetary materials are also used in such experiments for comparison and to test the effects of space conditions on microorganismsthe biomass that can be constructed from resources can be calculated by comparing the concentration of elements in the resource materials and in biomass equation 1 a given mass of resource materials mresource can support mbiomass x of biomass containing element x considering x as the limiting nutrient where cresource x is the concentration mass per unit mass of element x in the resource material and cbiomass x is its concentration in the biomass m b i o m a s s x m r e s o u r c e x c r e s o u r c e x c b i o m a s s x displaystyle mbiomassxmresourcexfrac cresourcexcbiomassx 1 
assuming that 100000 kg biomass supports one human the asteroids may then sustain about 6e15 six million billion people equal to a million earths a million times the present population similar materials in the comets could support biomass and populations about one hundred times larger solar energy can sustain these populations for the predicted further five billion years of the sun these considerations yield a maximum timeintegrated biota of 3e30 kgyears in the solar system after the sun becomes a white dwarf star and other white dwarf stars can provide energy'
- 'astronomer and astrobiology pioneer gavriil adrianovich tikhov tikhov is considered to be the father of astrobotany research in the field has been conducted both with growing earth plants in space environments and searching for botanical life on other planets the first organisms in space were specially developed strains of seeds launched to 134 km 83 mi on 9 july 1946 on a us launched v2 rocket these samples were not recovered the first seeds launched into space and successfully recovered were maize seeds launched on 30 july 1946 which were soon followed by rye and cotton these early suborbital biological experiments were handled by harvard university and the naval research laboratory and were concerned with radiation exposure on living tissue in 1971 500 tree seeds loblolly pine sycamore sweetgum redwood and douglas fir were flown around the moon on apollo 14 these moon trees were planted and grown with controls back on earth where no changes were detected in 1982 the crew of the soviet salyut 7 space station conducted an experiment prepared by lithuanian scientists alfonsas merkys and others and grew some arabidopsis using fiton3 experimental microgreenhouse apparatus thus becoming the first plants to flower and produce seeds in space a skylab experiment studied the effects of gravity and light on rice plants the svet2 space greenhouse successfully achieved seed to seed plant growth in 1997 aboard space station mir bion 5 carried daucus carota and bion 7 carried maize aka corn plant research continued on the international space station biomass production system was used on the iss expedition 4 the vegetable production system veggie system was later used aboard iss plants tested in veggie before going into space included lettuce swiss chard radishes chinese cabbage and peas red romaine lettuce was grown in space on expedition 40 which were harvested when mature frozen and tested back on earth expedition 44 members became the first american astronauts to eat 
plants grown in space on 10 august 2015 when their crop of red romaine was harvested since 2003 russian cosmonauts have been eating half of their crop while the other half goes towards further research in 2012 a sunflower bloomed aboard the iss under the care of nasa astronaut donald pettit in january 2016 us astronauts announced that a zinnia had blossomed aboard the issin 2018 the veggie3 experiment was tested with plant pillows and root mats one of the goals is to grow food for crew consumption crops tested at this time include cabbage lettuce and mizuna plants that have been grown in space include arabidopsis thale cress bok choy tokyo bekana'
- 'the planet simulator also known as a planetary simulator is a climatecontrolled simulation chamber designed to study the origin of life the device was announced by researchers at mcmaster university on behalf of the origins institute on 4 october 2018 the simulator project began in 2012 and was funded with 1 million from the canada foundation for innovation the ontario government and mcmaster university it was built and manufactured by angstrom engineering inc of kitchener ontario the device was designed and developed by biophysicist maikel rheinstadter and coprincipal investigators biochemist yingfu li and astrophysicist ralph pudritz for researchers to study a theory that suggests life on early earth began in warm little ponds rather than in deep ocean vents nearly four billion years ago the device can recreate conditions of the primitive earth to see whether cellular life can be created and then later evolve in a 2018 news release maikel rheinstadter stated we want to understand how the first living cell was formed how the earth moved from a chemical world to a biological world the planet simulator can mimic the environmental conditions consistent on the early earth and other astronomical bodies including other planets and exoplanets by controlling temperature humidity pressure atmosphere and radiation levels within the simulation chamber according to researchers preliminary tests with the simulator under possible conditions of the early earth created protocells cells which are not living but very important nonetheless according to biologist david deamer the device is a game changer and the cells produced so far are significant the cells are not alive but are evolutionary steps toward a living system of molecules the simulator opens up a lot of experimental activities that were literally impossible before based on initial tests with the new simulator technology project director rheinstadter stated that it seems that the formation of life is probably a 
relatively frequent process in the universe'
|
+| 28 | - '$\sum_{j=1}^{n}f_{j}(g_{k})=0$ if $k\neq 1$ and $\sum_{j=1}^{n}a_{j1}=\sum_{j=1}^{n}f_{j}(e)=n$ let $A^{\ast}$ denote the conjugate transpose of $A$ then $AA^{\ast}=A^{\ast}A=nI$ this implies the desired orthogonality relationship for the characters ie $\sum_{k=1}^{n}f_{k}^{\ast}(g_{i})\,f_{k}(g_{j})=n\,\delta_{ij}$ where $\delta_{ij}$ is the kronecker delta and $f_{k}^{\ast}(g_{i})$ is the complex conjugate of $f_{k}(g_{i})$ pontryagin duality'
- '$d(x)=\sum_{i=1}^{\omega(x)}\nu_{p_{i}}(x)\left(\prod_{j=1}^{i-1}p_{j}^{\nu_{p_{j}}(x)}\right)p_{i}^{\nu_{p_{i}}(x)-1}\left(\prod_{j=i+1}^{\omega(x)}p_{j}^{\nu_{p_{j}}(x)}\right)=\sum_{i=1}^{\omega(x)}\frac{\nu_{p_{i}}(x)}{p_{i}}x=x\sum_{p\mid x,\,p\text{ prime}}\frac{\nu_{p}(x)}{p}$ where $\omega(x)$ a prime omega function is the number of distinct prime factors in $x$ and $\nu_{p}(x)$ is the $p$-adic valuation of $x$ for example $d(60)=d(2^{2}\cdot 3\cdot 5)=\left(\frac{2}{2}+\frac{1}{3}+\frac{1}{5}\right)\cdot 60=92$ or $d(81)=d(3^{4})=4\cdot 3^{3}\cdot d(3)=4\cdot 27\cdot 1=108$ the sequence of number derivatives for $k=0,1,2,\ldots$ begins sequence a003415 in the oeis $0,0,1,1,4,1,5,1,12,6,7,1,16,1,9,\ldots$ the logarithmic derivative $\operatorname{ld}(x)=\frac{d(x)}{x}=\sum_{p\mid x,\,p\text{ prime}}\frac{\nu_{p}(x)}{p}$ is a totally additive function $\operatorname{ld}(x\cdot y)=\operatorname{ld}(x)+\operatorname{ld}(y)$ the arithmetic partial derivative of $x$ with respect to $p$ is defined as $x_{p}'=\frac{\nu_{p}(x)}{p}x$ so the arithmetic derivative of $x$ is given as $d(x)=\sum_{p\mid x,\,p\text{ prime}}x_{p}'$ an arithmetic function $f$ is leibnizadditive if there is a totally multiplicative function $h_{f}$ such that $f(mn)=f(m)h_{f}(n)+f(n)h_{f}(m)$ for all positive integers $m$ and $n$ a motivation for this concept is'
- 'and every rcoloring of the integers greater than one there is a finite monochromatic subset s of these integers such that the conjecture was proven in 2003 by ernest s croot iii znams problem and primary pseudoperfect numbers are closely related to the existence of egyptian fractions of the form for instance the primary pseudoperfect number 1806 is the product of the prime numbers 2 3 7 and 43 and gives rise to the egyptian fraction $1=\frac{1}{2}+\frac{1}{3}+\frac{1}{7}+\frac{1}{43}+\frac{1}{1806}$ egyptian fractions are normally defined as requiring all denominators to be distinct but this requirement can be relaxed to allow repeated denominators however this relaxed form of egyptian fractions does not allow for any number to be represented using fewer fractions as any expansion with repeated fractions can be converted to an egyptian fraction of equal or smaller length by repeated application of the replacement if k is odd or simply by replacing $\frac{1}{k}+\frac{1}{k}$ by $\frac{2}{k}$ if k is even this result was first proven by takenouchi 1921 graham and jewett proved that it is similarly possible to convert expansions with repeated denominators to longer egyptian fractions via the replacement this method can lead to long expansions with large denominators such as botts 1967 had originally used this replacement technique to show that any rational number has egyptian fraction representations with arbitrarily large minimum denominators any fraction $\frac{x}{y}$ has an egyptian fraction representation in which the maximum denominator is bounded by and a representation with at most terms the number of terms must sometimes be at least proportional to $\log\log y$ for instance this is true for the fractions in the sequence $\frac{1}{2},\frac{2}{3},\frac{6}{7},\frac{42}{43},\frac{1806}{1807}$ whose denominators form sylvesters sequence it has been conjectured that $O(\log\log y)$ terms are always enough it is also possible to find representations in which both the maximum denominator and the number of terms are small graham 1964 characterized the numbers that can be represented by egyptian fractions in 
which all denominators are nth powers in particular a rational number q can be represented as an egyptian fraction with square denominators if and only if q lies in one of the two halfopen intervals martin 1999 showed that any rational number has very dense expansions using a constant fraction of the denominators up to n for any sufficiently large n engel expansion sometimes called an egyptian product is a form of egyptian fraction expansion in which each denominator is a multiple of the previous one in addition the sequence of multipliers ai is required to be nondecreasi'
|
+| 38 | - '##ken the global language system theorises that language groups are engaged in unequal competition on different levels globally using the notions of a periphery semiperiphery and a core which are concepts of the world system theory de swaan relates them to the four levels present in the hierarchy of the global language system peripheral central supercentral and hypercentral de swaan also argues that the greater the range of potential uses and users of a language the higher the tendency of an individual to move up the hierarchy in the global language system and learn a more central language thus de swaan views the learning of second languages as proceeding up rather than down the hierarchy in the sense that they learn a language that is on the next level up for instance speakers of catalan a peripheral language have to learn spanish a central language to function in their own society spain meanwhile speakers of persian a central language have to learn arabic a supercentral language to function in their region on the other hand speakers of a supercentral language have to learn the hypercentral language to function globally as is evident from the huge number of nonnative english speakers according to de swaan languages exist in constellations and the global language system comprises a sociological classification of languages based on their social role for their speakers the worlds languages and multilinguals are connected in a strongly ordered hierarchical pattern there are thousands of peripheral or minority languages in the world each of which is connected to one of a hundred central languages the connections and patterns between each language is what makes up the global language system the four levels of language are the peripheral central supercentral and hypercentral languages peripheral languages at the lowest level peripheral languages or minority languages form the majority of languages spoken in the world 98% of the worlds languages are peripheral 
languages and spoken by less than 10% of the worlds population unlike central languages these are languages of conversation and narration rather than reading and writing of memory and remembrance rather than record they are used by native speakers within a particular area and are in danger of becoming extinct with increasing globalisation which sees more and more speakers of peripheral languages acquiring more central languages in order to communicate with others central languages the next level constitutes about 100 central languages spoken by 95% of the worlds population and generally used in education media and administration typically they are the national and official languages of the ruling state these are the languages of record and much of what has been said and written in those languages is saved in newspaper reports minutes and proceedings stored in archives included in history books collections of the classics of folk talks and folk ways increasingly recorded on electronic media and'
- 'the common misconception that aave carries ungrammatical features or that any speaker who speaks aave is uneducated or sloppy however like all dialects aave shows consistent internal logic and grammatical complexity as explained in the following examples the use of done coupled with the past tense of the verb in a sentence as seen in they done used all the good ones is a persistent structural trait of aave that is shared with southern european american vernacular varieties of english although the verbal particle done also occurs in caribbean creoles its syntactic configuration and semanticpragmatic function in aave differ somewhat from its creole counterparts in aave done occurs only in preverbal auxiliary position with past tense forms whereas it occurs with a bare verb stem eg they done go and can occur in clausefinal position in some creoles in many aspects it functions in aave like a perfect tense referring to an action completed in the recent past but it can also be used to highlight the change of state or to intensify an activity as in the sentence i done told you not to mess up it is a stable feature but it is more frequently used in southern rural versions of aave than in urban aave double negation is also another feature commonly found in aave referring to the marking of negation on the auxiliary verb and indefinite pronoun an example would be she aint tellin nobody which would be she isnt telling anybody in standard english another feature copula absence or the absence of is or are in certain contexts can be observed as well he workin or they going home are some examples the habitual aspect marker or the invariant be habitual be as seen in he be workin they be tryin or i be like is a typical feature of aave it is the use of the base form of the copula verb be instead of the inflected forms such as are and am this is probably the most salient grammatical trait of aave both within the community and outside of it to the point of it being a stereotype 
prominently figured in representations of aave especially in the media the link between language and identity can be stretched into a tripartite where culture becomes key the addition of culture to the way language is linked to identity blurs the lines because culture can be considered an abstract concept particularly in america it is nearly impossible to pinpoint a common culture in a country filled with so many different cultures especially when many of them are several generations removed from their origins because of the racial makeup of the country it is not ideal to include all american citizens under a'
- 'patois pl same or is speech or language that is considered nonstandard although the term is not formally defined in linguistics as such patois can refer to pidgins creoles dialects or vernaculars but not commonly to jargon or slang which are vocabularybased forms of cant in colloquial usage of the term especially in france class distinctions are implied by the very meaning of the term since in french patois refers to any sociolect associated with uneducated rural classes in contrast with the dominant prestige language standard french spoken by the middle and high classes of cities or as used in literature and formal settings the acrolect the term patois comes from old french patois local or regional dialect originally meaning rough clumsy or uncultivated speech possibly from the verb patoier to treat roughly from patte paw from old low franconian patta paw sole of the foot plus the suffix ois in france and other francophone countries patois has been used to describe nonstandard french and regional languages such as picard occitan and francoprovencal since 1643 and catalan after 1700 when the king louis xiv banned its use the word assumes the view of such languages being backward countrified and unlettered thus patois being potentially considered offensive when used by outsiders jean jaures said one names patois the language of a defeated nation in france and switzerland however the term patois no longer holds any offensive connotation and has indeed become a celebrated and distinguished variant of the numerous local tonguesthe vernacular form of english spoken in jamaica is also referred to as patois or patwa it is noted especially in reference to jamaican patois from 1934 jamaican patois language comprises words of the native languages of the many ethnic and cultural groups within the caribbean including spanish portuguese chinese amerindian and english along with several african languages some islands have creole dialects influenced by their linguistic 
diversity french spanish arabic hebrew german dutch italian chinese vietnamese and others jamaican patois is also spoken in costa rica and french creole is spoken in caribbean countries such as trinidad and tobago and guyana in south america often these patois are popularly considered broken english or slang but cases such as jamaican patois are classified with more correctness as a creole language in fact in the francophone caribbean the analogous term for local basilectal languages is creole see also jamaican english and jamaican creole antillean creole spoken in several present or formerly french islands of the lesser antilles includes vocabulary and grammar of african and carib origin in addition to french its dialects often contain folketymological derivatives of french words for example la'
|
+| 40 | - '##2 is the invariant of rohlin 1991 clifford taubes for selfdual yangmills connections on nonselfdual 4manifolds journal of differential geometry 17 1982 no 1 139 – 170 gauge theory on asymptotically periodic 4manifolds j differential geom 25 1987 no 3 363 – 430 cassons invariant and gauge theory j differential geom 31 1990 no 2 547 – 599 1996 richard s hamilton for the formation of singularities in the ricci flow surveys in differential geometry vol ii cambridge ma 1993 7 – 136 int press cambridge ma 1995 fourmanifolds with positive isotropic curvature comm anal geom 5 1997 no 1 1 – 92 1996 gang tian for on calabis conjecture for complex surfaces with positive first chern class invent math 101 1990 no 1 101 – 172 compactness theorems for kahlereinstein manifolds of dimension 3 and up j differential geom 35 1992 no 3 535 – 558 a mathematical theory of quantum cohomology j differential geom 42 1995 no 2 259 – 367 with yongbin ruan kahlereinstein metrics with positive scalar curvature invent math 130 1997 no 1 1 – 37 2001 jeff cheeger for families index for manifolds with boundary superconnections and cones i families of manifolds with boundary and dirac operators j funct anal 89 1990 no 2 313 – 363 with jeanmichel bismut families index for manifolds with boundary superconnections and cones ii the chern character j funct anal 90 1990 no 2 306 – 354 with jeanmichel bismut lower bounds on ricci curvature and the almost rigidity of warped products ann of math 2 144 1996 no 1 189 – 237 with tobias colding on the structure of spaces with ricci curvature bounded below i j differential geom 46 1997 no 3 406 – 480 with tobias colding 2001 yakov eliashberg for combinatorial methods in symplectic geometry proceedings of the international congress of mathematicians vol 1 2 berkeley calif 1986 531 – 539 amer math soc providence ri 1987 classification of overtwisted contact structures on 3manifolds invent math 98 1989 no 3 623 – 637 2001 michael j hopkins for nilpotence and stable 
homotopy theory i ann of math 2 128 1988 no 2 207 – 241 with ethan devinatz and jeffrey smith the rigid analytic period mapping lubintate space and stable homotopy theory bull amer math'
- 'this case the two metric spaces are essentially identical they are called quasiisometric if there is a quasiisometry between them a normed vector space is a vector space equipped with a norm which is a function that measures the length of vectors the norm of a vector v is typically denoted by $\lVert v\rVert$ any normed vector space can be equipped with a metric in which the distance between two vectors x and y is given by the metric d is said to be induced by the norm $\lVert\cdot\rVert$ conversely if a metric d on a vector space x is translation invariant $d(x,y)=d(x+a,y+a)$ for every x y and a in x and absolutely homogeneous $d(\alpha x,\alpha y)=|\alpha|\,d(x,y)$ for every x and y in x and real number $\alpha$ then it is the metric induced by the norm a similar relationship holds between seminorms and pseudometrics among examples of metrics induced by a norm are the metrics d1 d2 and d∞ on $\mathbb{R}^{2}$ which are induced by the manhattan norm the euclidean norm and the maximum norm respectively more generally the kuratowski embedding allows one to see any metric space as a subspace of a normed vector space infinitedimensional normed vector spaces particularly spaces of functions are studied in functional analysis completeness is particularly important in this context a complete normed vector space is known as a banach space an unusual property of normed vector spaces is that linear transformations between them are continuous if and only if they are lipschitz such transformations are known as bounded operators a curve in a metric space m d is a continuous function $\gamma\colon[0,T]\to M$ the length of γ is measured by in general this supremum may be infinite a curve of finite length is called rectifiable suppose that the length of the curve γ is equal to the distance between its endpoints — that is it is the shortest possible path between its endpoints after reparametrization by 
arc length γ becomes a geodesic a curve which is a distancepreserving function a geodesic is a shortest possible path between any two of its points a geodesic metric space is a metric space which admits a geodesic between any two of its points the spaces $(\mathbb{R}^{2},d_{1})$ and $(\mathbb{R}^{2},d_{2})$ are both geo'
- 'symmetryprotected topological spt order is a kind of order in zerotemperature quantummechanical states of matter that have a symmetry and a finite energy gap to derive the results in a mostinvariant way renormalization group methods are used leading to equivalence classes corresponding to certain fixed points the spt order has the following defining properties a distinct spt states with a given symmetry cannot be smoothly deformed into each other without a phase transition if the deformation preserves the symmetry b however they all can be smoothly deformed into the same trivial product state without a phase transition if the symmetry is broken during the deformation the above definition works for both bosonic systems and fermionic systems which leads to the notions of bosonic spt order and fermionic spt order using the notion of quantum entanglement we can say that spt states are shortrange entangled states with a symmetry by contrast for longrange entanglement see topological order which is not related to the famous epr paradox since shortrange entangled states have only trivial topological orders we may also refer the spt order as symmetry protected trivial order the boundary effective theory of a nontrivial spt state always has pure gauge anomaly or mixed gaugegravity anomaly for the symmetry group as a result the boundary of a spt state is either gapless or degenerate regardless of how we cut the sample to form the boundary a gapped nondegenerate boundary is impossible for a nontrivial spt state if the boundary is a gapped degenerate state the degeneracy may be caused by spontaneous symmetry breaking andor intrinsic topological order monodromy defects in nontrivial 2+1d spt states carry nontrivial statistics and fractional quantum numbers of the symmetry group monodromy defects are created by twisting the boundary condition along a cut by a symmetry transformation the ends of such cut are the monodromy defects for example 2+1d bosonic zn spt states are classified 
by a zn integer m one can show that n identical elementary monodromy defects in a zn spt state labeled by m will carry a total zn quantum number 2m which is not a multiple of n 2+1d bosonic u1 spt states have a hall conductance that is quantized as an even integer 2+1d bosonic so3 spt states have a quantized spin hall conductance spt states are shortrange entangled while topologically ordered states are longrange entangled both intrinsic topological order and also sp'
|
+| 4 | - 'hormone auxin which activates meristem growth alongside other mechanisms to control the relative angle of buds around the stem from a biological perspective arranging leaves as far apart as possible in any given space is favoured by natural selection as it maximises access to resources especially sunlight for photosynthesis in mathematics a dynamical system is chaotic if it is highly sensitive to initial conditions the socalled butterfly effect which requires the mathematical properties of topological mixing and dense periodic orbits alongside fractals chaos theory ranks as an essentially universal influence on patterns in nature there is a relationship between chaos and fractals — the strange attractors in chaotic systems have a fractal dimension some cellular automata simple sets of mathematical rules that generate patterns have chaotic behaviour notably stephen wolframs rule 30 vortex streets are zigzagging patterns of whirling vortices created by the unsteady separation of flow of a fluid most often air or water over obstructing objects smooth laminar flow starts to break up when the size of the obstruction or the velocity of the flow become large enough compared to the viscosity of the fluid meanders are sinuous bends in rivers or other channels which form as a fluid most often water flows around bends as soon as the path is slightly curved the size and curvature of each loop increases as helical flow drags material like sand and gravel across the river to the inside of the bend the outside of the loop is left clean and unprotected so erosion accelerates further increasing the meandering in a powerful positive feedback loop waves are disturbances that carry energy as they move mechanical waves propagate through a medium – air or water making it oscillate as they pass by wind waves are sea surface waves that create the characteristic chaotic pattern of any large body of water though their statistical behaviour can be predicted with wind wave models as 
waves in water or wind pass over sand they create patterns of ripples when winds blow over large bodies of sand they create dunes sometimes in extensive dune fields as in the taklamakan desert dunes may form a range of patterns including crescents very long straight lines stars domes parabolas and longitudinal or seif sword shapesbarchans or crescent dunes are produced by wind acting on desert sand the two horns of the crescent and the slip face point downwind sand blows over the upwind face which stands at about 15 degrees from the horizontal and falls onto the slip face where it accumulates up to the angle of repose of the sand which is about 35 degrees when the slip face'
- 'singleparticle trajectories spts consist of a collection of successive discrete points causal in time these trajectories are acquired from images in experimental data in the context of cell biology the trajectories are obtained by the transient activation by a laser of small dyes attached to a moving molecule molecules can now be visualized based on recent superresolution microscopy which allow routine collections of thousands of short and long trajectories these trajectories explore part of a cell either on the membrane or in 3 dimensions and their paths are critically influenced by the local crowded organization and molecular interaction inside the cell as emphasized in various cell types such as neuronal cells astrocytes immune cells and many others spt allowed observing moving particles these trajectories are used to investigate cytoplasm or membrane organization but also the cell nucleus dynamics remodeler dynamics or mrna production due to the constant improvement of the instrumentation the spatial resolution is continuously decreasing reaching now values of approximately 20 nm while the acquisition time step is usually in the range of 10 to 50 ms to capture short events occurring in live tissues a variant of superresolution microscopy called sptpalm is used to detect the local and dynamically changing organization of molecules in cells or events of dna binding by transcription factors in mammalian nucleus superresolution image acquisition and particle tracking are crucial to guarantee high quality data once points are acquired the next step is to reconstruct a trajectory this step is done using known tracking algorithms to connect the acquired points tracking algorithms are based on a physical model of trajectories perturbed by an additive random noise the redundancy of many short spts is a key feature to extract biophysical information parameters from empirical data at a molecular level in contrast long isolated trajectories have been used to extract 
information along trajectories destroying the natural spatial heterogeneity associated to the various positions the main statistical tool is to compute the meansquare displacement msd or second order statistical moment $\langle (x(t+\Delta t)-x(t))^{2}\rangle\sim t^{\alpha}$ average over realizations where $\alpha$ is called the anomalous exponent for a brownian motion $\langle (x(t+\Delta t)-x(t))^{2}\rangle=2nD\,\Delta t$ where $D$ is the diffusion coefficient and $n$ is the dimension of the space some other properties can also be recovered from long trajectories such as the'
- 'each $n$ the new function is defined at the points $a,a+h,a+2h,\ldots,a+nh,\ldots$ the fundamental theorem of calculus states that differentiation and integration are inverse operations more precisely it relates the difference quotients to the riemann sums it can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration the fundamental theorem of calculus if a function $f$ is defined on a partition of the interval $[a,b]$ $b=a+nh$ and if $F$ is a function whose difference quotient is $f$ then we have $\sum_{i=0}^{n-1}f(a+ih+h/2)\,\Delta x=F(b)-F(a)$ furthermore for every $m=0,1,2,\ldots,n-1$ we have $\frac{\Delta}{\Delta x}\sum_{i=0}^{m}f(a+ih+h/2)\,\Delta x=f(a+mh+h/2)$ this is also a prototype solution of a difference equation difference equations relate an unknown function to its difference or difference quotient and are ubiquitous in the sciences the early history of discrete calculus is the history of calculus such basic ideas as the difference quotients and the riemann sums appear implicitly or explicitly in definitions and proofs after the limit is taken however they are never to be seen again however the kirchhoffs voltage law 1847 can be expressed in terms of the onedimensional discrete exterior derivative during the 20th century discrete calculus remains interlinked with infinitesimal calculus especially differential forms but also starts to draw from algebraic topology as both develop the main contributions come from the following individuals henri poincare triangulations barycentric subdivision dual triangulation poincare lemma the first proof of the general stokes theorem and a lot more l e j brouwer simplicial approximation theorem elie cartan georges de rham the notion of differential form the exterior derivative as a 
coordinateindependent linear operator exactnessclosedness of forms emmy noether heinz hopf leopold vietoris walther mayer modules of chains the boundary operator chain complexes j w alexander solomon lefschetz lev pontryagin andrey kolmogorov norman steenrod eduard cech the early cochain notions hermann weyl the kirchhoff laws'
|
+| 6 | - '##ativistic degenerate matter a polytrope with index $n=3$ is a good model for the cores of white dwarfs of higher masses according to the equation of state of relativistic degenerate matter a polytrope with index $n=3$ is usually also used to model mainsequence stars like the sun at least in the radiation zone corresponding to the eddington standard model of stellar structure a polytrope with index $n=5$ has an infinite radius it corresponds to the simplest plausible model of a selfconsistent stellar system first studied by arthur schuster in 1883 and it has an exact solution a polytrope with index $n=\infty$ corresponds to what is called an isothermal sphere that is an isothermal selfgravitating sphere of gas whose structure is identical to the structure of a collisionless system of stars like a globular cluster this is because for an ideal gas the temperature is proportional to $\rho^{1/n}$ so infinite $n$ corresponds to a constant temperature in general as the polytropic index increases the density distribution is more heavily weighted toward the center $r=0$ of the body polytropic process equation of state murnaghan equation of state'
- 'together the analysis was expanded upon by alar toomre in 1964 and presented in a more general and comprehensive framework'
- 'the bidirectional reflectance distribution function brdf symbol $f_{r}(\omega_{i},\omega_{r})$ is a function of four real variables that defines how light is reflected at an opaque surface it is employed in the optics of realworld light in computer graphics algorithms and in computer vision algorithms the function takes an incoming light direction $\omega_{i}$ and outgoing direction $\omega_{r}$ taken in a coordinate system where the surface normal $\mathbf{n}$ lies along the zaxis and returns the ratio of reflected radiance exiting along $\omega_{r}$ to the irradiance incident on the surface from direction $\omega_{i}$ each direction $\omega$ is itself parameterized by azimuth angle $\phi$ and zenith angle $\theta$ therefore the brdf as a whole is a function of 4 variables the brdf has units sr−1 with steradians sr being a unit of solid angle the brdf was first defined by fred nicodemus around 1965 the definition is $f_{r}(\omega_{i},\omega_{r})=\frac{\mathrm{d}L_{r}(\omega_{r})}{\mathrm{d}E_{i}(\omega_{i})}$ where $L$ is radiance or power per unit solidangleinthedirectionofaray per unit projectedareaperpendiculartotheray $E$ is irradiance or power per unit surface area and $\theta_{i}$ is the angle between $\omega_{i}$ and the surface normal $\mathbf{n}$ the index $i$ indicates incident light whereas the index $r$ indicates reflected light the reason the function is defined as a quotient of two differentials and not directly as a quotient between the undifferentiated quantities is because irradiating light other than $\mathrm{d}E_{i}(\omega_{i})$ which are of no interest for $f_{r}(\omega_{i},\omega_{r})$ might illuminate the surface which would unintentionally affect $L_{r}(\omega_{r})$ whereas $\mathrm{d}L_{r}(\omega_{r})$ is only 
affected by $\mathrm{d}E_{i}(\omega_{i})$ the spatially varying bidirectional reflectance distribution function svbrdf is a 6dimensional function $f_{r}(\omega_{i},\omega_{r},\mathbf{x})$ where $\mathbf{x}$ describes a 2d'
+| 35 | - 'microbiologically induced calcium carbonate precipitation micp is a biogeochemical process that induces calcium carbonate precipitation within the soil matrix biomineralization in the form of calcium carbonate precipitation can be traced back to the precambrian period calcium carbonate can be precipitated in three polymorphic forms which in the order of their usual stabilities are calcite aragonite and vaterite the main groups of microorganisms that can induce the carbonate precipitation are photosynthetic microorganisms such as cyanobacteria and microalgae sulfatereducing bacteria and some species of microorganisms involved in nitrogen cycle several mechanisms have been identified by which bacteria can induce the calcium carbonate precipitation including urea hydrolysis denitrification sulfate production and iron reduction two different pathways or autotrophic and heterotrophic pathways through which calcium carbonate is produced have been identified there are three autotrophic pathways which all result in depletion of carbon dioxide and favouring calcium carbonate precipitation in heterotrophic pathway two metabolic cycles can be involved the nitrogen cycle and the sulfur cycle several applications of this process have been proposed such as remediation of cracks and corrosion prevention in concrete biogrout sequestration of radionuclides and heavy metals all three principal kinds of bacteria that are involved in autotrophic production of carbonate obtain carbon from gaseous or dissolved carbon dioxide these pathways include nonmethylotrophic methanogenesis anoxygenic photosynthesis and oxygenic photosynthesis nonmethylotrophic methanogenesis is carried out by methanogenic archaebacteria which use co2 and h2 in anaerobiosis to give ch4 two separate and often concurrent heterotrophic pathways that lead to calcium carbonate precipitation may occur including active and passive carbonatogenesis during active carbonatogenesis the carbonate particles are 
produced by ionic exchanges through the cell membrane by activation of calcium andor magnesium ionic pumps or channels probably coupled with carbonate ion production during passive carbonatogenesis two metabolic cycles can be involved the nitrogen cycle and the sulfur cycle three different pathways can be involved in the nitrogen cycle ammonification of amino acids dissimilatory reduction of nitrate and degradation of urea or uric acid in the sulfur cycle bacteria follow the dissimilatory reduction of sulfate ureolysis or degradation of urea the microbial urease catalyzes the hydrolysis of urea into ammonium and carbonate one mole of urea is hydrolyzed intracellular'
- 'brown earth is a type of soil brown earths are mostly located between 35° and 55° north of the equator the largest expanses cover western and central europe large areas of western and transuralian russia the east coast of america and eastern asia here areas of brown earth soil types are found particularly in japan korea china eastern australia and new zealand brown earths cover 45 of the land in england and wales they are common in lowland areas below 1000 feet on permeable parent material the most common vegetation types are deciduous woodland and grassland due to the reasonable natural fertility of brown earths large tracts of deciduous woodland have been cut down and the land is now used for farming they are normally located in regions with a humid temperate climate rainfall totals are moderate usually below 76 cm per year and temperatures range from 4 °c in the winter to 18 °c in the summer they are welldrained fertile soils with a ph of between 50 and 65 soils generally have three horizons the a b and c horizon horizon a is usually a brownish colour and over 20 cm in depth it is composed of mull humus well decomposed alkaline organic matter and mineral matter it is biologically active with many soil organisms and plant roots mixing the mull humus with mineral particles as a result the boundary between the a and b horizons can be illdefined in unploughed examples horizon b is mostly composed of mineral matter which has been weathered from the parent material but it often contains inclusions of more organic material carried in by organisms especially earthworms it is lighter in colour than the a horizon and is often weakly illuviated enriched with material from overlying horizons due to limited leaching only the more soluble bases are moved down through the profile horizon c is made up of the parent material which is generally permeable and non or slightly acidic for example clay loam brown earths are important because they are permeable and usually easy to 
work throughout the year so they are valued for agriculture they also support a much wider range of forest trees than can be found on wetter land they are freely drained soils with welldeveloped a and b horizons they often develop over relatively permeable bedrock of some kind but are also found over unconsolidated parent materials like river gravels some soil classifications include welldrained alluvial soils in the brown earths too typically the brown earths have dark brown topsoils with loamy particle sizeclasses and good structure – especially under grassland the b horizon lacks the grey colours and mottles characteristic of gley'
- 'and it is about twice the carbon content of the atmosphere or around four times larger than the human emissions of carbon between the start of the industrial revolution and 2011 further most of this carbon 1035 billion tons is stored in what is defined as the nearsurface permafrost no deeper than 3 metres 98 ft below the surface however only a fraction of this stored carbon is expected to enter the atmosphere in general the volume of permafrost in the upper 3 m of ground is expected to decrease by about 25 per 1 °c 18 °f of global warming 1283 yet even under the rcp85 scenario associated with over 4 °c 72 °f of global warming by the end of the 21st century about 5 to 15 of permafrost carbon is expected to be lost over decades and centuriesthe exact amount of carbon that will be released due to warming in a given permafrost area depends on depth of thaw carbon content within the thawed soil physical changes to the environment and microbial and vegetation activity in the soil notably estimates of carbon release alone do not fully represent the impact of permafrost thaw on climate change this is because carbon can be released through either aerobic or anaerobic respiration which results in carbon dioxide co2 or methane ch4 emissions respectively while methane lasts less than 12 years in the atmosphere its global warming potential is around 80 times larger than that of co2 over a 20year period and about 28 times larger over a 100year period while only a small fraction of permafrost carbon will enter the atmosphere as methane those emissions will cause 4070 of the total warming caused by permafrost thaw during the 21st century much of the uncertainty about the eventual extent of permafrost methane emissions is caused by the difficulty of accounting for the recently discovered abrupt thaw processes which often increase the fraction of methane emitted over carbon dioxide in comparison to the usual gradual thaw processes another factor which complicates projections of 
permafrost carbon emissions is the ongoing greening of the arctic as climate change warms the air and the soil the region becomes more hospitable to plants including larger shrubs and trees which could not survive there before thus the arctic is losing more and more of its tundra biomes yet it gains more plants which proceed to absorb more carbon some of the emissions caused by permafrost thaw will be offset by this increased plant growth but the exact proportion is uncertain it is considered very unlikely that this greening could offset all of the emissions from permafrost thaw during the'
|
+| 8 | - 'the enhanced avionics system, or easy, is an integrated modular avionics suite and cockpit display system used on dassault falcon business jets since the falcon 900ex and later used in other newer falcon aircraft such as the falcon 2000ex and falcon 7x. easy has been jointly developed by dassault and honeywell and is based on honeywell primus epic. dassault aviation started to develop the easy flight deck concept in the mid-1990s with a goal to have a much better integration of aircraft systems such as the fms. easy was first integrated and certificated on the falcon 900ex; the first easy-equipped 900ex was delivered in december 2003. honeywell primus epic, the base of easy, was then integrated on other business jets and helicopters. easy was certified on the falcon 2000ex in june 2004, with deliveries starting shortly after. the falcon 7x was developed from the ground up with easy avionics. in october 2008 dassault announced the launch of the easy phase ii program at the annual nbaa meeting in orlando. easy phase ii includes several enhancements to easy such as a synthetic vision system, ads-b out, paperless charts, future air navigation system (fans 1/a) using controller pilot data link communications (cpdlc), and localizer performance with vertical guidance (lpv). easy phase ii was certified on the falcon 900lx in june 2011 and on the falcon 7x in may 2013. the easy architecture is based on integrated modular avionics; the processing modules are called mau (modular avionics units). the core operating system of easy is provided by ddc-i. integrated modular avionics (ima) cockpit display system dassault falcon 7x dassault aviation'
- 'briefly before being replaced by sonne and bernard erika transmitted a vhf signal on 3033 mhz which could be received by standard ebl 3 receivers the signal was adjusted in phase between a ref point and a navigation point after processing the fug 121 displayed an angle from the beacon by using two beacons it was possible to achieve a fix however this was a problem as four receivers were required two listening to each station on smaller aircraft there was not enough space and german industry was by now having trouble supplying enough radios to the air force without adding 4 more receivers per plane the system was not deployed some sources indicate that there may have been a version called electra that operated at 250 to 300 khz but details are lacking or contradictorysonne this system transmitted on 270 – 480 khz and could be received on a fug 10 no special receiver was required as the pattern was discernable with the ear all that was required was the special charts at least 6 stations were built providing coverage from the bay of biscay to norway accuracy was reasonable during the day but errors up to 4 degrees occurred at night the allies captured the maps with resulted in the being issued to allied units because of this the allies left the sonne system alone after the war the stations were rebuilt and operated into the 1970s the system was called consol by that time mond development work was done on sonne sun to remove the night time errors this system was called mond moon work was never completed truhe this system was based on the british gee system after british units were captured the germans set up a project to clone the units the first unit was the fug 122 which allowed the reception of british gee signals units in france received these units and were able to navigate using british signals the germans then developed the concept to produce fug 123 receivers which would allow a wider turning range this allowed the germans to setup gee chains of their own 
further inside germany where the british gee signals were unusable there seems to have been some idea of using frequencies very close to the british frequencies to make jamming by the allies hard to do without jamming their own gee system one chain became operational around berlin fubl 1 used the lorenz landing beam system consisted of the ebl 1 and ebl 2 receivers with display device anf 2 the ebl 1 operated between 30 and 33 mhz and received the azimuth signals from a transmitter at the far end of the runway the ebl 2 operated at 38 mhz and received the two marker beacons as the aircraft approached the threshold to land the afn 2 provided the pilot with'
- 'a ground proximity warning system gpws is a system designed to alert pilots if their aircraft is in immediate danger of flying into the ground or an obstacle the united states federal aviation administration faa defines gpws as a type of terrain awareness and warning system taws more advanced systems introduced in 1996 are known as enhanced ground proximity warning systems egpws a modern type of taws in the late 1960s a series of controlled flight into terrain cfit accidents took the lives of hundreds of people a cfit accident is one where a properly functioning airplane under the control of a fully qualified and certified crew is flown into terrain water or obstacles with no apparent awareness on the part of the crewbeginning in the early 1970s a number of studies examined the occurrence of cfit accidents findings from these studies indicated that many such accidents could have been avoided if a warning device called a ground proximity warning system gpws had been used as a result of these studies and recommendations from the us national transportation safety board ntsb in 1974 the faa required all large turbine and turbojet airplanes to install tsoapproved gpws equipmentthe un international civil aviation organization icao recommended the installation of gpws in 1979c donald bateman a canadianborn engineer developed and is credited with the invention of gpwsin march 2000 the us faa amended operating rules to require that all us registered turbinepowered airplanes with six or more passenger seats exclusive of pilot and copilot seating be equipped with an faaapproved taws the mandate affects aircraft manufactured after march 29 2002 prior to the development of gpws large passenger aircraft were involved in 35 fatal cfit accidents per year falling to 2 per year in the mid1970s a 2006 report stated that from 1974 when the us faa made it a requirement for large aircraft to carry such equipment until the time of the report there had not been a single passenger 
fatality in a cfit crash by a large jet in us airspaceafter 1974 there were still some cfit accidents that gpws was unable to help prevent due to the blind spot of those early gpws systems more advanced systems were developed older taws or deactivation of the egpws or ignoring its warnings when an airport is not in its database still leave aircraft vulnerable to possible cfit incidents in april 2010 a polish air force tupolev tu154m aircraft crashed near smolensk russia in a possible cfit accident killing all passengers and crew including the president of poland lech kaczynski the aircraft was equipped with taws made by universal avionics systems of tucson according to the russian interstate aviation committee'
|
+| 12 | - 'of s(m) for some integers m whose base-k representations are close to that of n. constant-recursive sequences can be thought of as 1-regular sequences, where the base-1 representation of n consists of n copies of the digit 1'
- 'the small triangles whose vertices all have different numbers are shaded in the graph. each small triangle becomes a node in the new graph derived from the triangulation. the small letters identify the areas: eight inside the figure, and area i designates the space outside of it. as described previously, those nodes that share an edge whose endpoints are numbered 1 and 2 are joined in the derived graph. for example, node d shares an edge with the outer area i, and its vertices all have different numbers, so it is also shaded. node b is not shaded because two vertices have the same number, but it is joined to the outer area. one could add a new fully-numbered triangle, say by inserting a node numbered 3 into the edge between 1 and 1 of node a and joining that node to the other vertex of a; doing so would have to create a pair of new nodes, like the situation with nodes f and g. suppose there is a d-dimensional simplex of side-length n and it is triangulated into subsimplices of side-length 1. there is a function that, given any vertex of the triangulation, returns its color; the coloring is guaranteed to satisfy sperners boundary condition. how many times do we have to call the function in order to find a rainbow simplex? obviously we can go over all the triangulation vertices, whose number is O(n^d), which is polynomial in n when the dimension is fixed, but can it be done in time O(polylog n)? this problem was first studied by christos papadimitriou. he introduced a complexity class called ppad, which contains this as well as related problems such as finding a brouwer fixed point. he proved that finding a sperner simplex is ppad-complete even for d = 3; some 15 years later chen and deng proved ppad-completeness even for d = 2. it is believed that ppad-hard problems cannot be solved in time O(polylog n). suppose that each vertex of the triangulation may be labeled with multiple colors, so that the coloring function is f : S → 2^[n+1]. for every subsimplex, the set of labelings on its vertices is a set-family over the set of colors [n+1]
this setfamily can be seen as a hypergraph if for every vertex v on a face of the simplex the colors in fv are a subset of the set of colors on the face endpoints then there exists a subsimplex with a balanced labeling – a labeling in which the corresponding hypergraph admits a perfect fractional matching to illustrate here are some balanced labeling examples for n 2'
- 'labeling is also odd, ℓ(−v) = −ℓ(v). hence by tuckers lemma there are two adjacent vertices u, v with opposite labels. assume wlog that the labels are ℓ(u) = 1, ℓ(v) = −1. by the definition of ℓ, this means that in both g(u) and g(v) coordinate 1 is the largest coordinate; in g(u) this coordinate is positive while in g(v) it is negative. by the construction of the triangulation, the distance between g(u) and g(v) is at most ε, so in particular |g(u)_1 − g(v)_1| = |g(u)_1| + |g(v)_1| ≤ ε (since g(u)_1 and g(v)_1 have opposite signs), and so |g(u)_1| ≤ ε. but since the largest coordinate of g(u) is coordinate 1, this means that |g(u)_k| ≤ ε for each 1 ≤ k ≤ n, so ‖g(u)‖ ≤ c_n ε, where c_n is some constant depending on n and the norm ‖⋅‖ which you have chosen. the above is true for every ε > 0; since s^n is compact, there must hence be a point u in which g(u) = 0. no subset of ℝ^n is homeomorphic to s^n. the ham sandwich theorem: for any compact sets a_1, …, a_n in ℝ^n we can always find a hyperplane dividing each of them into two subsets of equal measure. above we showed how to prove the borsuk–ulam theorem from tuckers lemma; the converse is also true: it is possible to prove tuckers lemma from the borsuk–ulam theorem, therefore these two theorems are equivalent. there are several fixed-point theorems which come in three equivalent variants: an algebraic topology variant, a combinatorial variant, and a set-covering variant. each variant can be proved separately using totally different arguments, but each variant can also be reduced to the other variants in its row; additionally, each result in the top row can be deduced from the one below it in the same column. in the original theorem the domain'
|
+| 33 | - 'xenoglossy also written xenoglossia and sometimes also known as xenolalia is the supposedly paranormal phenomenon in which a person is allegedly able to speak write or understand a foreign language that they could not have acquired by natural means the term derives from the ancient greek xenos ξενος foreigner and glossa γλωσσα tongue or language the term xenoglossy was first used by french parapsychologist charles richet in 1905 claims of xenoglossy are found in the new testament and contemporary claims have been made by parapsychologists and reincarnation researchers such as ian stevenson doubts have been expressed that xenoglossy is an actual phenomenon and there is no scientifically admissible evidence supporting any of the alleged instances of xenoglossytwo types of xenoglossy are distinguished recitative xenoglossy is the use of an unacquired language incomprehensibly while responsive xenoglossy refers to the ability to intelligibly employ the unlearned language as if already acquired this phenomenon is mentioned in acts of the apostles chapter 2 at pentecost when the first disciples of jesus christ gathered together numbering one hundred and twenty and of the tongues of fire landed on each of them formalizing the coming of the spirit in an episode of inspired communication that allows the disciples to express themselves in languages other than galilean and to be understood by strangers several accounts of miraculous abilities of some people to read write speak or understand a foreign language as mentioned in the bible have been related in similar christian accounts in the middle ages similar claims were also made by some pentecostal theologians in 1901 claims of mediums speaking foreign languages were made by spiritualists in the 19th century more recent claims of xenoglossy have come from reincarnation researchers who have alleged that individuals were able to recall a language spoken in a past life some reports of xenoglossy have surfaced in the 
popular press such as czech speedway rider matej kus who in september 2007 supposedly awoke after a crash and was able to converse in perfect english however press reports of his fluency in english were based entirely on anecdotal stories told by his czech teammates xenoglossy has been claimed to have occurred during exorcisms canadian parapsychologist and psychiatrist at the university of virginia ian stevenson claimed there were a handful of cases that suggested evidence of xenoglossy these included two where a subject under hypnosis could'
- 'have lost but if asked directly in the context of a psychic reading whether they have such an item the client may be shocked and assume that the reader learned the information directly from the deceased loved one robert todd carroll notes in the skeptics dictionary that some would consider this to be cold reading the rainbow ruse is a crafted statement which simultaneously awards the subject a specific personality trait as well as the opposite of that trait with such a phrase a cold reader can cover all possibilities and appear to have made an accurate deduction in the mind of the subject despite the fact that a rainbow ruse statement is vague and contradictory this technique is used since personality traits are not quantifiable and also because nearly everybody has experienced both sides of a particular emotion at some time in their lives statements of this type include most of the time you are positive and cheerful but there has been a time in the past when you were very upset you are a very kind and considerate person but when somebody does something to break your trust you feel deepseated anger i would say that you are mostly shy and quiet but when the mood strikes you you can easily become the center of attentiona cold reader can choose from a variety of personality traits think of its opposite and then bind the two together in a phrase vaguely linked by factors such as mood time or potential the mentalist branch of the stagemagician community approves of reading as long as it is presented strictly as an artistic entertainment and one is not pretending to be psychicsome performers who use cold reading are honest about their use of the technique lynne kelly kari coleman ian rowland and derren brown have used these techniques at either private fortunetelling sessions or open forum talking with the dead sessions in the manner of those who claim to be genuine mediums only after receiving acclaim and applause from their audience do they reveal that they needed 
no psychic power for the performance only a sound knowledge of psychology and cold reading in an episode of his trick of the mind series broadcast in march 2006 derren brown showed how easily people can be influenced through cold reading techniques by repeating bertram forers famous demonstration of the personal validation fallacy or forer effect in a detailed review of four sittings conducted by medium tyler henry edward and susan gerbic reviewed all statements made by him on the tv show hollywood medium in their opinion not one statement made by henry was accurate yet each sitter felt that their reading was highly successful in interviews with each sitter after their sitting all four claimed specific statements made by henry but after reviewing the show it was shown that he had not made those statements each sit'
- 'al concluding that the ganzfeld studies have not been independently replicated and had thus failed to produce evidence for psi according to hyman reliance on metaanalysis as the sole basis for justifying the claim that an anomaly exists and that the evidence for it is consistent and replicable is fallacious it distorts what scientists mean by confirmatory evidence storm et al published a response to hyman claiming the ganzfeld experimental design has proved to be consistent and reliable but parapsychology is a struggling discipline that has not received much attention so further research on the subject is necessary rouder et al in 2013 wrote that critical evaluation of storm et als metaanalysis reveals no evidence for psi no plausible mechanism and omitted replication failuresa 2016 paper examined questionable research practices in the ganzfeld experiments and simulated how such practices could cause erroneous positive results there are several common criticisms of some or all of the ganzfeld experiments isolation – richard wiseman and others argue that not all of the studies used soundproof rooms so it is possible that when videos were playing the experimenter could have heard it and later given involuntary cues to the receiver during the selection process it could even have been possible that the receiver themselves could hear the video randomization – when subjects are asked to choose from a variety of selections there is an inherent bias to choose the first selection they are shown if the order in which they are shown the selections is randomized each time this bias will be averaged out the randomization procedures used in the experiment have been criticized for not randomizing satisfactorily the psi assumption – the assumption that any statistical deviation from chance is evidence for telepathy is highly controversial strictly speaking a deviation from chance is only evidence that either this was a rare statistically unlikely occurrence that happened by 
chance or something was causing a deviation from chance flaws in the experimental design are a common cause of this and so the assumption that it must be telepathy is fallaciouswriting in 1985 c e m hansel discovered weaknesses in the design and possibilities of sensory leakage in the ganzfeld experiments reported by carl sargent and other parapsychologists hansel concluded the ganzfeld studies had not been independently replicated and that esp is no nearer to being established than it was a hundred years agodavid marks in his book the psychology of the psychic 2000 has noted that during the autoganzfeld experiments the experimenter sat only fourteen feet from the senders room soundproofing tiles were eventually added but they were designed to absorb sound not to prevent transmission according to marks this was inadequate'
|
+| 22 | - 'water resources are natural resources of water that are potentially useful for humans for example as a source of drinking water supply or irrigation water 97 of the water on earth is salt water and only three percent is fresh water slightly over twothirds of this is frozen in glaciers and polar ice caps the remaining unfrozen freshwater is found mainly as groundwater with only a small fraction present above ground or in the air natural sources of fresh water include surface water under river flow groundwater and frozen water artificial sources of fresh water can include treated wastewater wastewater reuse and desalinated seawater human uses of water resources include agricultural industrial household recreational and environmental activities water resources are under threat from water scarcity water pollution water conflict and climate change fresh water is a renewable resource yet the worlds supply of groundwater is steadily decreasing with depletion occurring most prominently in asia south america and north america although it is still unclear how much natural renewal balances this usage and whether ecosystems are threatened natural sources of fresh water include surface water under river flow groundwater and frozen water surface water is water in a river lake or fresh water wetland surface water is naturally replenished by precipitation and naturally lost through discharge to the oceans evaporation evapotranspiration and groundwater recharge the only natural input to any surface water system is precipitation within its watershed the total quantity of water in that system at any given time is also dependent on many other factors these factors include storage capacity in lakes wetlands and artificial reservoirs the permeability of the soil beneath these storage bodies the runoff characteristics of the land in the watershed the timing of the precipitation and local evaporation rates all of these factors also affect the proportions of water loss humans 
often increase storage capacity by constructing reservoirs and decrease it by draining wetlands humans often increase runoff quantities and velocities by paving areas and channelizing the stream flow natural surface water can be augmented by importing surface water from another watershed through a canal or pipeline brazil is estimated to have the largest supply of fresh water in the world followed by russia and canada water from glaciers glacier runoff is considered to be surface water the himalayas which are often called the roof of the world contain some of the most extensive and rough high altitude areas on earth as well as the greatest area of glaciers and permafrost outside of the poles ten of asias largest rivers flow from there and more than a billion peoples livelihoods depend on them to complicate matters temperatures there are rising more rapidly than the global average in nepal the temperature has risen by 06 degrees celsius over the last decade whereas globally the earth has'
- '##ng magnitude from leftright the finite water content vadose zone flux method works with any monotonic water retention curveunsaturated hydraulic conductivity relations such as brooks and corey clapp and hornberger and van genuchtenmualem the method might work with hysteretic water retention relations these have not yet been tested the finite water content method lacks the effect of soil water diffusion this omission does not affect the accuracy of flux calculations using the method because the mean of the diffusive flux is small practically this means that the shape of the wetting front plays no role in driving the infiltration the method is thus far limited to 1d in practical applications the infiltration equation was extended to 2 and quasi3 dimensions more work remains in extending the entire method into more than one dimension the paper describing this method was selected by the early career hydrogeologists network of the international association of hydrogeologists to receive the coolest paper published in 2015 award in recognition of the potential impact of the publication on the future of hydrogeology richards equation infiltration hydrology soil moisture velocity equation'
- 'stress distribution in soil is a function of the type of soil the relative rigidity of the soil and the footing and the depth of foundation at level of contact between footing and soilthe estimation of vertical stresses at any point in a soil mass due to external loading is essential to the prediction of settlements of buildings bridges and pressure the solution to the problem of calculating the stresses in an elastic half space subjected to a vertical point load at the surface will be of value in estimating the stresses induced in a deposit of soil whose depth is large compared to the dimensions of that part of the surface that is loaded δ σ z − 3 p 2 π r 2 cos 3 θ displaystyle delta sigma zfrac 3p2pi r2cos 3theta δ σ r p 2 π r 2 − 3 cos θ sin 2 θ 1 − 2 μ 1 cos θ displaystyle delta sigma rfrac p2pi r23cos theta sin 2theta frac 12mu 1cos theta δ σ t p 2 π r 2 1 − 2 μ cos θ − 1 1 cos θ displaystyle delta sigma tfrac p2pi r212mu cos theta frac 11cos theta δ τ − 3 p 2 π r 2 cos 2 θ sin θ displaystyle delta tau frac 3p2pi r2cos 2theta sin theta cos θ z r displaystyle cos theta frac zr r r 2 z 2 displaystyle rsqrt r2z2 δ σ z − 3 p z 3 2 π r 5 − 3 p 2 π z 3 r 2 z 2 5 2 − 3 p 2 π z 2 1 r z 2 5 2 displaystyle delta sigma zfrac 3pz32pi r5frac 3p2pi frac z3r2z252frac 3p2pi z2left1leftfrac rzright2rightfrac 52 σ q 1 − 1 r z 2 1 3 2 displaystyle sigma q1frac 1frac rz2132'
|
| 3 | - '##ilise and suggest other technologies such as mobile phones or psion organisers as such feedback studies involve asynchronous communication between the participants and the researchers as the participants ’ data is recorded in their diary first and then passed on to the researchers once completefeedback studies are scalable that is a largescale sample can be used since it is mainly the participants themselves who are responsible for collecting and recording data in elicitation studies participants capture media as soon as the phenomenon occurs the media is usually in the form of a photograph but can be in other different forms as well and so the recording is generally quick and less effortful than feedback studies these media are then used as prompts and memory cues to elicit memories and discussion in interviews that take place much later as such elicitation studies involve synchronous communication between the participants and the researchers usually through interviewsin these later interviews the media and other memory cues such as what activities were done before and after the event can improve participants ’ episodic memory in particular photos were found to elicit more specific recall than all other media types there are two prominent tradeoffs between each type of study feedback studies involve answering questions more frequently and in situ therefore enabling more accurate recall but more effortful recording in contrast elicitation studies involve quickly capturing media in situ but answering questions much later therefore enabling less effortful recording but potentially inaccurate recall diary studies are most often used when observing behavior over time in a natural environment they can be beneficial when one is looking to find new qualitative and quantitative data advantages of diary studies are numerous they allow collecting longitudinal and temporal information reporting events and experiences in context and inthemoment participants to 
diary their behaviours thoughts and feelings inthemoment thereby minimising the potential for post rationalisation determining the antecedents correlations and consequences of daily experiences and behaviors there are some limitations of diary studies mainly due to their characteristics of reliance on memory and selfreport measures there is low control low participation and there is a risk of disturbing the action in feedback studies it can be troubling and disturbing to write everything down the validity of diary studies rests on the assumption that participants will accurately recall and record their experiences this is somewhat more easily enabled by the fact that diaries are completed media is captured in a natural environment and closer in realtime to any occurrences of the phenomenon of interest however there are multiple barriers to obtaining accurate data such as social desirability bias where participants may answer in a way that makes them appear more socially desirable this may be more prominent in longitudinal studies'
- 'indigenous media can reference film video music digital art and sound produced and created by and for indigenous people it refers to the use of communication tools pathways and outlets by indigenous peoples for their own political and cultural purposes indigenous media is the use of modern media techniques by indigenous peoples also called fourth world peoples indigenous media helps communities in their fight against cultural extinction economic and ecological decline and forced displacement most often in the field of indigenous media the creators of the media are also the consumers together with the neighboring communities sometimes the media is also received by institutions and film festivals located far away from the production location like the american indian film festival the production is usually locally based low budget and small scale but it can also be sponsored by different support groups and governments 34 – 35 the concept of indigenous media could be extended to first world alternative media like aids activist video the research of indigenous media and the international indigenous movement in the process of globalization develop in parallel in the second half of the 20th century united nations agencies including the united nations working group on indigenous populations wgip led the movement the united nations general assembly adopted a declaration aimed at protecting the rights of indigenous peoples in 2007 the theoretical development of indigenous media research first occurred in anthropology in 1980 it was accompanied by a critical research method that diverged from postcolonialism and poststructuralism the newer method attempted to minimize the power imbalance between the researcher and the researched leading up to this ethnographic films that gave photographic techniques to locals can be traced back as far as the navajo project in 1960 the project was the pioneering work of sol worth and john adair to which the origin of a new anthropological 
language and style of ethnography can be attributedhowever the indigenous media movement was not a significant phenomenon for another decade the widely recognized start of the new media movement was a collaboration between american anthropologist eric michaels and australia ’ s warlpiri aboriginal broadcasting this new type of collaborative anthropological project exemplified a change from a simple observation of the life of the indigenous people to a cultural record by the indigenous people themselves following the warlpiri project the brazilian kayapo village project of vincent carelli and terence turner and the indigenous series by maori producer barry barclay in new zealand have been important milestones in the development of indigenous media however it was faye ginsburg an american anthropologist who laid the theoretical foundation for the study of indigenous media her research in 1991 expounded the faustian dilemma between technology and tribal life and inspired later indigenous media researchers the important theories of recent indigenous media studies have highlighted the dynamic relationship between local indigenous communities and their countries and globalization lorna roth'
- 'results did not predict any prejudices towards black individuals this study used emic approaches of study by conducting interviews with the locals and etic approaches by giving participants generalized personality tests exonym and endonymother explorations of the differences between reality and humans models of it blind men and an elephant emic and etic units internalism and externalism map – territory relation creswell j w 1998 qualitative enquiry and research design choosing among five traditions london uk sage dundes alan 1962 from etic to emic units in the structural study of folktales journal of american folklore 75 296 95 – 105 doi102307538171 jstor i223629 goodenough ward 1970 describing a culture description and comparison in cultural anthropology cambridge uk cambridge university press pp 104 – 119 isbn 9780202308616 harris marvin 1976 history and significance of the emicetic distinction annual review of anthropology 5 329 – 350 doi101146annurevan05100176001553 harris marvin 1980 chapter two the epistemology of cultural materialism cultural materialism the struggle for a science of culture new york random house pp 29 – 45 isbn 9780759101340 headland thomas pike kenneth harris marvin eds 1990 emics and etics the insideroutsider debate sage jahoda g 1977 y j poortinga ed in pursuit of the emicetic distinction can we ever capture it basic problems in crosscultural psychology pp 55 – 63 jardine nick 2004 etics and emics not to mention anemics and emetics in the history of the sciences history of science 42 3 261 – 278 bibcode2004hissc42261j doi101177007327530404200301 s2cid 141081973 jingfeng xia 2013 an anthropological emicetic perspective on open access practices academic search premier kitayama shinobu cohen dov 2007 handbook of cultural psychology new york guilford press kottak conrad 2006 mirror for humanity new york mcgraw hill isbn 9780078034909 nattiez jeanjacques 1987 musicologie generale et semiologue music and discourse toward a semiology of 
music translated by carolyn abbate isbn 9780691027142 pike kenneth lee ed 1967 language in relation to a unified theory of structure of human behavior 2nd ed the hague netherlands mouton'
|
| 34 | - 'democratic education is a type of formal education that is organized democratically so that students can manage their own learning and participate in the governance of their school democratic education is often specifically emancipatory with the students voices being equal to the teachersthe history of democratic education spans from at least the 17th century while it is associated with a number of individuals there has been no central figure establishment or nation that advocated democratic education in 1693 john locke published some thoughts concerning education in describing the teaching of children he declares none of the things they are to learn should ever be made a burthen to them or imposd on them as a task whatever is so proposd presently becomes irksome the mind takes an aversion to it though before it were a thing of delight or indifferency let a child but be orderd to whip his top at a certain time every day whether he has or has not a mind to it let this be but requird of him as a duty wherein he must spend so many hours morning and afternoon and see whether he will not soon be weary of any play at this rate jeanjacques rousseaus book of advice on education emile was first published in 1762 emile the imaginary pupil he uses for illustration was only to learn what he could appreciate as useful he was to enjoy his lessons and learn to rely on his own judgement and experience the tutor must not lay down precepts he must let them be discovered wrote rousseau and urged him not make emile learn science but let him discover it he also said that we should not substitute books for personal experience because this does not teach us to reason it teaches us to use other peoples reasoning it teaches us to believe a great deal but never to know anything while locke and rousseau were concerned only with the education of the children of the wealthy in the 19th century leo tolstoy set up a school for peasant children this was on his own estate at yasnaya 
polyana russia in the late 19th century he tells us that the school evolved freely from principles introduced by teachers and pupils that in spite of the preponderating influence of the teacher the pupil had always had the right not to come to school or having come not to listen to the teacher and that the teacher had the right not to admit a pupil and was able to use all the influence he could muster to win over the community where the children were always in the majority dom sierot in 1912 janusz korczak founded dom sierot the jewish orphanage in warsaw which was run on democratic lines in 1940 dom si'
- 'is done through six points of reference learners studentsteachers in dialogue approach their acts of knowing as grounded in individual experience and circumstance learners approach the historical and cultural world as a transformable reality shaped by human ideological representations of reality learners make connections between their own conditions and the conditions produced through the making of reality learners consider the ways that they can shape this reality through their methods of knowing this new reality is collective shared and shifting learners develop literacy skills that put their ideas into print thus giving potency to the act of knowing learners identify the myths in the dominant discourse and work to destabilize these myths ending the cycle of oppression the montessori method developed by maria montessori is an example of problemposing education in an early childhood model ira shor a professor of composition and rhetoric at cuny who has worked closely with freire also advocates a problem posing model in his use of critical pedagogy he has published on the use of contract grading the physical setup of the classroom and the political aspects of student and teacher rolesjames d kirylo in his book paulo freire the man from recife reiterated freires thought and stated that a problemposing education is one where human beings are viewed as conscious beings who are unfinished yet in process of becoming other advocates of problemposing critical pedagogy include henry giroux peter mclaren and bell hooks inquirybased learning problembased learning unschooling'
- 'ambiguity tolerance – intolerance is a psychological construct that describes the relationship that individuals have with ambiguous stimuli or events individuals view these stimuli in a neutral and open way or as a threat ambiguity tolerance – intolerance is a construct that was first introduced in 1949 through the work of else frenkelbrunswik while researching ethnocentrism in children and was perpetuated by her research of ambiguity intolerance in connection to authoritarian personality it serves to define and measure how well an individual responds when presented with an event that results in ambiguous stimuli or situations in her study she tested the notion that children who are ethnically prejudiced also tend to reject ambiguity more so than their peers she studied children who ranked high and low on prejudice in a story recall test and then studied their responses to an ambiguous disc shaped figure the children who scored high in prejudice were expected to take longer to give a response to the shape less likely to make changes on their response and less likely to change their perspectives a study by kenny and ginsberg 1958 retesting frenkelbrunswiks original connection of ambiguity intolerance to ethnocentrism and authoritarian personality found that the results were unreplicable however it was discussed that this may be due to the fact that at the time the study was done incorrect methodology was used and that there lacked a concrete definition as to what the construct was most of the research on this subject was completed in the two decades after the publication of the authoritarian personality however the construct is still studied in psychological research today budner gives three examples as to what could be considered ambiguous situations a situation with no familiar cues a situation in which there are many cues to be taken into consideration and a situation in which cues suggest the existence of different structures to be adhered to there have been 
many attempts to conceptualize the construct of ambiguity tolerance – intolerance as to give researchers a more standard concept to work with many of these conceptualizations are based on the work of frenkelbrunswik budner 1962 defines the construct as the following intolerance of ambiguity may be defined as the tendency to perceive ie interpret ambiguous situations as sources of threat tolerance of ambiguity as the tendency to perceive ambiguous situations as desirableadditionally bochner 1965 categorized attributes given by frenkelbrunswiks theory of individuals who are intolerant to ambiguity the nine primary characteristics describe intolerance of ambiguity and are as follows need for categorization need for certainty inability to allow good and bad traits to exist in the same person'
|
| 31 | - 'in philosophy transcendence is the basic ground concept from the words literal meaning from latin of climbing or going beyond albeit with varying connotations in its different historical and cultural stages it includes philosophies systems and approaches that describe the fundamental structures of being not as an ontology theory of being but as the framework of emergence and validation of knowledge of being these definitions are generally grounded in reason and empirical observation and seek to provide a framework for understanding the world that is not reliant on religious beliefs or supernatural forces transcendental is a word derived from the scholastic designating the extracategorical attributes of beings in religion transcendence refers to the aspect of gods nature and power which is wholly independent of the material universe beyond all physical laws this is contrasted with immanence where a god is said to be fully present in the physical world and thus accessible to creatures in various ways in religious experience transcendence is a state of being that has overcome the limitations of physical existence and by some definitions has also become independent of it this is typically manifested in prayer seance meditation psychedelics and paranormal visions it is affirmed in various religious traditions concept of the divine which contrasts with the notion of a god or the absolute that exists exclusively in the physical order immanentism or indistinguishable from it pantheism transcendence can be attributed to the divine not only in its being but also in its knowledge thus god may transcend both the universe and knowledge is beyond the grasp of the human mind although transcendence is defined as the opposite of immanence the two are not necessarily mutually exclusive some theologians and metaphysicians of various religious traditions affirm that a god is both within and beyond the universe panentheism in it but not of it simultaneously pervading it and 
surpassing it the ethics of baruch spinoza used the expression transcendental terms in latin termini transcendentales to indicate concepts like being thing something which are so general not to be included in the definitions of species genus and category in modern philosophy immanuel kant introduced a new term transcendental thus instituting a new third meaning in his theory of knowledge this concept is concerned with the condition of possibility of knowledge itself he also opposed the term transcendental to the term transcendent the latter meaning that which goes beyond transcends any possible knowledge of a human being for him transcendental meant knowledge about our cognitive faculty with regard to how objects are possible a priori i call all knowledge transcendental if it is occupied not with objects'
- 'atoms in molecules — collision theory — ligand field theory successor to crystal field theory — variational transitionstate theory — benson group increment theory — specific ion interaction theory climatology climate change theory general study of climate changes and anthropogenic climate change acc global warming agw theories due to human activity computer science automata theory — queueing theory cosmology big bang theory — cosmic inflation — loop quantum gravity — superstring theory — supergravity — supersymmetric theory — multiverse theory — holographic principle — quantum gravity — mtheory economics macroeconomic theory — microeconomic theory — law of supply and demand education constructivist theory — critical pedagogy theory — education theory — multiple intelligence theory — progressive education theory engineering circuit theory — control theory — signal theory — systems theory — information theory film film theory geology plate tectonics humanities critical theory jurisprudence or legal theory natural law — legal positivism — legal realism — critical legal studies law see jurisprudence also case theory linguistics xbar theory — government and binding — principles and parameters — universal grammar literature literary theory mathematics approximation theory — arakelov theory — asymptotic theory — bifurcation theory — catastrophe theory — category theory — chaos theory — choquet theory — coding theory — combinatorial game theory — computability theory — computational complexity theory — deformation theory — dimension theory — ergodic theory — field theory — galois theory — game theory — gauge theory — graph theory — group theory — hodge theory — homology theory — homotopy theory — ideal theory — intersection theory — invariant theory — iwasawa theory — ktheory — kktheory — knot theory — ltheory — lie theory — littlewood – paley theory — matrix theory — measure theory — model theory — module theory — morse theory — nevanlinna theory — number theory — 
obstruction theory — operator theory — order theory — pcf theory — perturbation theory — potential theory — probability theory — ramsey theory — rational choice theory — representation theory — ring theory — set theory — shape theory — small cancellation theory — spectral theory — stability theory — stable theory — sturm – liouville theory — surgery theory — twistor theory — yang – mills theory music music theory philosophy proof theory — speculative reason — theory of truth — type theory — value theory — virtue theory physics acoustic theory — antenna theory — atomic theory — bcs theory — conformal field theory — dirac hole theory — dynamo theory — landau theory — mtheory — perturbation theory — theory'
- '##ism turned this world on its head he argues for the nominalists all real being was individual or particular and universals were thus mere fictionsanother scholar victor bruno follows the same line according to bruno nominalism is one of the first signs of rupture in the medieval system the dismembering of the particulars the dangerous attribution to individuals to a status of totalization of possibilities in themselves all this will unfold in an existential fissure that is both objective and material the result of this fissure will be the essays to establish the nation state indian philosophy encompasses various realist and nominalist traditions certain orthodox hindu schools defend the realist position notably purva mimamsa nyaya and vaisheshika maintaining that the referent of the word is both the individual object perceived by the subject of knowledge and the universal class to which the thing belongs according to indian realism both the individual and the universal exist objectively with the second underlying the former buddhists take the nominalist position especially those of the sautrantika and yogacara schools they were of the opinion that words have as referent not true objects but only concepts produced in the intellect these concepts are not real since they do not have efficient existence that is causal powers words as linguistic conventions are useful to thought and discourse but even so it should not be accepted that words apprehend reality as it is dignaga formulated a nominalist theory of meaning called apohavada or theory of exclusions the theory seeks to explain how it is possible for words to refer to classes of objects even if no such class has an objective existence dignagas thesis is that classes do not refer to positive qualities that their members share in common on the contrary universal classes are exclusions apoha as such the cow class for example is composed of all exclusions common to individual cows they are all nonhorse 
nonelephant etc nominalism arose in reaction to the problem of universals specifically accounting for the fact that some things are of the same type for example fluffy and kitzler are both cats or the fact that certain properties are repeatable such as the grass the shirt and kermit the frog are green one wants to know by virtue of what are fluffy and kitzler both cats and what makes the grass the shirt and kermit green the platonist answer is that all the green things are green in virtue of the existence of a universal a single abstract thing that in this case is a part of all the green things with respect to the color of the grass the'
|
| 41 | - 'along streams and rivers through parks and across commons another type is the alley normally providing access to the rear of properties or connecting builtup roads not easily reached by vehicles towpaths are another kind of urban footpath but they are often shared with cyclists a typical footpath in a park is found along the seawall in stanley park vancouver british columbia canada this is a segregated path with one lane for skaters and cyclists and the other for pedestriansin the us and canada where urban sprawl has begun to strike even the most rural communities developers and local leaders are currently striving to make their communities more conducive to nonmotorized transportation through the use of less traditional paths the robert wood johnson foundation has established the active living by design program to improve the livability of communities in part through developing trails the upper valley trails alliance has done similar work on traditional trails while the somerville community path and related paths are examples of urban initiatives in st johns newfoundland canada the grand concourse is an integrated walkway system that has over 160 kilometers 99 mi of footpaths which link every major park river pond and green space in six municipalities in london england there are several longdistance walking routes which combine footpaths and roads to link green spaces these include the capital ring london outer orbital path and the jubilee walkway the use of which have been endorsed by transport for london an alley is a narrow usually paved pedestrian path often between the walls of buildings in towns and cities this type is usually short and straight and on steep ground can consist partially or entirely of steps in older cities and towns in europe alleys are often what is left of a medieval street network or a right of way or ancient footpath similar paths also exist in some older north american towns and cities in some older urban development in north 
america lanes at the rear of houses to allow for deliveries and garbage collection are called alleys alleys may be paved or unpaved and a blind alley is a culdesac some alleys are roofed because they are within buildings such as the traboules of lyon or when they are a pedestrian passage through railway embankments in britain the latter follow the line of rightsof way that existed before the railway was built because of topography steps stairs are the predominant form of alley in hilly cities and towns this includes pittsburgh see steps of pittsburgh cincinnati see steps of cincinnati portland oregon seattle and san francisco in the united states as well as hong kong and rome footpaths and other rights of way have been combined and new paths created so as to produce longdistance walking routes in a number of countries these'
- 'the minot area growth through investment and cooperation fund or magic fund is a growth fund financed through a one percent sales tax in the city of minot north dakota the fund was approved by voters on may 1 1990 and the money is used for economic development capital improvements and property tax relief as of 2012 the magic fund has invested over 33 million into 200 projects in 44 communities forty percent of the one percent tax is earmarked for economic development and is used to help finance relocations startups and expansions in the minot area minot area development corporation the lead economic development agency for the city of minot targets primary sector businesses such as those in valueadded agriculture knowledgebased business and the energy industry the availability of magic funds makes minot more appealing to businesses the magic fund is very progressive in that it was one of the first growth funds in the state of north dakota and the first one to be used regionally when the magic fund was originally established it was designed to operate with minimal guidelines to allow for the high level of flexibility necessary when assembling financing and incentive packages to benefit potential businesses and the community of minot this nonrestrictive nature of the fund has been a source of some criticism though local leadership acknowledges that throughout the life of the magic fund it has been a challenge maintain openness with the public about specific spending while at the same time respecting the confidentiality of business information leaders are striving however to keep communications clearin 2005 new magic fund guidelines were set in place to clearly define “ full time ” and to require a breakdown — not an average of — salaries of proposed positions more recently in october 2008 the guidelines of the magic fund underwent public review and area residents were encouraged to offer suggestions suggestions included making magic funds available for private 
sector projects such as housing recreation and childcare or using the money for infrastructure purposes such as streets and sewer in order to encourage more housing projects after consideration the guidelines review committee decided to continue using magic funding for businessrelated projects the initial creation of the magic fund in may 1990 established it through 2006 and come june 2004 city voters approved an extension of the 1 city sales tax through the year 2014 the magic fund has a rich history of aiding economic development in the minot region and study after study shows the local economy has benefited drastically from its availability historically magic funds have been used in three main areas of primary sector economic development knowledgebased employment agriculture and energy five of the ten largest employers conducting business in minot today were recruited using magic funds choice hotels international was one of the first businesses to be recruited using'
- '##tes to solve problems everything promised by compact cities can be delivered'
|
| 16 | - 'physiographic regions are a means of defining earths landforms into distinct mutually exclusive areas independent of political boundaries it is based upon the classic threetiered approach by nevin m fenneman in 1916 that separates landforms into physiographic divisions physiographic provinces and physiographic sectionsthe classification mechanism has become a popular geographical tool in the united states indicated by the publication of a usgs shapefile that maps the regions of the original work and the national park servicess use of the terminology to describe the regions in which its parks are locatedoriginally used in north america the model became the basis for similar classifications of other continents during the early 1900s the study of regionalscale geomorphology was termed physiography physiography later was considered to be a portmanteau of physical and geography and therefore synonymous with physical geography and the concept became embroiled in controversy surrounding the appropriate concerns of that discipline some geomorphologists held to a geological basis for physiography and emphasized a concept of physiographic regions while a conflicting trend among geographers was to equate physiography with pure morphology separated from its geological heritage in the period following world war ii the emergence of process climatic and quantitative studies led to a preference by many earth scientists for the term geomorphology in order to suggest an analytical approach to landscapes rather than a descriptive one in current usage physiography still lends itself to confusion as to which meaning is meant the more specialized geomorphological definition or the more encompassing physical geography definition for the purposes of physiographic mapping landforms are classified according to both their geologic structures and histories distinctions based on geologic age also correspond to physiographic distinctions where the forms are so recent as to be in 
their first erosion cycle as is generally the case with sheets of glacial drift generally forms which result from similar histories are characterized by certain similar features and differences in history result in corresponding differences of form usually resulting in distinctive features which are obvious to the casual observer but this is not always the case a maturely dissected plateau may grade without a break from rugged mountains on the one hand to mildly rolling farm lands on the other so also forms which are not classified together may be superficially similar for example a young coastal plain and a peneplain in a large number of cases the boundary lines are also geologic lines due to differences in the nature or structure of the underlying rocks the history of physiography itself is at best a complicated effort much of'
- '##ythagoras contrary to popular belief most educated people in the middle ages did not believe the earth was flat this misconception is often called the myth of the flat earth as evidenced by thinkers such as thomas aquinas the european belief in a spherical earth was widespread by this point in time prior to circumnavigation of the planet and the introduction of space flight belief in a spherical earth was based on observations of the secondary effects of the earths shape and parallels drawn with the shape of other planets humans have commonly traveled for business pleasure discovery and adventure all made easier in recent human history as a result of technologies like cars trains planes and ships land navigation is an aspect of travel and refers to progressing through unfamiliar terrain using navigational tools like maps with references to terrain a compass or satellite navigation navigation on land is often facilitated by reference to landmarks – enduring and recognizable natural or artificial features that stand out from their nearby environment and are often visible from long distances natural landmarks can be characteristic features such as mountains or plateaus with examples including table mountain in south africa mount ararat in turkey the grand canyon in the united states uluru in'
- '##width extra versatility compared to the strahler number however unlike the strahler number the pathwidth is defined only for the whole graph and not separately for each node in the graph main stem of a river typically found by following the branch with the highest strahler number pfafstetter coding system'
|
+| 24 | - 'glenstone is a private contemporary art museum in potomac maryland founded in 2006 by american billionaire mitchell rales and his wife emily wei rales the museums exhibitions are drawn from a collection of about 1300 works from postworld war ii artists around the world it is the largest private contemporary art museum in the united states holding more than 46 billion in net assets and is noted for its setting in a broad natural landscape glenstones original building was designed by charles gwathmey with it being expanded several times on its 230acre 93 ha campus its most significant expansion was finished in the late 2010s with outdoor sculpture installations landscaping a new complex designed by thomas phifer and an environmental center being added glenstone has been compared to other private museums such as the frick collection and the phillips collection the museum is free to the public with it seeing over 100000 visitors in 2022 in 1986 billionaire american businessman mitchell rales purchased the property in potomac maryland to build a home starting in 1990 rales began collecting art for that home following a neardeath accident on a helicopter trip in russia rales decided to take on a philanthropic project which became the establishment of a private contemporary art museum built on land that was formerly a fox hunting club glenstone is named for the nearby glen road and because of stone quarries located in the vicinity located 15 miles 24 km from downtown washington dc the museums initial 30000squarefoot 2800 m2 modernist limestone gallery opened in 2006 and admitted visitors two days a week in its first seven years the museum admitted only 10000 visitorsthough several smaller expansions took place in the years after the museums opening the largest expansion was announced in 2013 and was completed in 2018 opening to the public on october 4 2018 with a cost of approximately 219 million the expansion increased the size of the museums gallery space by a factor of five increasing the propertys size by 130 acres 53 ha and included substantial landscaping changes with the expansion glenstone became the largest private contemporary art museum in the united states in 2019 the expansion was named as a museum opening of the year by apollowith the expansion glenstone opened to the public with free tickets available online in the year following the expansion glenstone admitted nearly 100000 visitorsin 2015 glenstone was one of several private museums questioned by the us senate finance committee over its nonprofit tax status after reporting from the new york times had questioned the validity of nonprofit tax status for institutions like glenstone which at the time welcomed very few visitors the committee sought to investigate whether highvalue individuals and families were using private museums as a form of tax shelter committee chairman senator orrin hatch said'
- 'in consistently producing organic litter is believed to be more important in reducing erosion than its direct speedreducing effects on raindrops nevertheless gardens are less effective than natural forests in erosion reduction harvesting of rice — the dominant staple of indonesia — influences the use of pekarangans in some ways production in the gardens decreases during riceharvesting season but peaks during the rest of the year lowerincome villagers benefit from the consistent productivity of starch crops in the gardens especially in a period of food shortage prerice harvest or after a failed rice harvest by droughtsettlement dynamics affect pekarangans in various ways expansion of settlements to new lands caused by population growth is the cause of the wide presence of food crops in newly made pekarangans people who resettled via the indonesian transmigration program might support plant diversity in the gardens in the places they migrate to plant species brought by internal migrants need to adapt well to the local environmentcommercialization fragmentation and urbanization are major hazards to pekarangans plant diversity these change the organic cycles within the gardens threatening their ecological sustainability commercialization requires a systemic change of crop planting to optimize and produce more crops a pekarangans owner must specialize in its crops making a small number of crops dominate the garden some owners turn them into monoculture gardens fragmentation stems from the traditional system of inheritance consequences from the reduction of plant diversity include the loss of canopy structures and organic litter resulting in less protection of the gardens soil loss of pestcontrol agents increasing the use of pesticides loss of production stability loss of nutrients diversity and the disappearance of yieldssharing culture despite urbanizations negative effect in reducing their plant diversity it increases that of the ornamental plantsa case study of home gardens in napu valley central sulawesi shows that the decrease in soil protection is caused by insufficient soil fertility management regular weeding and waste burning dumping waste in garbage pits instead of using it for compost and spread of inorganic waste the decrease of soil fertility worsens the decrease of crop diversity in the gardens products from pekarangans have multiple uses for example a coconut tree can provide food oil fuel and building materials and also be used in rituals and ceremonies the gardens plants are known for their products nutritional benefits and diversity while rice is low in vitamins a and c products from the gardens offer an abundance of them pekarangans with more perennial crops tend to create more carbohydrates and proteins and those with more annual plants tend to create more portions of vitamin a pekarangans also act as a source of fire'
- 'the german fountain turkish alman cesmesi german deutscher brunnen is a gazebo styled fountain in the northern end of old hippodrome sultanahmet square istanbul turkey and across from the mausoleum of sultan ahmed i it was constructed to commemorate the second anniversary of german emperor wilhelm iis visit to istanbul in 1898 it was built in germany then transported piece by piece and assembled in its current site in 1900 the neobyzantine style fountains octagonal dome has eight marble columns and domes interior is covered with golden mosaics the idea of great palace of constantinoples empire lodge kathisma being on the site of the german fountains conflicts with the view that carceres gates of hippodrome was found on the site of the fountain however the hypothesis of carceres gates being on the site enforces the view that quadriga of lysippos was used to stand on the site of the german fountainduring his reign as german emperor and king of prussia wilhelm ii visited several european and eastern countries his trip started in istanbul ottoman empire on 18 october 1898 during the reign of abdulhamid ii according to peter hopkirk the visit to ottoman empire was an ego trip and also had longterm motivations the emperors primary motivation for visiting was to construct the baghdad railway which would run from berlin to the persian gulf and would further connect to british india through persia this railway could provide a short and quick route from europe to asia and could carry german exports troops and artillery at the time the ottoman empire could not afford such a railway and abdulhamid ii was grateful to wilhelms offer but was suspicious over the german motives abdulhamid iis secret service believed that german archeologists in the emperors retinue were in fact geologists with designs on the oil wealth of the ottoman empire later the secret service uncovered a german report which noted that the oilfields in mosul northern mesopotamia were richer than that in the caucuses in his first visit wilhelm secured the sale of germanmade rifles to ottoman army and in his second visit he secured a promise for german companies to construct the istanbulbaghdad railway the german government constructed the german fountain for wilhelm ii and empress augustas 1898 istanbul visit according to afife batur the fountains plans were drawn by architect spitta and constructed by architect schoele also german architect carlitzik and italian architect joseph anthony worked on this projectaccording to the ottoman inscription the fountains construction started in the hejira 1319 1898 – 1899 although the inauguration of the fountain was planned to take place on 1'
|
+| 10 | - 'inhibits the growth of some harmful gramnegative and grampositive bacteria along with yeasts molds and protozoa l reuteri can secrete sufficient amounts of reuterin to inhibit the growth of harmful gut organisms without killing beneficial gut bacteria allowing l reuteri to remove gut invaders while keeping normal gut flora intactreuterin is watersoluble effective in a wide range of ph resistant to proteolytic and lipolytic enzymes and has been studied as a food preservative or auxiliary therapeutic agentreuterin as an extracted compound has been shown capable of killing escherichia coli o157h7 and listeria monocytogenes with the addition of lactic acid increasing its efficacy it has also been demonstrated to kill escherichia coli o157h7 when produced by l reuteri'
- 'thus can affect biological function of the fsl lipids in fsl kode constructs include diacyldiakyl eg dope sterols eg cholesterol ceramides one of the important functions of an fsl construct is that it can optimise the presentation of antigens both on cell surfaces and solidphase membranes this optimisation is achieved primarily by the spacer and secondarily by the lipid tail in a typical immunoassay the antigen is deposited directly onto the microplate surface and binds to the surface either in a random fashion or in a preferred orientation depending on the residues present on the surface of this antigen usually this deposition process is uncontrolled in contrast the fsl kode construct bound to a microplate presents the antigen away from the surface in an orientation with a high level of exposure to the environment furthermore typical immunoassays use recombinant peptides rather than discrete peptide antigens as the recombinant peptide is many times bigger than the epitope of interest a lot of undesired and unwanted peptide sequences are also represented on the microplate these additional sequences may include unwanted microbial related sequences as determined by a blast analysis that can cause issues of low level crossreactivity often the mechanism by which an immunoassay is able to overcome this low level activity is to dilute the serum so that the low level microbial reactive antibodies are not seen and only highlevel specific antibodies result in an interpretable result in contrast fsl kode constructs usually use specifically selected peptide fragments up to 40 amino acids thereby overcoming crossreactivity with microbial sequences and allowing for the use of undiluted serum which increases sensitivity the f component can be further enhanced by presentation of it in multimeric formats and with specific spacing the four types of multimeric format include linear repeating units linear repeating units with spacing clusters and branching fig 4 the fsl kode construct by nature of its composition in possessing both hydrophobic and hydrophilic regions are amphiphilic or amphipathic this characteristic determines the way in which the construct will interact with surfaces when present in a solution they may form simple micelles or adopt more complex bilayer structures with two simplistic examples shown in fig 5a more complex structures are expected the actual nature of fsl micelles has not been determined however based on normal structural function of micelles it is expected that it will be determined in part by the combination of functional group spacer and lipid together'
- '##n1 il1 etc which do not have a signal sequence they do not use the classical ergolgi pathway these are secreted through various nonclassical pathways at least four nonclassical unconventional protein secretion pathways have been described they include direct protein translocation across the plasma membrane likely through membrane transport proteins blebbing lysosomal secretion release via exosomes derived from multivesicular bodiesin addition proteins can be released from cells by mechanical or physiological wounding and through nonlethal transient oncotic pores in the plasma membrane induced by washing cells with serumfree media or buffers many human cell types have the ability to be secretory cells they have a welldeveloped endoplasmic reticulum and golgi apparatus to fulfill this function tissues that produce secretions include the gastrointestinal tract which secretes digestive enzymes and gastric acid the lungs which secrete surfactants and sebaceous glands which secrete sebum to lubricate the skin and hair meibomian glands in the eyelid secrete meibum to lubricate and protect the eye secretion is not unique to eukaryotes – it is also present in bacteria and archaea as well atp binding cassette abc type transporters are common to the three domains of life some secreted proteins are translocated across the cytoplasmic membrane by the secyeg translocon one of two translocation systems which requires the presence of an nterminal signal peptide on the secreted protein others are translocated across the cytoplasmic membrane by the twinarginine translocation pathway tat gramnegative bacteria have two membranes thus making secretion topologically more complex there are at least six specialized secretion systems in gramnegative bacteria many secreted proteins are particularly important in bacterial pathogenesis type i secretion is a chaperone dependent secretion system employing the hly and tol gene clusters the process begins as a leader sequence on the protein to be secreted is recognized by hlya and binds hlyb on the membrane this signal sequence is extremely specific for the abc transporter the hlyab complex stimulates hlyd which begins to uncoil and reaches the outer membrane where tolc recognizes a terminal molecule or signal on hlyd hlyd recruits tolc to the inner membrane and hlya is excreted outside of the outer membrane via a longtunnel protein channel type i secretion system transports various molecules from ions drugs to'
|
+| 1 | - 'first to form followed by the oblique shock shock diamonds are most commonly associated with jet and rocket propulsion but they can form in other systems shock diamonds can be seen during gas pipeline blowdowns because the gas is under high pressure and exits the blowdown valve at extreme speeds when artillery pieces are fired gas exits the cannon muzzle at supersonic speeds and produces a series of shock diamonds the diamonds cause a bright muzzle flash which can expose the location of gun emplacements to the enemy it was found that when the ratio between the flow pressure and atmospheric pressure is close which can be achieved with a flash suppressor the shock diamonds were greatly minimized adding a muzzle brake to the end of the muzzle balances the pressures and prevents shock diamonds 41 some radio jets powerful jets of plasma that emanate from quasars and radio galaxies are observed to have regularlyspaced knots of enhanced radio emissions 68 the jets travel at supersonic speed through a thin atmosphere of gas in space 51 so it is hypothesized that these knots are shock diamonds index of aviation articles plume hydrodynamics rocket engine nozzle'
- '##al change in location of the marker can be calculated by collecting results from a few markers the degree to which the model is flexibly yielding due to the air load can be calculated there are many different kinds of wind tunnels they are typically classified by the range of speeds that are achieved in the test section as follows lowspeed wind tunnel high speed wind tunnel subsonic and transonic wind tunnel supersonic wind tunnel hypersonic wind tunnel high enthalpy wind tunnelwind tunnels are also classified by the orientation of air flow in the test section with respect to gravity typically they are oriented horizontally as happens during level flight a different class of wind tunnels are oriented vertically so that gravity can be balanced by drag instead of lift and these have become a popular form of recreation for simulating skydiving vertical wind tunnelwind tunnels are also classified based on their main use for those used with land vehicles such as cars and trucks the type of floor aerodynamics is also important these vary from stationary floors through to full moving floors with smaller moving floors and some attempt at boundary level control also being important the main subcategories in the aeronautical wind tunnels are high reynolds number tunnels reynolds number is one of the governing similarity parameters for the simulation of flow in a wind tunnel for mach number less than 03 it is the primary parameter that governs the flow characteristics there are three main ways to simulate high reynolds number since it is not practical to obtain full scale reynolds number by use of a full scale vehicle pressurised tunnels here test gases are pressurised to increase the reynolds number heavy gas tunnels heavier gases like freon and r134a are used as test gases the transonic dynamics tunnel at nasa langley is an example of such a tunnel cryogenic tunnels here test gas is cooled down to increase the reynolds number the european transonic wind tunnel uses this technique highaltitude tunnels these are designed to test the effects of shock waves against various aircraft shapes in near vacuum in 1952 the university of california constructed the first two highaltitude wind tunnels one for testing objects at 50 to 70 miles above the earth and the second for tests at 80 to 200 miles above the earth vstol tunnels vstol tunnels require large cross section area but only small velocities since power varies with the cube of velocity the power required for the operation is also less an example of a vstol tunnel is the nasa langley 14 by 22 ft 43 by 67 m tunnel spin tunnels aircraft have a tendency to spin when they stall these tunnels are used to study that phenomenon automotive wind tunnels fall into two categories'
- 'high speed requires at least a 2dimensional treatment when all 3 spatial dimensions and perhaps the time dimension as well are important we often resort to computerized solutions of the governing equations the mach number m is defined as the ratio of the speed of an object or of a flow to the speed of sound for instance in air at room temperature the speed of sound is about 340 ms 1100 fts m can range from 0 to ∞ but this broad range falls naturally into several flow regimes these regimes are subsonic transonic supersonic hypersonic and hypervelocity flow the figure below illustrates the mach number spectrum of these flow regimes these flow regimes are not chosen arbitrarily but rather arise naturally from the strong mathematical background that underlies compressible flow see the cited reference textbooks at very slow flow speeds the speed of sound is so much faster that it is mathematically ignored and the mach number is irrelevant once the speed of the flow approaches the speed of sound however the mach number becomes allimportant and shock waves begin to appear thus the transonic regime is described by a different and much more complex mathematical treatment in the supersonic regime the flow is dominated by wave motion at oblique angles similar to the mach angle above about mach 5 these wave angles grow so small that a different mathematical approach is required defining the hypersonic speed regime finally at speeds comparable to that of planetary atmospheric entry from orbit in the range of several kms the speed of sound is now comparatively so slow that it is once again mathematically ignored in the hypervelocity regime as an object accelerates from subsonic toward supersonic speed in a gas different types of wave phenomena occur to illustrate these changes the next figure shows a stationary point m 0 that emits symmetric sound waves the speed of sound is the same in all directions in a uniform fluid so these waves are simply concentric spheres as the soundgenerating point begins to accelerate the sound waves bunch up in the direction of motion and stretch out in the opposite direction when the point reaches sonic speed m 1 it travels at the same speed as the sound waves it creates therefore an infinite number of these sound waves pile up ahead of the point forming a shock wave upon achieving supersonic flow the particle is moving so fast that it continuously leaves its sound waves behind when this occurs the locus of these waves trailing behind the point creates an angle known as the mach wave angle or mach angle μ μ arcsin a v arcsin 1 m displaystyle mu arcsin leftfrac avrightarcsin leftfrac 1mright where a displaystyle a'
|
+| 32 | - 'for producing precision lengths by stacking components which are joined temporarily in a similar fashion'
- 'this step does the preforming of green raw bodies of the mould inserts sintering by sintering the preformed green bodies are compressed and hardened in order to do this the green body is heated to a temperature below the melting temperature the sintering process consists of three phases first the volume and the porosity is reduced and secondly the open porosity is reduced in the third phase sinter necks are formed which enhance the materials strength premachining the step of premachining creates the main form of the optical insert it typically contains four process steps these steps are grinding the innerouter diameter grinding the parallelend faces of the insert grindinglapping of the fitting of insert and finally the nearnetshape grinding of the cavity normally the cavity is only premachined to a flat or a bestfit sphere grinding grinding or finishmachining creates the final form and the surface finish of the cavity in the mould insert usually the finish is carried out by grinding a subsequent polishing step is optionally required finish grinding can require several changes of the grinding tool and several truing steps of the tool finishmachining of the mould is an iterative process as long as the machined mould shows deviations from the nominal contour in the measurement step after grinding it has to be reground there is no welldefined border between premachining and fine grinding throughout the grinding process of the cavity the grain size of the tool the feed rate and the cutting depth are reduced whereas machining time increases convex surfaces are easier to manufacture the necessary steps of workpiece preparation are the mould alignment and the mould referencing grinding tool alignment grinding tool referencing and grinding tool truing also have to be done after that polishing can be necessary to remove the anisotropic structure which remains after grinding it can be performed manually or by a cncmachine coating coating is the process step in which a layer is applied on the cavity surface of the optical insert which protects the mould against wear corrosion friction sticking of glass and chemical reactions with glass for coating the surface of moulds by physical vapour deposition pvd metals are evaporated in combination with processgasbased chemicals on the tool surface highly adherent thin coatings are synthesized materials for coatings on optical inserts are platinumbased pvd mostly iridiumalloyed standard diamondlike carbon not yet commercially available sic cvd on sicceramics not yet commercially available have to be postmachined or tialn not yet commercially available to achieve a homogeneous layer thickness the'
- 'gag bennet 1974 electricity and modern physics 2nd ed edward arnold uk isbn 0713124598 is grant wr phillips manchester physics 2008 electromagnetism 2nd ed john wiley sons isbn 9780471927129 dj griffiths 2007 introduction to electrodynamics 3rd ed pearson education dorling kindersley isbn 9788177582932 lh greenberg 1978 physics with modern applications holtsaunders international wb saunders and co isbn 0721642470 jb marion wf hornyak 1984 principles of physics holtsaunders international saunders college isbn 4833701952 a beiser 1987 concepts of modern physics 4th ed mcgrawhill international isbn 0071001441 hd young ra freedman 2008 university physics – with modern physics 12th ed addisonwesley pearson international isbn 9780321501301'
|
+| 26 | - 'between roughness because due to this tangential component plastic deformation comes with a lower load than when ignoring this component a more realistic description then of the area of each single junction that is created is given by with α displaystyle alpha constant and a tangent force f → i displaystyle vec fi applied to the joint to obtain even more realistic considerations the phenomenon of the third body should also be considered ie the presence of foreign materials such as moisture oxides or lubricants between the two solids in contact a coefficient c is then introduced which is able to correlate the shear strength t of the pure material and that of the third body t t b displaystyle ttb with 0 c 1 by studying the behavior at the limits it will be that for c 0 t 0 and for c 1 it returns to the condition in which the surfaces are directly in contact and there is no presence of a third body keeping in mind what has just been said it is possible to correct the friction coefficient formula as follows in conclusion the case of elastic bodies in interaction with each other is considered similarly to what we have just seen it is possible to define an equation of the type where in this case k depends on the elastic properties of the materials also for the elastic bodies the tangential force depends on the coefficient c seen above and it will be and therefore a fairly exhaustive description of the friction coefficient can be obtained friction measurements the simplest and most immediate method for evaluating the friction coefficient of two surfaces is the use of an inclined plane on which a block of material is made to slide as can be seen in the figure the normal force of the plane is given by m g cos θ displaystyle mgcos theta while the frictional force is equal to m g sin θ displaystyle mgsin theta this allows us to state that the coefficient of friction can be calculated very easily by means of the tangent of the angle in which the block begins to slip in fact we have then from the inclined plane we moved on to more sophisticated systems which allow us to consider all the possible environmental conditions in which the measurement is made such as the crossroller machine or the pin and disk machine today there are digital machines such as the friction tester which allows by means of a software support to insert all the desired variables another widely used process is the ring compression test a flat ring of the material to be studied is plastically deformed by means of a press if the deformation is an expansion in both the inner and the outer circle then there will be low or zero friction coefficients otherwise for a deformation that expands only in'
- 'the metallurgical production of the republic of azerbaijan is considered high due to the large deposits of alunite polymetallic ores deposits of iron ore etc the metallurgy industry of azerbaijan encompasses both ferrous and nonferrous branches ferrous metallurgy includes extraction of iron smelting and refining of iron ore rolling and ferroalloys production the ferrous metallurgy production of the country started to meet the demand of oil and gas industry due to pipe production and grew further in order to improve other branches of the industry dashkasan iron ore in 4 deposits dashkesen south dashkasan hamanchay demiroglu in the valley of goshagarchay plays a key role in development of ferrous metallurgy the cities of baku sumgait and dashkesan are major centers of metallurgy in terms of extraction and processing of iron ore the sumgait piperolling plant produces drill pipes casing tubing oil and gas pipes etc bentonite clay deposits in the village of dash salakhly gazakh district is used in steel smelting baku steel company the largest metallurgical enterprise in azerbaijan was opened in 2001 on the initiative of heydar aliyev with two electric arc furnaces and three rolling lines the annual steel production capacity of company increased to 1000000 tons aluminum copper molybdenum cobalt mercury reserves and most importantly electricity for the smelting process has led to the development of nonferrous metallurgy the zeylik mine in daskasan district is the main provider of the alunite for aluminum production the extracted ore here transported through guschualabashli railway to the aluminum plant located in ganja city the obtained aluminum oxide is brought to sumgayit aluminum plant in order produce aluminum metal ganja aluminum plant produces sulfuric acid aluminum oxide and potassium fertilizer through extracted ore from zalik deposit in dashkesen aluminum oxide is also produced in sumgait azergold cjsc created by the presidential decree no 1047 on february 11 2015 implements exploration management and also extraction processing and sale of precious and nonferrous metal ore deposits located within the borders of the country in 2017 the volume of exports of precious metals carried out by this company amounted to 77340 million dollars gold mining began in gedebey in 2009 in 2016 azer gold cjsc began gold mining in the chovdar deposit in 2017 63908 kg of gold was mined which exceeded the 2016 production by 34 times gold production'
- 'the material they are most found in these are given in miller indices for simplification purposes cube component 001100 brass component 110112 copper component 112111 s component 123634 the full 3d representation of crystallographic texture is given by the orientation distribution function odf which can be achieved through evaluation of a set of pole figures or diffraction patterns subsequently all pole figures can be derived from the odf the odf is defined as the volume fraction of grains with a certain orientation g displaystyle boldsymbol g odf g 1 v d v g d g displaystyle textodfboldsymbol gfrac 1vfrac dvboldsymbol gdg the orientation g displaystyle boldsymbol g is normally identified using three euler angles the euler angles then describe the transition from the sample ’ s reference frame into the crystallographic reference frame of each individual grain of the polycrystal one thus ends up with a large set of different euler angles the distribution of which is described by the odf the orientation distribution function odf cannot be measured directly by any technique traditionally both xray diffraction and ebsd may collect pole figures different methodologies exist to obtain the odf from the pole figures or data in general they can be classified based on how they represent the odf some represent the odf as a function sum of functions or expand it in a series of harmonic functions others known as discrete methods divide the odf space in cells and focus on determining the value of the odf in each cell in wire and fiber all crystals tend to have nearly identical orientation in the axial direction but nearly random radial orientation the most familiar exceptions to this rule are fiberglass which has no crystal structure and carbon fiber in which the crystalline anisotropy is so great that a goodquality filament will be a distorted single crystal with approximately cylindrical symmetry often compared to a jelly roll singlecrystal fibers are also not uncommon the 
making of metal sheet often involves compression in one direction and in efficient rolling operations tension in another which can orient crystallites in both axes by a process known as grain flow however cold work destroys much of the crystalline order and the new crystallites that arise with annealing usually have a different texture control of texture is extremely important in the making of silicon steel sheet for transformer cores to reduce magnetic hysteresis and of aluminium cans since deep drawing requires extreme and relatively uniform plasticity texture in ceramics usually arises because the crystallites in a slurry'
|
+| 15 | - 'is could effectively be used as a geneediting tool in human 2pn zygotes which could lead potentially pregnancy viable if implanted the scientists used injection of cas9 protein complexed with the relevant sgrnas and homology donors into human embryos the scientists found homologous recombinationmediated alteration in hbb and g6pd the scientists also noted the limitations of their study and called for further researchin august 2017 a group of scientists from oregon published an article in nature journal detailing the successful use of crispr to edit out a mutation responsible for congenital heart disease the study looked at heterozygous mybpc3 mutation in human embryos the study claimed precise crisprcas9 and homologydirected repair response with high accuracy and precision doublestrand breaks at the mutant paternal allele were repaired using the homologous wildtype gene by modifying the cell cycle stage at which the dsb was induced they were able to avoid mosaicism which had been seen in earlier similar studies in cleaving embryos and achieve a large percentage of homozygous embryos carrying the wildtype mybpc3 gene without evidence of unintended mutations the scientists concluded that the technique may be used for the correction of mutations in human embryos the claims of this study were however pushed back on by critics who argued the evidence was overall unpersuasivein june 2018 a group of scientists published and article in nature journal indicating a potential link for edited cells having increased potential turn cancerous the scientists reported that genome editing by crisprcas9 induced dna damage response and the cell cycle stopped the study was conducted in human retinal pigment epithelial cells and the use of crispr led to a selection against cells with a functional p53 pathway the conclusion of the study would suggest that p53 inhibition might increase efficiency of human germline editing and that p53 function would need to be watched when 
developing crisprcas9 based therapyin november 2018 a group of chinese scientists published research in the journal molecular therapy detailing their use of crisprcas9 technology to correct a single mistaken amino acid successfully in 16 out of 18 attempts in a human embryo the unusual level of precision was achieved by the use of a base editor be system which was constructed by fusing the deaminase to the dcas9 protein the be system efficiently edits the targeted c to t or g to a without the use of a donor and without dbs formation the study focused on the fbn1 mutation that is causative for mar'
- 'by the american nurses association which provides rules regulations and guidelines to follow when making a decision that is ethical based these regulations were mainly established to help provide equal healthcare protect the rights safety and privacy of the patient and to hold nurses accountable for their actions and choices genetics can create ethical issues in nursing for a variety of different situations many scenarios questions and debates have been encountered such as what individuals can receive genetic testing or information who owns or controls the information received from the genetic test and how can the owner use that information however the code of ethics does not address genetics or genomics specifically so ethical foundations were also established to help guide genetics into health care the foundations provide a set of guidelines to understand and manage an ethical issue if one should arise and to assist in the translation of genetics into the healthcare environment'
- 'than is accurate to the population this is known as the shadow effect the cabrera vole microtus cabrerae is a small endangered rodent that belongs to the microtus genus existing primarily in portugal populations can be difficult to estimate using typical markrecapture methods due to their small size and ability to quickly disperse over large swaths of prairie land with the introduction and reduced cost of using environmental dna in this case feces were able to be used in a relatively low cost experiment to estimate the population size of the cabrera vole in southern portugal in return for sacrificing demographic age sex health information endangered species act of 1973'
|
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
-| **all** | 0.0092 |
+| **all** | 0.6909 |
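
The accuracy above is simply the fraction of test examples whose predicted label matches the gold label. A minimal sketch of that computation, using placeholder label lists rather than the actual test split:

```python
# Accuracy = correct predictions / total predictions.
# The labels below are placeholders for illustration only;
# the score in the table comes from the real test split.
gold  = [15, 3, 3, 42, 7, 15, 0, 9, 3, 15]
preds = [15, 3, 1, 42, 7, 15, 2, 9, 3, 12]

correct = sum(g == p for g, p in zip(gold, preds))
accuracy = correct / len(gold)
print(round(accuracy, 4))  # 0.7
```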
## Uses
@@ -324,57 +324,57 @@ preds = model("##rch procedure that evaluates the objective function p x display
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:----|
-| Word count | 2 | 375.0186 | 509 |
+| Word count | 1 | 370.3098 | 509 |
| Label | Training Sample Count |
|:------|:----------------------|
-| 0 | 10 |
-| 1 | 10 |
-| 2 | 10 |
-| 3 | 10 |
-| 4 | 10 |
-| 5 | 10 |
-| 6 | 10 |
-| 7 | 10 |
-| 8 | 10 |
-| 9 | 10 |
-| 10 | 10 |
-| 11 | 10 |
-| 12 | 10 |
-| 13 | 10 |
-| 14 | 10 |
-| 15 | 10 |
-| 16 | 10 |
-| 17 | 10 |
-| 18 | 10 |
-| 19 | 10 |
-| 20 | 10 |
-| 21 | 10 |
-| 22 | 10 |
-| 23 | 10 |
-| 24 | 10 |
-| 25 | 10 |
-| 26 | 10 |
-| 27 | 10 |
-| 28 | 10 |
-| 29 | 10 |
-| 30 | 10 |
-| 31 | 10 |
-| 32 | 10 |
-| 33 | 10 |
-| 34 | 10 |
-| 35 | 10 |
-| 36 | 10 |
-| 37 | 10 |
-| 38 | 10 |
-| 39 | 10 |
-| 40 | 10 |
-| 41 | 10 |
-| 42 | 10 |
+| 0 | 50 |
+| 1 | 50 |
+| 2 | 50 |
+| 3 | 50 |
+| 4 | 50 |
+| 5 | 50 |
+| 6 | 50 |
+| 7 | 50 |
+| 8 | 50 |
+| 9 | 50 |
+| 10 | 50 |
+| 11 | 50 |
+| 12 | 50 |
+| 13 | 50 |
+| 14 | 50 |
+| 15 | 50 |
+| 16 | 50 |
+| 17 | 50 |
+| 18 | 50 |
+| 19 | 50 |
+| 20 | 50 |
+| 21 | 50 |
+| 22 | 50 |
+| 23 | 50 |
+| 24 | 50 |
+| 25 | 50 |
+| 26 | 50 |
+| 27 | 50 |
+| 28 | 50 |
+| 29 | 50 |
+| 30 | 50 |
+| 31 | 50 |
+| 32 | 50 |
+| 33 | 50 |
+| 34 | 50 |
+| 35 | 50 |
+| 36 | 50 |
+| 37 | 50 |
+| 38 | 50 |
+| 39 | 50 |
+| 40 | 50 |
+| 41 | 50 |
+| 42 | 50 |
### Training Hyperparameters
- batch_size: (16, 16)
-- num_epochs: (2, 2)
+- num_epochs: (1, 4)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 10
@@ -383,7 +383,7 @@ preds = model("##rch procedure that evaluates the objective function p x display
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
-- end_to_end: True
+- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- max_length: 512
@@ -392,12 +392,16 @@ preds = model("##rch procedure that evaluates the objective function p x display
- load_best_model_at_end: True
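
The values listed above map onto SetFit's `TrainingArguments`. A hedged configuration sketch (only the arguments shown in this list are set; anything not listed falls back to library defaults, and `margin` only takes effect with triplet-style losses):

```python
from setfit import TrainingArguments
from sentence_transformers.losses import CosineSimilarityLoss

# Tuples give (embedding fine-tuning phase, classifier phase) values.
args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 4),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=10,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    max_length=512,
    load_best_model_at_end=True,
)
```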
### Training Results
-| Epoch | Step | Training Loss | Validation Loss |
-|:------:|:----:|:-------------:|:---------------:|
-| 0.0019 | 1 | 0.2819 | - |
-| 0.9294 | 500 | 0.0065 | - |
-| 1.8587 | 1000 | 0.0049 | - |
-
+| Epoch | Step | Training Loss | Validation Loss |
+|:----------:|:--------:|:-------------:|:---------------:|
+| 0.0004 | 1 | 0.3114 | - |
+| 0.1860 | 500 | 0.0379 | - |
+| 0.3720 | 1000 | 0.1131 | - |
+| 0.5580 | 1500 | 0.0567 | - |
+| **0.7440** | **2000** | **0.0168** | **0.1033** |
+| 0.9301 | 2500 | 0.0033 | - |
+
+* The bold row denotes the saved checkpoint.
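
The epoch/step ratios in the table are consistent with SetFit's pair-generation arithmetic (assuming the usual scheme of `num_iterations` positive and `num_iterations` negative pairs per training sample): 43 labels × 50 samples × 20 pairs, consumed in batches of 16. A quick consistency check:

```python
import math

# Each of the 43 * 50 training samples yields num_iterations * 2 = 20
# sentence pairs; these are consumed in batches of 16, and the final
# partial batch still counts as a step.
samples = 43 * 50
pairs = samples * 10 * 2                 # num_iterations = 10
steps_per_epoch = math.ceil(pairs / 16)  # batch_size = 16 -> 2688 steps
print(round(2500 / steps_per_epoch, 4))  # 0.9301, matching the last logged row
```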
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3