Mickey was the Grand Marshal of the Tournament of Roses Parade on New Year's Day 2005. He was the first cartoon character to receive the honor and only the second fictional character, after Kermit the Frog in 1996. See also: Mickey Mouse Adventures, a short-lived comic starring Mickey Mouse as the protagonist; Mouse Museum, a Russian museum featuring artifacts and memorabilia relating to Mickey Mouse; Walt Disney (2015 PBS film).
The Stefan–Boltzmann law describes the power radiated from a black body in terms of its temperature. Specifically, the Stefan–Boltzmann law states that the total energy radiated per unit surface area of a black body across all wavelengths per unit time (also known as the black-body radiant emittance) is directly proportional to the fourth power of the black body's thermodynamic temperature T: j* = σT⁴. The constant of proportionality σ, called the Stefan–Boltzmann constant, is derived from other known physical constants. The value of the constant is σ = 2π⁵k⁴/(15h³c²) ≈ 5.670 × 10⁻⁸ W⋅m⁻²⋅K⁻⁴, where k is the Boltzmann constant, h is Planck's constant, and c is the speed of light in a vacuum.
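As a quick numerical illustration (a sketch added here, not part of the original text), the following Python snippet evaluates j* = σT⁴; the example temperature of 5778 K is roughly the Sun's effective surface temperature discussed later in this article.

```python
# Radiant emittance of a black body via the Stefan–Boltzmann law: j* = sigma * T**4
SIGMA = 5.670374419e-8  # Stefan–Boltzmann constant, W m^-2 K^-4

def radiant_emittance(temperature_k: float) -> float:
    """Return the power radiated per unit area (W/m^2) by a black body at temperature_k kelvin."""
    return SIGMA * temperature_k ** 4

print(f"{radiant_emittance(5778):.3e} W/m^2")  # ≈ 6.32e+07 W/m^2 for a surface near the Sun's temperature
```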
The radiance (watts per square metre per steradian) is given by L = σT⁴/π. A body that does not absorb all incident radiation (sometimes known as a grey body) emits less total energy than a black body and is characterized by an emissivity, ε: j* = εσT⁴. The radiant emittance has dimensions of energy flux (energy per time per area), and the SI units of measure are joules per second per square metre, or equivalently, watts per square metre. The SI unit for absolute temperature T is the kelvin. ε is the emissivity of the grey body; if it is a perfect blackbody, ε = 1. In the still more general (and realistic) case, the emissivity depends on the wavelength, ε = ε(λ).
To find the total power P radiated from an object, multiply the radiant emittance by its surface area, A: P = AεσT⁴. Wavelength- and subwavelength-scale particles, metamaterials, and other nanostructures are not subject to ray-optical limits and may be designed to exceed the Stefan–Boltzmann law. History In 1864, John Tyndall presented measurements of the infrared emission by a platinum filament and the corresponding color of the filament. The proportionality to the fourth power of the absolute temperature was deduced by Josef Stefan (1835–1893) in 1879 on the basis of Tyndall's experimental measurements, in the article Über die Beziehung zwischen der Wärmestrahlung und der Temperatur (On the relationship between thermal radiation and temperature) in the Bulletins from the sessions of the Vienna Academy of Sciences.
A derivation of the law from theoretical considerations was presented by Ludwig Boltzmann (1844–1906) in 1884, drawing upon the work of Adolfo Bartoli. Bartoli in 1876 had derived the existence of radiation pressure from the principles of thermodynamics. Following Bartoli, Boltzmann considered an ideal heat engine using electromagnetic radiation instead of an ideal gas as working matter. The law was almost immediately experimentally verified. Heinrich Weber in 1888 pointed out deviations at higher temperatures, but perfect accuracy within measurement uncertainties was confirmed up to temperatures of 1535 K by 1897. The law, including the theoretical prediction of the Stefan–Boltzmann constant as a function of the speed of light, the Boltzmann constant and Planck's constant, is a direct consequence of Planck's law as formulated in 1900.
Examples Temperature of the Sun With his law Stefan also determined the temperature of the Sun's surface. He inferred from the data of Jacques-Louis Soret (1827–1890) that the energy flux density from the Sun is 29 times greater than the energy flux density of a certain warmed metal lamella (a thin plate). A round lamella was placed at such a distance from the measuring device that it would be seen at the same angle as the Sun. Soret estimated the temperature of the lamella to be approximately 1900 °C to 2000 °C. Stefan surmised that ⅓ of the energy flux from the Sun is absorbed by the Earth's atmosphere, so he took for the correct Sun's energy flux a value 3/2 times greater than Soret's value, namely 29 × 3/2 = 43.5.
Precise measurements of atmospheric absorption were not made until 1888 and 1904. The temperature Stefan obtained was a median value of previous ones, 1950 °C, and the absolute thermodynamic one 2200 K. As 2.57⁴ ≈ 43.5, it follows from the law that the temperature of the Sun is 2.57 times greater than the temperature of the lamella, so Stefan got a value of 5430 °C or 5700 K (the modern value is 5778 K). This was the first sensible value for the temperature of the Sun. Before this, values ranging from as low as 1800 °C to as high as 13,000,000 °C were claimed.
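A short check of Stefan's arithmetic (added for illustration; the lamella temperature of about 2200 K is the figure quoted above):

```python
# Stefan's reasoning: a flux ratio of 43.5 implies a temperature ratio of 43.5**(1/4), since j* ∝ T^4
flux_ratio = 29 * 3 / 2           # Soret's factor of 29, corrected for atmospheric absorption
temp_ratio = flux_ratio ** 0.25   # fourth root
print(round(temp_ratio, 2))       # ≈ 2.57
print(round(temp_ratio * 2200))   # ≈ 5650 K, close to Stefan's value of about 5700 K
```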
The lower value of 1800 °C was determined by Claude Pouillet (1790–1868) in 1838 using the Dulong–Petit law. Pouillet also took just half the value of the Sun's correct energy flux. Temperature of stars The temperature of stars other than the Sun can be approximated using similar means, by treating the emitted energy as black-body radiation. So: L = 4πR²σT⁴, where L is the luminosity, σ is the Stefan–Boltzmann constant, R is the stellar radius and T is the effective temperature. This same formula can be used to compute the approximate radius of a main sequence star relative to the Sun: R/R⊙ ≈ (T⊙/T)² √(L/L⊙), where R⊙ is the solar radius, L⊙ is the solar luminosity, and so forth.
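As a worked example (a sketch; the nominal solar luminosity and radius below are standard reference values, not quoted in the text), the same relation can be inverted to recover the Sun's effective temperature:

```python
import math

SIGMA = 5.670374419e-8   # Stefan–Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # nominal solar luminosity, W
R_SUN = 6.957e8          # nominal solar radius, m

# L = 4*pi*R**2 * sigma * T**4  =>  T = (L / (4*pi*R**2*sigma))**0.25
t_eff = (L_SUN / (4 * math.pi * R_SUN**2 * SIGMA)) ** 0.25
print(round(t_eff))  # ≈ 5772 K
```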
With the Stefan–Boltzmann law, astronomers can easily infer the radii of stars. The law is also met in the thermodynamics of black holes in so-called Hawking radiation. Effective temperature of the Earth Similarly we can calculate the effective temperature of the Earth T⊕ by equating the energy received from the Sun and the energy radiated by the Earth, under the black-body approximation (Earth's own production of energy being small enough to be negligible). The luminosity of the Sun, L⊙, is given by: L⊙ = 4πR⊙²σT⊙⁴. At Earth, this energy is passing through a sphere with a radius of a0, the distance between the Earth and the Sun, and the irradiance (received power per unit area) is given by E⊕ = L⊙/(4πa0²). The Earth has a radius of R⊕, and therefore has a cross-section of πR⊕². The radiant flux (i.e. solar power) absorbed by the Earth is thus given by: Φ_abs = πR⊕² × E⊕. Because the Stefan–Boltzmann law uses a fourth power, it has a stabilizing effect on the exchange and the flux emitted by Earth tends to be equal to the flux absorbed, close to the steady state where: 4πR⊕²σT⊕⁴ = Φ_abs = πR⊕²L⊙/(4πa0²). T⊕ can then be found: T⊕ = T⊙ √(R⊙/(2a0)), where T⊙ is the temperature of the Sun, R⊙ the radius of the Sun, and a0 is the distance between the Earth and the Sun. This gives an effective temperature of 6 °C on the surface of the Earth, assuming that it perfectly absorbs all emission falling on it and has no atmosphere.
The Earth has an albedo of 0.3, meaning that 30% of the solar radiation that hits the planet gets scattered back into space without absorption. The effect of albedo on temperature can be approximated by assuming that the energy absorbed is multiplied by 0.7, but that the planet still radiates as a black body (the latter by definition of effective temperature, which is what we are calculating). This approximation reduces the temperature by a factor of 0.7^(1/4), giving 255 K (−18 °C). The above temperature is Earth's as seen from space, not ground temperature but an average over all emitting bodies of Earth from surface to high altitude.
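A minimal numerical sketch of the two estimates just described (added for illustration; the solar temperature, solar radius, and Earth–Sun distance are standard nominal values rather than figures from the text):

```python
import math

T_SUN = 5778.0    # effective temperature of the Sun, K
R_SUN = 6.957e8   # solar radius, m
A0    = 1.496e11  # mean Earth–Sun distance, m

# Black-body Earth with no albedo: T_earth = T_sun * sqrt(R_sun / (2 * a0))
t_no_albedo = T_SUN * math.sqrt(R_SUN / (2 * A0))
# With an albedo of 0.3 the absorbed flux is scaled by 0.7, so the temperature scales by 0.7**0.25
t_with_albedo = t_no_albedo * 0.7 ** 0.25

print(round(t_no_albedo, 1))    # ≈ 278.6 K (about 6 °C)
print(round(t_with_albedo, 1))  # ≈ 254.8 K (about −18 °C)
```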
Because of the greenhouse effect, the Earth's actual average surface temperature is about 288 K (15 °C), which is higher than the 255 K effective temperature, and even higher than the 279 K temperature that a black body would have. In the above discussion, we have assumed that the whole surface of the earth is at one temperature. Another interesting question is to ask what the temperature of a blackbody surface on the earth would be assuming that it reaches equilibrium with the sunlight falling on it. This of course depends on the angle of the sun on the surface and on how much air the sunlight has gone through.
When the sun is at the zenith and the surface is horizontal, the irradiance can be as high as 1120 W/m². The Stefan–Boltzmann law then gives a temperature of T = (1120 W/m² / σ)^(1/4) ≈ 375 K, or 102 °C. (Above the atmosphere, the result is even higher: 394 K.) We can think of the earth's surface as "trying" to reach equilibrium temperature during the day, but being cooled by the atmosphere, and "trying" to reach equilibrium with starlight and possibly moonlight at night, but being warmed by the atmosphere. Origination Thermodynamic derivation of the energy density The fact that the energy density of the box containing radiation is proportional to T⁴ can be derived using thermodynamics.
This derivation uses the relation between the radiation pressure p and the internal energy density u, a relation that can be shown using the form of the electromagnetic stress–energy tensor. This relation is: p = u/3. Now, from the fundamental thermodynamic relation dU = T dS − p dV, we obtain the following expression, after dividing by dV and fixing T: (∂U/∂V)_T = T (∂S/∂V)_T − p = T (∂p/∂T)_V − p. The last equality comes from the following Maxwell relation: (∂S/∂V)_T = (∂p/∂T)_V. From the definition of energy density it follows that U = uV, where the energy density of radiation only depends on the temperature, therefore (∂U/∂V)_T = u. Now, the equality u = T (∂p/∂T)_V − p, after substitution of p = u/3 and (∂p/∂T)_V = (1/3) du/dT for the corresponding expressions, can be written as u = (T/3) du/dT − u/3. Since the partial derivative (∂p/∂T)_V can be expressed as a relationship between only u and T (if one isolates it on one side of the equality), the partial derivative can be replaced by the ordinary derivative. After separating the differentials the equality becomes du/u = 4 dT/T, which leads immediately to u = a T⁴, with a as some constant of integration. Derivation from Planck's law The law can be derived by considering a small flat black body surface radiating out into a half-sphere. This derivation uses spherical coordinates, with θ as the zenith angle and φ as the azimuthal angle; and the small flat blackbody surface lies on the xy-plane, where θ = π/2. The intensity of the light emitted from the blackbody surface is given by Planck's law: I(ν, T) = (2hν³/c²) · 1/(e^(hν/(kT)) − 1), where I(ν, T) is the amount of power per unit surface area per unit solid angle per unit frequency emitted at a frequency ν by a black body at temperature T, h is Planck's constant, c is the speed of light, and k is Boltzmann's constant.
The quantity I(ν, T) A dν dΩ is the power radiated by a surface of area A through a solid angle dΩ in the frequency range between ν and ν + dν. The Stefan–Boltzmann law gives the power emitted per unit area of the emitting body, j* = ∫₀^∞ dν ∫ I(ν, T) cos θ dΩ. Note that the cosine appears because black bodies are Lambertian (i.e. they obey Lambert's cosine law), meaning that the intensity observed along the sphere will be the actual intensity times the cosine of the zenith angle. To derive the Stefan–Boltzmann law, we must integrate dΩ = sin(θ) dθ dφ over the half-sphere and integrate ν from 0 to ∞. Then we plug in for I: j* = (2πh/c²) ∫₀^∞ ν³/(e^(hν/(kT)) − 1) dν. To evaluate this integral, do a substitution u = hν/(kT), which gives: j* = (2πk⁴T⁴/(h³c²)) ∫₀^∞ u³/(eᵘ − 1) du. The integral on the right is standard and goes by many names: it is a particular case of a Bose–Einstein integral, the polylogarithm, or the Riemann zeta function ζ(4). The value of the integral is π⁴/15, giving the result that, for a perfect blackbody surface: j* = σT⁴, with σ = 2π⁵k⁴/(15h³c²). Finally, this proof started out only considering a small flat surface. However, any differentiable surface can be approximated by a collection of small flat surfaces.
So long as the geometry of the surface does not cause the blackbody to reabsorb its own radiation, the total energy radiated is just the sum of the energies radiated by each surface; and the total surface area is just the sum of the areas of each surface, so this law holds for all convex blackbodies, too, so long as the surface has the same temperature throughout. The law extends to radiation from non-convex bodies by using the fact that the convex hull of a black body radiates as though it were itself a black body. Energy density The total energy density U can be similarly calculated, except the integration is over the whole sphere and there is no cosine, and the energy flux (U c) should be divided by the velocity c to give the energy density U: Thus the angular factor ∫ cos θ dΩ = π over the half-sphere is replaced by ∫ dΩ = 4π over the whole sphere, giving an extra factor of 4.
Thus, in total: U = (4σ/c) T⁴. See also: Wien's displacement law, Rayleigh–Jeans law, Radiance, Zero-dimensional models, Black body, Sakuma–Hattori equation, Radó von Kövesligethy.
Sean Monahan (born October 12, 1994) is a Canadian professional ice hockey centre and alternate captain for the Calgary Flames of the National Hockey League (NHL). He is a first round selection of the Flames, sixth overall, at the 2013 NHL Entry Draft and played junior hockey with the Ottawa 67's of the Ontario Hockey League (OHL) where he served as team captain. Early life A native of Brampton, Ontario, Sean is the son of Cathy and John Monahan, and has a sister, Jacqueline. He attended St. Thomas Aquinas Secondary School. He played minor hockey and lacrosse for the Brampton Excelsiors, where one of his teammates was former Syracuse and NBA guard Tyler Ennis.
Playing career Junior Monahan played with the Mississauga Rebels. As a 15-year-old in 2010, he captained the Rebels to an OHL Cup title and was named most valuable player of the tournament. He finished the 2009–10 season with 46 goals and 40 assists in 47 games for the Rebels and was then selected by the Ottawa 67's in the first round, 16th overall, at the Ontario Hockey League (OHL) Priority Selection draft. Monahan's junior hockey career began with difficulty as he suffered a sprained wrist in his first training camp with the 67's, resulting in a slow start for him in the 2010–11 OHL season.
An invitation to play in the 2011 World U-17 Hockey Challenge, in which he was a key performer for the gold medal-winning Team Ontario, allowed Monahan to regain confidence; he completed his first OHL season on the 67's second line and recorded 47 points in 65 games. Monahan played in his second international tournament following the season. He joined the Canadian Under-18 National Team for the 2011 Ivan Hlinka Memorial Tournament and scored a goal in the championship game to help Canada win a fourth consecutive gold medal at the event. Playing alongside NHL prospects Tyler Toffoli, Shane Prince and Cody Ceci, Monahan was one of the OHL's top scorers in the 2011–12 season.
He finished tied for 15th in league scoring with 78 points. He was named to the OHL's Second All-Star team and was the 67's representative on the league's All-Scholastic team. Monahan's third season in Ottawa was a transitional one for the franchise. The 67's had won three consecutive East Division titles between 2010 and 2012, but the graduation of top players caused the team to enter a rebuilding phase. The 67's finished in last place in the 2012–13 OHL season with just 16 wins. Monahan served as the team's captain, sharing the role with Ceci in the first half of the season until the latter player's departure in a trade.
He finished the season with 31 goals and 78 points. He was invited to Team Canada's selection camp for the 2013 World Junior Ice Hockey Championships, but failed to make the team. He also missed ten games during the season after being suspended for an elbowing incident. Calgary Flames Monahan was one of the top-ranked prospects for the 2013 NHL Entry Draft: The NHL Central Scouting Bureau ranked him as the 5th best North American skater in its final ranking while International Scouting Services ranked him 9th overall. Among OHL draft prospects, the league's coaches rated Monahan highly for his intelligence on the ice, playmaking and stickhandling, and for his faceoff ability.
He was selected in the first round, sixth overall, by the Calgary Flames. Upon his selection, the 18-year-old centre expressed his confidence that he was ready to immediately play in the NHL. He earned a spot on the Flames roster to begin the 2013–14 season and made his NHL debut on October 3, 2013, against the Washington Capitals. Monahan scored his first career point in the game, assisting on David Jones' goal in a 5–4 shootout loss. He then scored his first goal the following night against goaltender Sergei Bobrovsky of the Columbus Blue Jackets in a 4–3 win. Though he remained eligible to return to junior without impacting his NHL contract, Monahan scored six goals in his first nine games to earn a permanent spot in Calgary.
In doing so, he became the first junior-eligible player to make the full-time jump to the Flames roster since Kevin LaVallee 33 years earlier. Monahan scored his 20th goal in a late-season loss to the Ottawa Senators, and in doing so, became the first Flames rookie to score 20 goals since Dion Phaneuf in 2005–06 and the first rookie forward since Jarome Iginla in 1996–97 to reach the mark. On August 19, 2016, following back-to-back seasons in which he scored 60 or more points, Monahan, as a restricted free agent, signed a seven-year, $44.625 million contract extension to remain in the Flames' organization through 2023.
On November 18, 2017, in a game against the Philadelphia Flyers, Monahan scored his first career hat trick in the second period to help the Flames win 5–4. However, his season was cut short in March due to injuries. During the following month, Monahan underwent four surgeries but was expected to be able to play during the 2018–19 season. Milestones Monahan scored his 100th career goal against Andrei Vasilevskiy of the Tampa Bay Lightning on February 23, 2017. He is the 6th youngest active player to achieve this milestone, joining the elite company of Alexander Ovechkin, Sidney Crosby, Jaromír Jágr, Steven Stamkos, and Patrick Kane.
His 100th goal also marked his 20th of the season, the fourth consecutive season in which he scored at least 20 goals. He is the youngest player in Flames history to reach the 100-goal milestone (22 years, 134 days), passing Joe Nieuwendyk, who was 22 years and 185 days old when he scored his 100th career goal. He became the fastest player in Flames franchise history to record nine career overtime goals when he scored his ninth on December 7, 2017, against the Montreal Canadiens in a 3–2 win.
Software development is the process of conceiving, specifying, designing, programming, documenting, testing, and bug fixing involved in creating and maintaining applications, frameworks, or other software components. Software development is a process of writing and maintaining the source code, but in a broader sense, it includes everything involved from the conception of the desired software through to its final manifestation, sometimes in a planned and structured process. Therefore, software development may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products. Software can be developed for a variety of purposes, the three most common being to meet specific needs of a specific client/business (the case with custom software), to meet a perceived need of some set of potential users (the case with commercial and open source software), or for personal use (e.g.
a scientist may write software to automate a mundane task). Embedded software development, that is, the development of embedded software, such as used for controlling consumer products, requires the development process to be integrated with the development of the controlled physical product. System software underlies applications and the programming process itself, and is often developed separately. The need for better quality control of the software development process has given rise to the discipline of software engineering, which aims to apply the systematic approach exemplified in the engineering paradigm to the process of software development. There are many approaches to software project management, known as software development life cycle models, methodologies, processes, or models.
The waterfall model is a traditional version, contrasted with the more recent innovation of agile software development. Methodologies A software development process (also known as a software development methodology, model, or life cycle) is a framework that is used to structure, plan, and control the process of developing information systems. A wide variety of such frameworks has evolved over the years, each with its own recognized strengths and weaknesses. There are several different approaches to software development: some take a more structured, engineering-based approach to develop business solutions, whereas others may take a more incremental approach, where software evolves as it is developed piece-by-piece.
One system development methodology is not necessarily suitable for use by all projects. Each of the available methodologies is best suited to specific kinds of projects, based on various technical, organizational, project and team considerations. Most methodologies share some combination of the following stages of software development:
- Analyzing the problem
- Market research
- Gathering requirements for the proposed business solution
- Devising a plan or design for the software-based solution
- Implementation (coding) of the software
- Testing the software
- Deployment
- Maintenance and bug fixing
These stages are often referred to collectively as the software development life-cycle, or SDLC. Different approaches to software development may carry out these stages in different orders, or devote more or less time to different stages.
The level of detail of the documentation produced at each stage of software development may also vary. These stages may also be carried out in turn (a “waterfall” based approach), or they may be repeated over various cycles or iterations (a more "extreme" approach). The more extreme approach usually involves less time spent on planning and documentation, and more time spent on coding and development of automated tests. More “extreme” approaches also promote continuous testing throughout the development life-cycle, as well as having a working (or bug-free) product at all times. More structured or “waterfall” based approaches attempt to assess the majority of risks and develop a detailed plan for the software before implementation (coding) begins, and avoid significant design changes and re-coding in later stages of the software development life-cycle planning.
There are significant advantages and disadvantages to the various methodologies, and the best approach to solving a problem using software will often depend on the type of problem. If the problem is well understood and a solution can be effectively planned out ahead of time, the more "waterfall" based approach may work the best. If, on the other hand, the problem is unique (at least to the development team) and the structure of the software solution cannot be easily envisioned, then a more "extreme" incremental approach may work best. Software development activities Identification of need The sources of ideas for software products are plentiful.
These ideas can come from market research, including the demographics of potential new customers, existing customers, sales prospects who rejected the product, other internal software development staff, or a creative third party. Ideas for software products are usually first evaluated by marketing personnel for economic feasibility, for fit with existing distribution channels, for possible effects on existing product lines, for required features, and for fit with the company's marketing objectives. In a marketing evaluation phase, the cost and time assumptions are evaluated. A decision is reached early in the first phase as to whether, based on the more detailed information generated by the marketing and development staff, the project should be pursued further.
In the book "Great Software Debates", Alan M. Davis states in the chapter "Requirements", sub-chapter "The Missing Piece of Software Development" Students of engineering learn engineering and are rarely exposed to finance or marketing. Students of marketing learn marketing and are rarely exposed to finance or engineering. Most of us become specialists in just one area. To complicate matters, few of us meet interdisciplinary people in the workforce, so there are few roles to mimic. Yet, software product planning is critical to the development success and absolutely requires knowledge of multiple disciplines. Because software development may involve compromising or going beyond what is required by the client, a software development project may stray into less technical concerns such as human resources, risk management, intellectual property, budgeting, crisis management, etc.
These processes may also cause the role of business development to overlap with software development. Planning Planning is an objective of each and every activity, during which we want to discover things that belong to the project. An important task in creating a software program is extracting the requirements, or requirements analysis. Customers typically have an abstract idea of what they want as an end result but do not know what the software should do. Skilled and experienced software engineers recognize incomplete, ambiguous, or even contradictory requirements at this point. Frequently demonstrating live code may help reduce the risk that the requirements are incorrect.
"Although much effort is put in the requirements phase to ensure that requirements are complete and consistent, rarely that is the case; leaving the software design phase as the most influential one when it comes to minimizing the effects of new or changing requirements. Requirements volatility is challenging because they impact future or already going development efforts." Once the general requirements are gathered from the client, an analysis of the scope of the development should be determined and clearly stated. This is often called a scope document. Designing Once the requirements are established, the design of the software can be established in a software design document.
This involves a preliminary or high-level design of the main modules with an overall picture (such as a block diagram) of how the parts fit together. The language, operating system, and hardware components should all be known at this time. Then a detailed or low-level design is created, perhaps with prototyping as proof-of-concept or to firm up requirements. Implementation, testing and documenting Implementation is the part of the process where software engineers actually program the code for the project. Software testing is an integral and important phase of the software development process. This part of the process ensures that defects are recognized as soon as possible.
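For illustration (a hedged sketch with invented names, not drawn from the text), a unit test that captures a requirement as an executable check; in a test-driven style the test function is written and run, failing, before the implementation beneath it is filled in:

```python
def apply_discount(price: float, rate: float) -> float:
    """Return the price after applying a fractional discount rate (e.g. 0.25 for 25%)."""
    return price * (1.0 - rate)

def test_apply_discount_reduces_price():
    # The requirement, encoded as a check that can be re-run automatically after every change
    assert apply_discount(price=200.0, rate=0.25) == 150.0

if __name__ == "__main__":
    test_apply_discount_reduces_price()
    print("all checks passed")
```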
In some processes, generally known as test-driven development, tests may be developed just before implementation and serve as a guide for the implementation's correctness. Documenting the internal design of software for the purpose of future maintenance and enhancement is done throughout development. This may also include the writing of an API, be it external or internal. The software engineering process chosen by the developing team will determine how much internal documentation (if any) is necessary. Plan-driven models (e.g., Waterfall) generally produce more documentation than Agile models. Deployment and maintenance Deployment starts directly after the code is appropriately tested, approved for release, and sold or otherwise distributed into a production environment.
This may involve installation, customization (such as by setting parameters to the customer's values), testing, and possibly an extended period of evaluation. Software training and support are important, as software is only effective if it is used correctly. Maintaining and enhancing software to cope with newly discovered faults or requirements can take substantial time and effort, as missed requirements may force redesign of the software. In most cases, maintenance is required on a regular basis to fix reported issues and keep the software running. Subtopics View model A view model is a framework that provides the viewpoints on the system and its environment, to be used in the software development process.
It is a graphical representation of the underlying semantics of a view. The purpose of viewpoints and views is to enable human engineers to comprehend very complex systems and to organize the elements of the problem and the solution around domains of expertise. In the engineering of physically intensive systems, viewpoints often correspond to capabilities and responsibilities within the engineering organization. Most complex system specifications are so extensive that no one individual can fully comprehend all aspects of the specifications. Furthermore, we all have different interests in a given system and different reasons for examining the system's specifications. A business executive will ask different questions of a system make-up than would a system implementer.
The concept of viewpoints framework, therefore, is to provide separate viewpoints into the specification of a given complex system. These viewpoints each satisfy an audience with interest in some set of aspects of the system. Associated with each viewpoint is a viewpoint language that optimizes the vocabulary and presentation for the audience of that viewpoint. Business process and data modelling Graphical representation of the current state of information provides a very effective means for presenting information to both users and system developers. A business model illustrates the functions associated with the business process being modeled and the organizations that perform these functions.
By depicting activities and information flows, a foundation is created to visualize, define, understand, and validate the nature of a process. A data model provides the details of information to be stored and is of primary use when the final product is the generation of computer software code for an application or the preparation of a functional specification to aid a computer software make-or-buy decision. See the figure on the right for an example of the interaction between business process and data models. Usually, a model is created after conducting an interview, referred to as business analysis. The interview consists of a facilitator asking a series of questions designed to extract required information that describes a process.
The interviewer is called a facilitator to emphasize that it is the participants who provide the information. The facilitator should have some knowledge of the process of interest, but this is not as important as having a structured methodology by which the questions are asked of the process expert. The methodology is important because usually a team of facilitators is collecting information across the facility and the results of the information from all the interviewers must fit together once completed. The models are developed as defining either the current state of the process, in which case the final product is called the "as-is" snapshot model, or a collection of ideas of what the process should contain, resulting in a "what-can-be" model.
Generation of process and data models can be used to determine if the existing processes and information systems are sound and only need minor modifications or enhancements, or if re-engineering is required as a corrective action. The creation of business models is more than a way to view or automate your information process. Analysis can be used to fundamentally reshape the way your business or organization conducts its operations. Computer-aided software engineering Computer-aided software engineering (CASE), in the field of software engineering, is the scientific application of a set of software tools and methods to the development of software which results in high-quality, defect-free, and maintainable software products.
It also refers to methods for the development of information systems together with automated tools that can be used in the software development process. The term "computer-aided software engineering" (CASE) can refer to the software used for the automated development of systems software, i.e., computer code. The CASE functions include analysis, design, and programming. CASE tools automate methods for designing, documenting, and producing structured computer code in the desired programming language. Two key ideas of Computer-aided Software System Engineering (CASE) are: Foster computer assistance in software development and software maintenance processes, and An engineering approach to software development and maintenance.
Typical CASE tools exist for configuration management, data modeling, model transformation, refactoring, and source code generation. Integrated development environment An integrated development environment (IDE), also known as an integrated design environment or integrated debugging environment, is a software application that provides comprehensive facilities to computer programmers for software development. An IDE normally consists of a source code editor, a compiler or interpreter, build automation tools, and (usually) a debugger. IDEs are designed to maximize programmer productivity by providing tight-knit components with similar user interfaces. Typically an IDE is dedicated to a specific programming language, so as to provide a feature set which most closely matches the programming paradigms of the language.
Modeling language A modeling language is any artificial language that can be used to express information or knowledge or systems in a structure that is defined by a consistent set of rules. The rules are used for interpretation of the meaning of components in the structure. A modeling language can be graphical or textual. Graphical modeling languages use diagram techniques with named symbols that represent concepts, lines that connect the symbols and represent relationships, and various other graphical annotations to represent constraints. Textual modeling languages typically use standardised keywords accompanied by parameters to make computer-interpretable expressions. Examples of graphical modelling languages in the field of software engineering are:
- Business Process Modeling Notation (BPMN, and the XML form BPML), an example of a process modeling language.
- EXPRESS and EXPRESS-G (ISO 10303-11), an international standard general-purpose data modeling language.
- Extended Enterprise Modeling Language (EEML), commonly used for business process modeling across layers.
- Flowchart, a schematic representation of an algorithm or a stepwise process.
- Fundamental Modeling Concepts (FMC), a modeling language for software-intensive systems.
- IDEF, a family of modeling languages, the most notable of which include IDEF0 for functional modeling, IDEF1X for information modeling, and IDEF5 for modeling ontologies.
- LePUS3, an object-oriented visual Design Description Language and a formal specification language that is suitable primarily for modelling large object-oriented (Java, C++, C#) programs and design patterns.
- Specification and Description Language (SDL), a specification language targeted at the unambiguous specification and description of the behaviour of reactive and distributed systems.
- Unified Modeling Language (UML), a general-purpose modeling language that is an industry standard for specifying software-intensive systems. UML 2.0, the current version, supports thirteen different diagram techniques and has widespread tool support.
Not all modeling languages are executable, and for those that are, using them doesn't necessarily mean that programmers are no longer needed. On the contrary, executable modeling languages are intended to amplify the productivity of skilled programmers, so that they can address more difficult problems, such as parallel computing and distributed systems.
Programming paradigm A programming paradigm is a fundamental style of computer programming, which is not generally dictated by the project management methodology (such as waterfall or agile). Paradigms differ in the concepts and abstractions used to represent the elements of a program (such as objects, functions, variables, constraints) and the steps that comprise a computation (such as assignations, evaluation, continuations, data flows). Sometimes the concepts asserted by the paradigm are utilized cooperatively in high-level system architecture design; in other cases, the programming paradigm's scope is limited to the internal structure of a particular program or module. A programming language can support multiple paradigms.
For example, programs written in C++ or Object Pascal can be purely procedural, or purely object-oriented, or contain elements of both paradigms. Software designers and programmers decide how to use those paradigm elements. In object-oriented programming, programmers can think of a program as a collection of interacting objects, while in functional programming a program can be thought of as a sequence of stateless function evaluations. When programming computers or systems with many processors, process-oriented programming allows programmers to think about applications as sets of concurrent processes acting upon logically shared data structures. Just as different groups in software engineering advocate different methodologies, different programming languages advocate different programming paradigms.
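A small sketch (an invented example, one possible illustration) of the contrast described above: the same summation expressed in an object-oriented style with mutable state and in a functional style built from stateless function evaluations:

```python
from functools import reduce

# Object-oriented style: state lives in an object and is changed by method calls.
class Accumulator:
    def __init__(self) -> None:
        self.total = 0

    def add(self, value: int) -> None:
        self.total += value

acc = Accumulator()
for n in [1, 2, 3, 4]:
    acc.add(n)
print(acc.total)  # 10

# Functional style: no mutable state; the result is a composition of pure function evaluations.
print(reduce(lambda total, n: total + n, [1, 2, 3, 4], 0))  # 10
```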
Some languages are designed to support one paradigm (Smalltalk supports object-oriented programming, Haskell supports functional programming), while other programming languages support multiple paradigms (such as Object Pascal, C++, C#, Visual Basic, Common Lisp, Scheme, Python, Ruby, and Oz). Many programming paradigms are as well known for what methods they forbid as for what they enable. For instance, pure functional programming forbids using side-effects; structured programming forbids using goto statements. Partly for this reason, new paradigms are often regarded as doctrinaire or overly rigid by those accustomed to earlier styles. Avoiding certain methods can make it easier to prove theorems about a program's correctness, or simply to understand its behavior.
Examples of high-level paradigms include:
- Aspect-oriented software development
- Domain-specific modeling
- Model-driven engineering
- Object-oriented programming methodologies, such as Grady Booch's object-oriented design (OOD), also known as object-oriented analysis and design (OOAD). The Booch model includes six diagrams: class, object, state transition, interaction, module, and process.
- Search-based software engineering
- Service-oriented modeling
- Structured programming
- Top-down and bottom-up design. Top-down programming evolved in the 1970s, promoted by IBM researcher Harlan Mills (and Niklaus Wirth), who developed structured programming.
Reuse of solutions A definition of software reuse is the process of creating software from predefined software components. A few software reuse methods are described below. A software framework is a re-usable design or implementation for a software system or subsystem.
Existing components (component-based software engineering) can be reused and assembled together to create a larger application. APIs (application programming interfaces, web services) establish a set of "subroutine definitions, protocols, and tools for building application software" which can be utilized in future builds. Open-source code, hosted on repositories such as GitHub, provides free code for software developers to reuse and implement in new applications or designs. See also: Continuous integration, Custom software, DevOps, Functional specification, Programming productivity, Software blueprint, Software design, Software development effort estimation, Software development process, Software project management, Specification and Description Language, User experience, Software industry. Roles and industry: Bachelor of Science in Information Technology, Computer programmer, Consulting software engineer, Offshore software development, Software developer, Software engineer, Software publisher. Specific applications: Video game development, Web application development, Web engineering, Mobile application development. Further reading: John W. Horch (1995). "Two Orientations On How To Work With Objects." IEEE Software, vol. 12, no. 2, pp. 117–118, March 1995.
This is a list of (recent) recessions (and depressions) that have affected the economy of the United Kingdom. In the United Kingdom and all other EU member states, a recession is generally defined as two successive quarters of negative economic growth, as measured by the seasonally adjusted quarter-on-quarter figures for real GDP. See also: List of recessions in the United States, List of stock market crashes, Office for National Statistics.
In statistical mechanics, the cluster expansion (also called the high temperature expansion or hopping expansion) is a power series expansion of the partition function of a statistical field theory around a model that is a union of non-interacting 0-dimensional field theories. Cluster expansions originated in the work of Mayer. Unlike the usual perturbation expansion, it converges in some non-trivial regions, in particular when the interaction is small. Classical case General theory In statistical mechanics, the properties of a system of noninteracting particles are described using the partition function. For N noninteracting particles, the system is described by the Hamiltonian H₀ = Σᵢ pᵢ²/(2m), and the partition function can be calculated (for the classical case) as Z₀ = (1/(N! h³ᴺ)) ∫ ∏ᵢ d³pᵢ d³rᵢ e^(−βH₀) = Vᴺ/(N! λ³ᴺ), where λ = h/√(2πm k_B T) is the thermal de Broglie wavelength. From the partition function, one can calculate the Helmholtz free energy F₀ = −k_B T ln Z₀ and, from that, all the thermodynamic properties of the system, like the entropy, the internal energy, the chemical potential, etc.
When the particles of the system interact, an exact calculation of the partition function is usually not possible. For low density, the interactions can be approximated with a sum of two-particle potentials: U(r₁, …, r_N) ≈ Σ_{i<j} u(r_ij), with r_ij = |r_i − r_j|. For this interaction potential, the partition function can be written as Z = Z₀ · Q/Vᴺ, and the free energy is F = F₀ − k_B T ln(Q/Vᴺ), where Q is the configuration integral: Q = ∫ ∏ᵢ d³rᵢ e^(−β Σ_{i<j} u(r_ij)). Calculation of the configuration integral The configuration integral cannot be calculated analytically for a general pair potential u(r). One way to calculate it approximately is to use the Mayer cluster expansion. This expansion is based on the observation that the exponential in the equation for Q can be written as a product of the form e^(−β Σ_{i<j} u(r_ij)) = ∏_{i<j} e^(−β u(r_ij)). Next, define the Mayer function by f_ij = e^(−β u(r_ij)) − 1. After substitution, the equation for the configuration integral becomes: Q = ∫ ∏ᵢ d³rᵢ ∏_{i<j} (1 + f_ij). The calculation of the product in the above equation leads into a series of terms; the first is equal to one, the second term is equal to the sum over i and j of the terms f_ij, and the process continues until all the higher order terms are calculated. Each term must appear only once. With this expansion it is possible to find terms of different order, in terms of the number of particles that are involved. The first term is the non-interaction term (corresponding to no interactions amongst particles), the second term corresponds to the two-particle interactions, the third to the two-particle interactions amongst 4 (not necessarily distinct) particles, and so on.
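A compact way to write the product expansion just described (added as a reference sketch, using the Mayer functions f_ij defined in the text):

```latex
\prod_{1 \le i < j \le N} \left( 1 + f_{ij} \right)
  \;=\; \sum_{G} \prod_{(i,j) \in G} f_{ij}
  \;=\; 1 \;+\; \sum_{i<j} f_{ij} \;+\; (\text{terms with two or more } f \text{ factors})
```

Here the sum runs over all subsets G of the set of pairs (i, j); grouping the subsets by the particles they connect is what produces the clusters referred to in the text.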
This physical interpretation is the reason this expansion is called the cluster expansion: the sum can be rearranged so that each term represents the interactions within clusters of a certain number of particles. Substituting the expansion of the product back into the expression for the configuration integral results in a series expansion for Q. Substituting this expansion in the equation for the free energy, it is possible to derive the equation of state for the system of interacting particles. The equation will have the form p/(k_B T) = ρ + B₂(T) ρ² + B₃(T) ρ³ + ⋯, with ρ = N/V, which is known as the virial equation, and the components Bᵢ(T) are the virial coefficients. Each of the virial coefficients corresponds to one term from the cluster expansion (B₂ is the two-particle interaction term, B₃ is the three-particle interaction term, and so on).
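For reference, the standard form of the resulting virial series and the textbook expression for the leading coefficient in terms of the Mayer function (added here as a sketch, not spelled out in the text above):

```latex
\frac{p}{k_B T} \;=\; \rho + B_2(T)\,\rho^2 + B_3(T)\,\rho^3 + \cdots,
\qquad
B_2(T) \;=\; -\frac{1}{2} \int \left( e^{-\beta u(r)} - 1 \right) \mathrm{d}^3 r
        \;=\; -\frac{1}{2} \int f(r)\, \mathrm{d}^3 r
```

with ρ = N/V the number density.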
Keeping only the two-particle interaction term, it can be shown that the cluster expansion, with some approximations, gives the Van der Waals equation. This can be applied further to mixtures of gases and liquid solutions.
The Empire Builder is an Amtrak long-distance passenger train that operates daily between Chicago and (via two sections west of Spokane) Seattle and Portland. Introduced in 1929, it was the flagship passenger train of the Great Northern Railway and its successor, the Burlington Northern Railroad, and was retained by Amtrak when it took over intercity rail service in 1971. The end-to-end travel time of the route is 45–46 hours for an average speed of about , though the train travels as fast as over the majority of the route. It is Amtrak's busiest long-distance route. During fiscal year 2019, the Empire Builder carried 433,372 passengers, an increase of 1.1% from FY2018.
During FY2016, the train had a total revenue of $51,798,583, an increase of 2.5% over FY2015. History The Great Northern Railway inaugurated the Empire Builder on June 10, 1929. It was named in honor of the company's founder, James J. Hill, who had reorganized several failing railroads into the only successful attempt at a privately-funded transcontinental railroad. It reached the Pacific Northwest in the late 19th century, and for this feat, he was nicknamed "The Empire Builder." Following World War II, Great Northern placed new streamlined and diesel-powered trains in service that cut the scheduled 2,211-mile-trip between Chicago and Seattle from 58.5 hours to 45 hours.
The schedule allowed riders views of the Cascade Mountains and Glacier National Park, a park established through the lobbying efforts of the Great Northern. Re-equipped with domes in 1955, the Empire Builder offered passengers sweeping views of the route through three dome coaches and one full-length Great Dome car for first class passengers. In 1970, the Great Northern merged with three other closely affiliated railroads to form the Burlington Northern Railroad, which assumed operation of the Builder. Amtrak took over the train when it began operating most intercity routes a year later, and shifted the Chicago–St. Paul leg to the Milwaukee Road route through Milwaukee to St. Paul.
Before 1971, the Chicago–St. Paul leg used the Chicago, Burlington and Quincy Railroad's mainline along the Mississippi River through Wisconsin. The service also used to operate west from the Twin Cities before turning northwest in Willmar, Minnesota, to reach Fargo. Amtrak added a Portland section in 1981, with the train splitting in Spokane. This restored service to the line previously operated by the Spokane, Portland and Seattle Railway. It was not the first time that the train had operated Seattle and Portland sections; Great Northern had split the Builder in Spokane for much of the 1940s and 1950s. In 2005, Amtrak upgraded service to include a wine and cheese tasting in the dining car for sleeping car passengers and free newspapers in the morning.
Amtrak's inspector general eliminated some of these services in 2013 as part of a cost-saving measure. During summer months, on portions of the route, "Trails and Rails" volunteer tour guides in the lounge car give commentary on points of visual and historic interest that can be viewed from the train. Ridership The Empire Builder is Amtrak's most popular long-distance train. Over fiscal years 2007–2016, Empire Builder annual ridership averaged 500,000, with a high of 554,266 in FY 2008. Revenue peaked in FY 2013 at $67,394,779. About 65% of the cost of operating the train is covered by fare revenue, a rate among Amtrak's long-distance trains second only to the specialized East Coast Auto Train.
Route The current Amtrak Empire Builder passes through Oregon, Washington, Idaho, Montana, North Dakota, Minnesota, Wisconsin, and Illinois. It makes service stops in Spokane, Washington, Havre, Montana, Minot, North Dakota, and Saint Paul, Minnesota. Its other major stops include Vancouver, Washington, Whitefish, Montana, Fargo, North Dakota, and Milwaukee, Wisconsin. It uses BNSF Railway's Northern Transcon from Seattle to Minneapolis, Minnesota Commercial from Minneapolis to St. Paul, the Canadian Pacific (former Milwaukee Road) from St. Paul to Rondout, Illinois, and Metra's Milwaukee District / North Line (former Milwaukee Road) from Rondout to Chicago. The St. Paul to Chicago portion currently follows the route of the former Twin Cities Hiawatha.
In pre-Amtrak days it used the Twin Zephyrs routing. The Seattle section uses the Cascade Tunnel and Stevens Pass as it traverses the Cascade Range to reach Spokane, while the Portland section runs along the Washington side of the Columbia River Gorge. The cars from the two sections are combined at Spokane. The combined train then traverses the mountains of northeastern Washington, northern Idaho and northwestern Montana, arriving in Whitefish in the morning. The schedule is timed so that the train passes through the Rocky Mountains (and Glacier National Park) during daylight, an occurrence that is more likely on the eastbound train during summer.
Passengers can see sweeping views as the Builder travels along the middle fork of the Flathead River, crossing the Continental Divide at Marias Pass. After crossing Marias Pass, the Empire Builder leaves Glacier National Park and enters the Northern Plains of eastern Montana and North Dakota. The land changes from prairie to forest as it travels through Minnesota. From Minneapolis-St. Paul, the Builder crosses the Mississippi River at Hastings, Minnesota and passes through southeastern Minnesota cities on or near Lake Pepin before crossing the Mississippi again at La Crosse, Wisconsin. It passes through rural southern Wisconsin, turns south at Milwaukee, and ends at Chicago Union Station.
The westbound Empire Builder leaves Chicago in early afternoon, arriving in Milwaukee just before the afternoon rush and in St. Paul in the evening. After traveling overnight through Minnesota, it spends most of the following day traveling through North Dakota and Montana, arriving at Glacier National Park in the early evening and splitting late at night in Spokane. The Seattle section travels through the Cascades overnight, arriving in Seattle in mid-morning. The Portland section arrives in the Tri-Cities just before breakfast and in Portland in mid-morning. The eastbound Seattle and Portland sections leave within five minutes of each other just before the afternoon rush, combining in Spokane and traveling through Montana overnight before arriving at Glacier National Park in mid-morning and Williston at dinner time.
After traveling overnight through North Dakota and Minnesota, it arrives in St. Paul at breakfast time, Columbus/Madison at lunch time, Milwaukee in early afternoon and Chicago just before the afternoon rush. Flooding The line has come under threat from flooding from the Missouri, Souris, Red, and Mississippi Rivers, and has occasionally had to suspend or alter service. Most service gets restored in days or weeks, but Devils Lake in North Dakota, which has no natural outlet, is a long-standing threat. The lowest top-of-rail elevation in the lake crossing is . In spring 2011, the lake reached , causing service interruptions on windy days when high waves threatened the tracks.
BNSF, which owns the track, suspended freight operations through Devils Lake in 2009 and threatened to allow the rising waters to cover the line unless Amtrak could provide $100 million to raise the track. In that case, the Empire Builder would have been rerouted to the south, ending service to Rugby, Devils Lake, and Grand Forks. In June 2011, an agreement was reached that Amtrak and BNSF would each cover one-third of the cost, with the rest to come from the federal and state governments. In December 2011, North Dakota was awarded a $10 million TIGER grant from the US Department of Transportation to assist with the state portion of the cost.
Work began in June 2012, and the track is being raised in two stages: 5 feet in 2012, and another 5 feet in 2013. Two bridges and their abutments are also being raised. When the track raise is complete, the top-of-rail elevation will be . This is 10 feet above the level at which the lake will naturally overflow and will thus be a permanent solution to the Devils Lake flooding. In the spring and summer of 2011 flooding of the Souris River near Minot, North Dakota blocked the route in the latter part of June and for most of July.
For some of that time the Empire Builder (with a typical consist of only four cars) ran from Chicago and terminated in Minneapolis/St Paul; to the west, the Empire Builder did not run east of Havre, Montana. (Other locations along the route also flooded, near Devils Lake, North Dakota and areas further west along the Missouri River.) Freight train interference An oil boom from the Bakken formation, combined with a robust fall 2013 harvest, led to a spike in the number of crude oil and grain trains using the BNSF tracks in Montana and North Dakota. The resulting congestion led to terrible delays for the Empire Builder, with the train receiving a 44.5% on-time record for November 2013, the worst on-time performance on the Amtrak network and well below congressional standards.
In some cases, the delays resulted in an imbalance of crew and equipment, forcing Amtrak to cancel runs of the Empire Builder. In May 2014, only 26% of Empire Builder trains had arrived within 30 minutes of their scheduled time, and delays averaged between 3 and 5 hours. In some cases, freight congestion and severe weather resulted in delays as long as 11 to 12 hours. This was a marked change from past years in which the Empire Builder was one of the best on-time performers in the entire system, ahead of even the flagship Acela Express. Due to the routine severe delays, Amtrak changed the schedule for stations west of St. Paul on April 15, 2014.
Scheduled times for westbound trains from St. Paul were made later, while eastbound, the train departed Seattle/Portland approximately three hours earlier. Operating hours for affected stations were also officially adjusted accordingly. The Amtrak announcement also said that the BNSF Railway was working on adding track capacity, and it was anticipated that sometime in 2015 the Empire Builder could be returned to its former schedule. In January 2015, it was announced that the train would resume its normal schedule. Former stops In 1970 the construction and filling of Lake Koocanusa necessitated the realignment of 60 miles of track between Stryker, Montana and Libby, Montana, and the construction of Flathead Tunnel, leading the Empire Builder to drop service to Eureka, Montana.
The Empire Builder also served Troy, Montana, until February 15, 1973. On October 1, 1979, the Empire Builder was rerouted to operate over the North Coast Hiawatha's old route between Minneapolis and Fargo, North Dakota. With this alignment change, the Empire Builder dropped Willmar, Morris, and Breckenridge, Minnesota, while adding St. Cloud, Staples, and Detroit Lakes, Minnesota. Another alignment change came on October 25, 1981, when the Seattle section was rerouted from the former Northern Pacific line (which had also become part of Burlington Northern in 1970) to Burlington Northern's line through the Cascade Tunnel over Stevens Pass.
This change eliminated service to Yakima, Ellensburg, and Auburn, Washington. It also introduced the Portland section, which returned service to the former Spokane, Portland and Seattle Railway line (which had become part of Burlington Northern in 1970) along the Washington shore of the Columbia River. The route kept Pasco and added Wishram, Bingen-White Salmon, and Vancouver, all in Washington. From Vancouver, the Portland section of the Empire Builder uses the same route as the Coast Starlight and Cascades trains to Portland Union Station. It has been proposed that the Empire Builder and Hiawatha Service trains serving Glenview, Illinois, have their station stop shifted one station north to the Metra station at North Glenview, to eliminate stops that block traffic on Glenview Road.
North Glenview would have to be modified to handle additional traffic, and the move depends on commitments from Glenview, the Illinois General Assembly, and Metra. In Minnesota, the Empire Builder returned to Saint Paul Union Depot on May 7, 2014, 43 years after it last served the station on the day before Amtrak began operations. Renovation of the 1917 Beaux Arts terminal was undertaken from 2011 through 2013, resulting in a multimodal terminal used by Jefferson Bus Lines, Greyhound Lines, commuter buses, and the Metro Green Line, which provides a light rail connection to downtown Minneapolis. The station replaced Midway Station, which had opened in 1978 after the initial abandonment of Saint Paul Union Depot in 1971 and the demolition of the Minneapolis Great Northern Depot in 1978.
Equipment Current equipment Like all long-distance trains west of the Mississippi River, the Empire Builder uses bilevel Superliner passenger cars. The Empire Builder was the first train to be fully equipped with Superliners, with the first run occurring on October 28, 1979. In summer 2005, the train was "relaunched" with newly refurbished equipment.
A typical Empire Builder consist is configured as follows, with the assigned section west of Spokane shown in parentheses:
Two GE Genesis P42 locomotives
Viewliner baggage car (Seattle)
Transitional crew sleeper (Seattle)
Sleeper (Seattle)
Sleeper (Seattle)
Diner (Seattle)
Coach (Seattle)
Coach (Seattle)
Sightseer Lounge/Café (Portland)
Coach/Baggage (Portland)
Coach (Portland)
Sleeper (Portland)
Sleeper (Portland)
Coach (Chicago–St. Paul only)
In Spokane, the westbound train is split: the locomotives, baggage car, and first six passenger cars (including the diner) continue to Seattle, while a single P42 locomotive from Spokane takes the rearmost five cars (including the lounge/café) to Portland.
Eastbound, the sections are combined in reverse. To add capacity during peak travel periods, an additional coach is added to the rear of the train between Chicago and St. Paul; it is left overnight in St. Paul to be picked up by the next day's return trip. This car is designated train 807/808, while the cars in the Portland section are designated train 27/28 and those in the Seattle section train 7/8. Historical equipment When the train was first launched in 1929, the Great Northern provided new heavyweight consists. When the railway received five new streamlined trainsets in 1947, the old heavyweight sets were used to reintroduce the Oriental Limited.
In 1951, the Empire Builder was re-equipped with six new streamlined trainsets; the 1947 cars were used to launch the Western Star, while the Oriental Limited was retired. When the GN finally acquired dome coaches in 1955, the 1951 coaches went to the Western Star, while the 1947 coaches went to the pool of spare and extra-movement cars. Ownership of the cars on the Empire Builder was largely split between the Great Northern and the Chicago, Burlington and Quincy Railroad (CB&Q), though a couple of cars in the original consists were owned by the Spokane, Portland and Seattle Railway (SP&S). In this consist, one of the 48-seat "chair" cars and one of the 4-section sleepers were used for the connection to Portland, while the rest of the train connected to Seattle.
The Great Northern coaches eventually found their way into state-subsidized commuter service for the Central Railroad of New Jersey after the Burlington Northern merger, and they remained in service until 1987, when NJ Transit retired its last E8A locomotive. Some of these cars remain in New Jersey, as do some coaches acquired from the Union Pacific. One of the 28-seat coach-dinette cars also remains in New Jersey, stored near Interstate 78 in tattered Amtrak colors.
Notes Footnotes References Further reading External links Category:Amtrak routes Category:Passenger trains of the Great Northern Railway (U.S.) Category:North American streamliner trains Category:Passenger rail transportation in Illinois Category:Passenger rail transportation in Wisconsin Category:Passenger rail transportation in Minnesota Category:Passenger rail transportation in North Dakota Category:Passenger rail transportation in Montana Category:Passenger rail transportation in Idaho Category:Passenger rail transportation in Oregon Category:Passenger rail transportation in Washington (state) Category:Railway services introduced in 1929 Category:Night trains of the United States
The European Sleep Apnea Database (ESADA) (also spelled European Sleep Apnoea Database and also known as the European Sleep Apnoea Cohort) is a collaboration between European sleep centres as part of the European Cooperation in Science and Technology (COST) Action B 26. The main contractor of the project is the Sahlgrenska Academy at Gothenburg University, Institute of Medicine, Department of Internal Medicine, and the co-ordinator is Jan Hedner, MD, PhD, Professor of Sleep Medicine. The book Clinical Genomics: Practical Applications for Adult Patient Care cited ESADA as an example of the kind of initiative that affords an "excellent opportunity" for future collaborative research into genetic aspects of obstructive sleep apnea syndrome (OSAS).
Both the European Respiratory Society and the European Sleep Research Society have noted the database's impact on cooperative research efforts. History 2006 – 2010 In 2006, the European Sleep Apnea Database (ESADA) began as an initiative among 27 European sleep study facilities to pool their information into one shared resource. It was formed as part of the European Cooperation in Science and Technology (COST) Action B 26. In addition to financial help from COST, the initiative received assistance from the companies Philips Respironics and ResMed. The database storing the association's resource information is located in Gothenburg, Sweden.
The group's goal was twofold: to serve as a reference guide for those researching sleep disorders, and to compile information about how different caregivers treat patients suffering from sleep apnea. A total of 5,103 patients were tracked from March 2007 to August 2009. Data collected on these patients included symptoms, medication, medical history, and sleep data, all entered into an online format for further analysis. In 2010, database researchers reported their methodology and results to the American Thoracic Society, describing observed rates of metabolic and cardiovascular changes in patients with obstructive sleep apnea. The 2010 research resulted from a collaboration among 22 study centres in 16 European countries, involving 27 researchers.
The primary participants who presented to the American Thoracic Society included researchers from Sahlgrenska University Hospital, Gothenburg, Sweden; Technion – Israel Institute of Technology, Haifa, Israel; the National TB & Lung Diseases Research Institute, Warsaw, Poland; the CNR Institute of Biomedicine and Molecular Immunology, Palermo, Italy; Istituto Auxologico Italiano, Ospedale San Luca, Milan, Italy; and St. Vincent's University Hospital, Dublin, Ireland. Their analysis was published in 2010 in the American Journal of Respiratory and Critical Care Medicine. 2011 – present In 2011, there were 22 sleep disorder centres in Europe involved in the collaboration. The group published research in 2011 analyzing the percentage of sleep apnea patients who are obese.
By 2012, the database maintained information on over 12,500 patients in Europe and contained DNA samples from 2,600 individuals. ESADA was represented in 2012 at the 21st annual meeting of the European Sleep Research Society in Paris, France, and was one of four European sleep research networks that held a session at the event. Pierre Escourrou and Fadia Jilwan wrote a 2012 article for the European Respiratory Journal after studying ESADA data on 8,228 patients from 23 different facilities; they analyzed whether polysomnography was a good measure for hypopnea and sleep apnea. Researchers from the department of pulmonary diseases at Turku University Hospital in Turku, Finland, compared variations between sleep centres in the ESADA database and published their findings in the European Respiratory Journal.
They looked at the traits of 5,103 patients from 22 centres, reporting on the average age of patients in the database and the regional prevalence of sleep studies performed with cardiorespiratory polygraphy. The database added a centre in Hamburg, Germany, in 2013, managed by physician Holger Hein. The group's 2013 annual meeting was held in Edinburgh, United Kingdom, and was run by Renata Riha. By March 2013, approximately 13,000 patients were being studied in the program, with about 200 additional patients added to the database each month. An analysis published by researchers from Italy and Sweden in September 2013 in the European Respiratory Journal examined whether there was a correlation between impaired renal function and obstructive sleep apnea.
They analyzed data from 24 sleep centres in 17 European countries, covering 8,112 patients in total. They tested whether patients of varying demographics and with other existing health problems had a different probability of kidney function problems if they also suffered from obstructive sleep apnea. In 2014, researchers released a study of 5,294 patients from the database comparing the prevalence of sleep apnea with elevated blood sugar. Their results were published in the European Respiratory Journal. They studied glycated hemoglobin levels in the patients and compared them with measured sleep apnea severity, contrasting levels among individuals with less severe sleep apnea against those with more severe disease.
As of 20 March 2014, the database included information on a total of 15,956 patients. A 2014 article in the European Respiratory Journal drawing on ESADA data analyzed whether a lack of adequate oxygen during a night's sleep was an indicator of high blood pressure. Reception In the 2013 book Clinical Genomics: Practical Applications for Adult Patient Care, ESADA is described as an example of the kind of initiative that affords an "excellent opportunity" for future collaborative research into genetic aspects of obstructive sleep apnea syndrome (OSAS). Both the European Respiratory Society and the European Sleep Research Society have noted the database's impact on cooperative research efforts.
See also Catathrenia Deviated septum Narcolepsy Obesity hypoventilation syndrome Congenital central hypoventilation syndrome Sleep medicine Sleep sex Snoring Notes References Further reading External links Category:Sleep disorders Category:University of Gothenburg Category:Databases in Sweden Category:Health informatics Category:Science and technology in Europe Category:Organizations established in 2006 Category:Pulmonology and respiratory therapy organizations Category:International medical associations of Europe
The Mississippi River Delta is the river delta at the confluence of the Mississippi River with the Gulf of Mexico, in Louisiana in the southeastern United States. It is a area of land that stretches from Vermilion Bay in the west to the Chandeleur Islands in the east, on Louisiana's southeastern coast. It is part of the American Mediterranean Sea and the Louisiana coastal plain, one of the largest areas of coastal wetlands in the United States. The Mississippi River Delta is the seventh-largest river delta on Earth (USGS) and is an important coastal region for the United States, containing more than of coastal wetlands and 37% of the estuarine marsh in the conterminous U.S.
The coastal area is the nation's largest drainage basin and drains about 41% of the contiguous United States into the Gulf of Mexico at an average rate of . History and growth of the Mississippi River Delta The modern Mississippi River Delta formed over approximately the last 4,500 years as the Mississippi River deposited sand, clay, and silt along its banks and in adjacent basins. The Mississippi River Delta is a river-dominated delta system, influenced by the largest river system in North America. The shape of the current birdfoot delta reflects the dominance the river exerts over the other hydrologic and geologic processes at play in the northern Gulf of Mexico.