US National Estuarine Research Reserve System

The United States National Estuarine Research Reserve System (NERRS, https://coast.noaa.gov/nerrs/) was established as an element of the Coastal Zone Management Act of 1972. It creates a representative system of locally managed estuarine reserves that conduct long-term research, water quality monitoring and educational programs designed to promote coastal stewardship. As of 2008, this network of protected areas includes 27 reserves. The program oversees more than one million acres (4,000 km²) of estuarine land, wetlands, and water.

Goals and Priority Issues

NERRS is structured as a partnership between individual states and the Federal Government and is the research arm of the U.S. Coastal Zone Management Program. The U.S. National Oceanic and Atmospheric Administration (NOAA) within the Department of Commerce administers the program. The goal of NERRS is to conduct long-term research and education on, and promote stewardship of, coastal wetlands and estuaries in a range of sites from those that are pristine to those heavily impacted. Strategic goals for 2005-2010 are to “strengthen the protection and management of representative estuarine ecosystems to advance estuarine conservation, research and education; increase the use of reserve science and sites to address priority coastal management issues; and enhance people’s ability and willingness to make informed decisions and take responsible actions that affect coastal communities and ecosystems” (NOAA website). A NERRS priority is to communicate research findings to coastal managers. The system’s 2005-2010 Strategic Plan identifies four priority national issues for research: impacts of land use and population growth; habitat loss and alteration; water quality degradation; and changes in biological communities.

Governance Framework of the Program

The process for designating new NERRs begins when a state governor submits a letter of interest to NOAA requesting funds to identify a site and select the local lead agency. If NOAA approves the request, it provides up to $100,000 (a 50% match from the state is required) to select the site and prepare a basic characterization of the site’s physical, chemical and biological characteristics; an Environmental Impact Statement; and a Management Plan. NOAA requires extensive public participation and collaboration in the designation process. Management partners in a NERR may include state agencies, non-profit groups, universities and members of the local community. The NERR may also work with Sea Grant extension and education staff and others in identifying key coastal resource issues to address. NOAA provides annual core funding and the state must provide matching funds. Combined, these are used to develop management plans, conduct school and public education programs, maintain reserve facilities and acquire new properties.
As a national system of protected areas, the NERRS conducts the following core programs at each site:
- System-wide Monitoring Program: tracks changes over the long term to understand how human activities and natural events impact coastal ecosystems
- Graduate Research Fellows Program: provides students the opportunity to conduct research at a reserve
- Coastal Training Program (CTP): targets the needs of local decision-makers by offering information, skills building, lectures, and demonstration projects and providing networking opportunities that can foster new collaborative solutions
- School and Public Education Programs: build stewardship for coastal estuaries within the general public. NERRS is mandated "to enhance public awareness and understanding of estuarine areas, and provide suitable opportunities for public education and interpretation." Most reserves provide experiential education programs for elementary and secondary schools. The Estuary Live program enables students to learn over the internet.
- Stewardship: works with the surrounding local communities to participate in such activities as land acquisition, restoration, habitat mapping and policy development.
The NERRS, NOAA and state coastal management programs work together with the Cooperative Institute for Coastal and Estuarine Environmental Technology (CICEET) to produce practical tools for restoring and managing coastal ecosystems.
- Estuaries and tidal rivers
- Channel Islands National Marine Sanctuary – Case Study
- US Coastal Zone Management Program
- Coastal Barrier Resources System
- Overview of Coastal Habitat Protection and Restoration in the United States
- Essential Fish Habitat
- Chesapeake Bay Program
- Clean Water Act
- US National Estuary Program
- US National Marine Sanctuaries
- US National Wildlife Refuge System
- Rhode Island Salt Pond Special Area Management Plan – Case Study
- US Sea Grant College Program
- Tampa Bay Estuary Program
- US Army Corps of Engineers’ Coastal Programs
- NERRS Homepage https://coast.noaa.gov/nerrs/
- Estuaries EPA https://www.epa.gov/nep
- Coastal and Estuarine Research Federation http://erf.org/
- National Estuarine Research Reserve Association http://nerra.org/
- Cooperative Institute for Coastal and Estuarine Environmental Technology http://worldoceanobservatory.org/directory-listing/cooperative-institute-coastal-and-estuarine-environmental-technology-ciceet
The pursuit of statehood dominated Maine politics between 1787 and 1820, when it finally achieved statehood separate from Massachusetts. Until 1820, the District of Maine simply comprised the eastern counties of the Commonwealth of Massachusetts and thus shared all its political characteristics (see Massachusetts entry). There were significant efforts at statehood in 1788–1789, 1792, 1803, and 1816, but the district's populous coastal communities proved unwilling to sever their connections with the Commonwealth. After 1803 the statehood issue increasingly became identified with Jeffersonianism. Backcountry residents became increasingly restive, in no small part because of antipathy to absentee proprietors who owned vast swathes of Maine's undeveloped hinterland. The situation remained volatile until the issue became politicized by Jeffersonian leaders who saw a chance to land a major blow against the Boston-based Federalist elite. The dominant figure in the struggle for statehood was William King, a wealthy merchant who based his political career on the grievances of squatters and religious dissenters such as himself. In a timely defection, in 1803 he became a Republican, portending the district's conversion; in 1805 the District of Maine voted for a Jeffersonian gubernatorial candidate, and a majority of its voters never supported Federalism thereafter. The War of 1812 proved a catalyst for statehood. Militarily abandoned by Massachusetts, Mainers increasingly realized that only statehood would endow them with a political voice. Yet an 1816 statehood effort failed. Stung by the defeat, King realized that, because of a peculiarity in federal navigation laws, Maine's coastal communities would not break with old Massachusetts. Utilizing his political connections in Washington, King helped refashion national maritime policies in such a way that separation did not threaten the shipping trades so essential to Maine's coastal communities. With this obstacle removed, in July 1819 an election based on separation passed in all nine Maine counties; by October, representatives held a constitutional convention. Maine's constitution departed significantly from that of Massachusetts and can be seen as a triumph of Jeffersonian principles. It guaranteed freedom of both speech and press; absolute freedom of religion; and universal male suffrage for those over twenty-one, with no property qualifications whatever and no racial restrictions. Maine's legislature was bicameral, featuring a House of Representatives and a Senate, with November elections every two years for both houses. Unlike Massachusetts, which, to ensure the dominance of Suffolk County, based the number of senators on each county’s wealth, Maine apportioned senators on the basis of population. The new state's executive powers were somewhat altered from those of old Massachusetts, which arguably had the strongest governorship in the nation, but they nonetheless remained strong. Governors were not required to be Christians, and they served a four-year term. There was no lieutenant governor; the president of the Senate was designated as the successor to any incapacitated governor. The combined Senate and House elected a seven-member council to assist the governor. Congress approved Maine's statehood in 1820 as part of the "Missouri Compromise." Given his prominence in the statehood movement, it is appropriate that King became Maine's first governor.
The Federalist Party

The Federalist Party was dominated by a man who never actually ran for public office in the United States - Alexander Hamilton. "Alexander Hamilton was, writes Marcus Cunliffe, 'the executive head with the most urgent program to implement, with the sharpest ideas of what he meant to do and with the boldest desire to shape the national government accordingly.' In less than two years he presented three reports, defining a federal economic program which forced a major debate not only on the details of the program but on the purpose for which the union had been formed. Hamilton's own sense of purpose was clear; he would count the revolution for independence a success only if it were followed by the creation of a prosperous commercial nation, comparable, perhaps even competitive, in power and in energy, with its European counterparts." (fn: Marcus Cunliffe, The Nation Takes Shape, 1789-1837, (Chicago, 1959), 23.) (Linda K. Kerber, History of U.S. Political Parties Volume I: 1789-1860: From Factions to Parties. Arthur M. Schlesinger, Jr., ed. New York, 1973, Chelsea House Publisher. p. 11) "Federalists created their political program out of a political vision. They had shared in the revolutionaries' dream of a Republic of Virtue, and they emerged from a successful war against empire to search for guarantees that the republican experiment would not collapse." (Kerber, p. 3) "The Federalist political demand was for a competent government, one responsible for the destiny of the nation and with the power to direct what that destiny would be. What was missing in postwar America, they repeatedly complained in a large variety of contexts, was order, predictability, stability. A competent government would guarantee the prosperity and external security of the nation; a government of countervailing balances was less likely to be threatened by temporary lapses in civic virtue, while remaining strictly accountable to the public will." (Kerber, p. 4) "So long as Federalists controlled and staffed the agencies of the national government, the need to formulate alternate mechanisms for party decision making was veiled; with a Federalist in the White House, Federalists in the Cabinet, and Federalist majorities in Congress, the very institutional agencies of the government would themselves be the mechanism of party.
Federal patronage could be used to bind party workers to the Federalist 'interest.' 'The reason of allowing Congress to appoint its own officers of the Customs, collectors of the taxes and military officers of every rank,' Hamilton said, 'is to create in the interior of each State, a mass of influence in favor of the Federal Government.' (fn: Alexander Hamilton, 1782, quoted in Lisle A. Rose, Prologue to Democracy: The Federalists in the South, 1789-1800, (Lexington, Kentucky, 1968), 3.) Federalists thought of themselves as a government, not as a party; their history in the 1790's would be the history of alignments within the government, rather than of external alignments which sought to influence the machinery of government." (Kerber, p. 10) "Major national issues invigorated the process of party formation, as state groups came, slowly and hesitantly, to resemble each other. The issues on which pro-administration and anti-administration positions might be assumed increased in number and in obvious significance; the polarity of the parties became clearer." (Kerber, p. 11) "As Adams' presidential decisions sequentially created a definition of the administration's goals as clear as Hamilton's funding program had once done, the range of political ideology which called itself Federalist simply became too broad for the party successfully to cast over it a unifying umbrella. Federalists were unified in their response to the XYZ Affair, and in their support of the Alien and Sedition Acts, which passed as party measures in the Fifth Congress, but in little else. The distance between Adams and Hamilton - in political philosophy, in willingness to contemplate war with France, in willingness to manipulate public opinion - was unbridgeable; Hamilton's ill-tempered anti-Adams pamphlet of 1800 would be confirmation of a long-established distaste." (Kerber, p. 14) "One result of the war was to add to Federalist strength and party cohesion. There were several varieties of Federalist congressional opinion on the war: most believed that the Republicans had fomented hard feeling with England so that their party could pose as defender of American honor; many believed that in the aftermath of what they were sure would be an unsuccessful war the Republicans would fall from power and Federalists would be returned to office . . . Regardless of the region from which they came, Federalists voted against the war with virtual unanimity." (Kerber, p. 24) "As an anti-war party, Federalists retained their identity as an opposition well past wartime into a period that is usually known as the Era of Good Feelings and assumed to be the occasion of a one party system. In 1816, Federalists 'controlled the state governments of Maryland, Delaware, Connecticut and Massachusetts; they cast between forty percent and fifty percent of the popular votes in New Jersey, New York, Rhode Island, New Hampshire and Vermont...Such wide support did not simply vanish...' (fn: Shaw Livermore, Jr. The Twilight of Federalism: The Disintegration of the Federalist Party 1815-1830, (Princeton, 1962), 265.) Rather, that support remained available, and people continued to attempt to make careers as Federalists (though, probably fewer initiated new careers as Federalists). Because men like Rufus King and Harrison Gray Otis retained their partisan identity intact, when real issues surfaced, like the Missouri debates of 1820, a 'formed opposition' still remained to respond to a moral cause and to oppose what they still thought of as a 'Virginia system.'
Each of the candidates in the disputed election of 1824, including Jackson, had Federalist supporters, and their presence made a difference; Shaw Livermore argues that the central 'corrupt bargain' was not Adams' with Clay, but Adams' promise of patronage to Federalists which caused Webster to deliver the crucial Federalist votes that swung the election. If the war had increased Federalist strength, it also, paradoxically, had operated to decrease it, for prominent Federalists rallied to a beleaguered government in the name of unity and patriotism. These wartime republicans included no less intense Federalists than Oliver Wolcott of Connecticut and William Plumer of New Hampshire, both of whom went on to become Republican governors of their respective states, and in their careers thus provide emblems for the beginning of a one party period, and the slow breakdown of the first party system." (Kerber, p. 24) "The dreams of the Revolution had been liberty and order, freedom and power; in seeking to make these dreams permanent, to institutionalize some things means to lose others. The Federalists, the first to be challenged by power, would experience these contradictions most sharply; a party that could include John Adams and Alexander Hamilton, Charles Cotesworth Pinckney and Noah Webster, would be its own oxymoron. In the end the party perished out of internal contradiction and external rivalry, but the individuals who staffed it continued on to staff its successors." (Kerber, p. 25)
- History of U.S. Political Parties Volume I: 1789-1860: From Factions to Parties. Arthur M. Schlesinger, Jr., ed. New York, 1973, Chelsea House Publisher.
- The Revolution of American Conservatism: The Federalist Party in the Era of Jeffersonian Democracy. David Hackett Fischer. New York, 1965, Harper and Row.
- The Age of Federalism: The Early American Republic, 1788-1800. Stanley Elkins and Eric McKitrick. New York, 1993, Oxford University Press.
The Federalists were referred to by many monikers over the years by newspapers.
- In 1809, The Concord Gazette refers to the Federalist Ticket as the American Ticket.
- Beginning in 1810, the Newburyport Herald (MA) began referring to Federalists as the American Party (as opposed to the "French" Party, who were Republicans). This continued in the 1811 elections.
The Aurora, based in Philadelphia, the most well-known Republican newspaper of the era (see American Aurora: A Democratic-Republican Returns by Richard N. Rosenfeld), in the February 11, 1800 issue referred to Mr. Holmes, the losing candidate for the Special Election for the Philadelphia County seat in the House of Representatives, as an "anti-republican". The October 7, 1799 issue of the Maryland Herald (Easton) referred to the Federalist ticket of Talbot County as Federal Republicans. The term would continue to be used intermittently throughout the next 20 years. Newspapers that used this term included the Gazette of the United States (Philadelphia) and Philadelphia Gazette in 1800, the Newport Mercury in 1808, the New Bedford Mercury in 1810, the True American (Philadelphia) in 1812, the Northumberland Republican (Sunbury) in 1815, the United States Gazette (Philadelphia) in 1816 and the Union (Philadelphia) in 1821 and 1822. Friends of Peace / Peace / Peace Ticket: Beginning in 1812 ("In laying before our readers the above Canvass of this county, a few remarks become necessary, to refute the Assertion of the war party, that the Friends of Peace are decreasing in this country." Northern Whig (Hudson). May 11, 1812.)
and continuing through to 1815 a number of newspapers referred to the Federalists as the Peace Party (or Peacemaker Party, as the Merrimack Intelligencer (Haverhill) of March 19, 1814 used), as the Peace Ticket or as the Friends of Peace due to their opposition of the War of 1812 (many of these same newspapers referred to the Republicans as the War Party). This use occurred all through at least August of 1815, with the Raleigh Minerva of August 18, 1815 referring to the Federalist candidates as Peace candidates. These newspapers include the Columbian Centinel (Boston), Merrimack Intelligencer (Haverhill), Providence Gazette, the New York Evening Post, the New York Spectator, the Commercial Advertiser (New York), Northern Whig (Hudson), the Broome County Patriot (Chenango Point), the Independent American (Ballston Spa), the Baltimore Patriot, the Alexandria Gazette, Poulson's, Middlesex Gazette (Middletown), the Political and Commercial Register (Philadelphia), Freeman's Journal (Philadelphia), the Carlisle Herald, Northampton Farmer, Intelligencer and Weekly Advertiser (Lancaster), National Intelligencer (Washington), The Federal Republican (New Bern), the Raleigh Minerva, The Star (Raleigh) and Charleston Courier. The New Hampshire Gazette (Portsmouth) took the opposite side, listing the Federalists in the March 16, 1813 edition as "Advocates of Dishonorable Peace and Submission." "The Tyranny of Printers": Newspaper Politics in the Early American Republic. Jeffrey L. Pasley. Charlottesville, 2001, University Press of Virginia. What is today referred to as the Democratic Republican Party did not exist as such under that name. "The party name which the Jeffersonians used most commonly in self-designation was Republican. Since nearly all Americans professed to be supporters of a republic, Federalists were reluctant to allow their opponents the advantage of this name, preferring to label them as Antifederalists, Jacobins, disorganizers, or, at best, Democrats." (Noble E. Cunningham, Jr., History of U.S. Political Parties Volume I: 1789-1860: From Factions to Parties. Arthur M. Schlesinger, Jr., ed. New York, 1973, Chelsea House Publisher. p. 240.) "No precise date can be given for the establishment of the Republican party, for it did not spring suddenly into being, and even those leaders most intimately involved in its formation were not fully aware of what they were creating. The beginnings of what in course of time became the Republican party can be found in the Second Congress in the congressional faction that contemporaries referred to as the 'republican interest.' . . . An examination of roll calls during the Second Congress indicates that a voting bloc was forming around Madison in opposition to another bloc that united in support of Hamilton's program. While only about half of the membership of the House could be identified with one or the other of these factions, two such groups had not been observable in the First Congress." (Cunningham, p. 241) "As members of Congress defended their legislative records and sought reelection, they took to the electorate the issues and the disputes that had divided Congress, and they tended in their campaigns for reelection to impart to the voters something of the partisanship that was developing in Congress. Thus, the party divisions in Congress filtered down to the voters through the electoral process, and voters came to align along the lines that divisions in Congress had marked out. 
In this process the congressional factions acquired the mass followings in the county necessary to transform them from capital factions into national political parties." (Cunningham, p. 244) Though Thomas Jefferson was seen as the primary leader of the emerging Republican Party, his retirement in 1793 would force that mantle back upon James Madison. "Contemporaries referred to 'Madison's party,' and, when Jefferson was put forward for the presidency in 1796, he was recognized as the candidate of Madison's party. Adams's supporters warned that 'the measures of Madison and Gallatin will be the measures of the executive' if Jefferson were elected. Under Madison's leadership, the Republican party in Congress moved from a role characterized largely by opposition to administration measures, mostly Hamiltonian inspired, to one of offering policy alternatives and proposing Republican programs." (Cunningham, p. 246) "As the country became dangerously polarized, the Federalists, in 1798 with the passage of the Alien and Sedition Laws, used the full power of the government in an effort to destroy their opponents, whom they saw as subversive. The Republicans, forced to do battle for their very survival, were compelled to change their strategy radically. Prior to 1798 they had optimistically believed that the people would repudiate leaders who supported antirepublican measures hostile to the general good of society. By 1798, however, the Federalists' electoral successes and their hold on the federal government seemed to belie that belief. Therefore, the Republicans shifted their focus of attention from the national to the state level. And by emphasizing a more overtly, self-consciously sectional, political enclave strategy, they left the clear implication that state secession and the breakup of the union might follow if the federal government refused to modify its policies and actions to make them more acceptable to opponents, especially Southerners." (American Politics in the Early Republic: The New Nation in Crisis. James Roger Sharp. New Haven, 1993, Yale University Press. p. 12) "On the national level, Republican members of Congress through their informal associations in the national capital formed the basic national party structure. Many of them lodged together in boarding houses or dined together in small groups where there were ample opportunities to plot party tactics. They kept in close touch with political leaders and party organizations in their home states. In 1800, Republican members introduced what was to become the most important element of national party machinery and the most powerful device for the maintenance of congressional influence of the leadership of the party: the congressional nominating caucus." (Cunningham, p. 252) "The coming to power of the Jeffersonians in 1801 marked the beginning of the Republican era that saw the presidency passed from Jefferson to Madison to Monroe. When the Virginia dynasty came to an end in 1825, the presidential office went to a former Federalist who had become a Republican while Jefferson was president. But, although John Quincy Adams was a Republican, the presidential election of 1824 shattered the Republican party and destroyed the congressional nominating caucus which had given direction to the party's national structure since 1800. Adams's presidency was a period of restructuring of parties - a transitional period from the first party system of the Federalists and the Jeffersonians to the second party system of the age of Jackson." 
(Cunningham, p. 258-259). "During the period from its rise in the 1790's to its breakup in the 1820's, the Jeffersonian Republican party made contributions of major significance to the development of the American political system. It demonstrated that a political party could be successfully organized in opposition to an administration in power in the national government, win control over that government, and produce orderly changes through the party process. In challenging the Federalist power, Republicans were innovative in building party machinery, organizing political campaigns, employing a party press, and devising campaign techniques to stimulate voter interest in elections and support of Republican candidates at the polls. In the process, it became acceptable for candidates to campaign for office and for their partisans to organize campaign committees, distribute campaign literature, see that voters get to the polls, and adopt other practices which, though subsequently familiar features of American political campaigns, previously had been widely regarded with suspicion and distrust. Many of the methods of campaigning and the techniques of party organization, introduced by the Jeffersonian Republicans, while falling into disuse by the end of the Republican era, would be revived by the Jacksonians. In taking office in 1801, the Jeffersonians led the nation through the first transfer of political power in the national government from one party to another; and Jefferson demonstrated that the president could be both the head of his party and the leader of the nation." (Cunningham, p. 271)
- History of U.S. Political Parties Volume I: 1789-1860: From Factions to Parties. Arthur M. Schlesinger, Jr., ed. New York, 1973, Chelsea House Publisher.
- American Politics in the Early Republic: The New Nation in Crisis. James Roger Sharp. New Haven, 1993, Yale University Press.
- Partisanship and the Birth of America's Second Party, 1796-1800: "Stop the Wheels of Government". Matthew Q. Dawson. Westwood, CT, 2000, Greenwood Press.
- Party of the People: A History of the Democrats. Jules Witcover. New York, 2003, Random House.
Beginning in 1799, many Federalist papers began to refer to the Republican Party as Democrats or the Democratic Party. This continued throughout the first quarter of the 19th century, until what is currently known as the Democratic Party emerged among the followers of Andrew Jackson in the 1828 Presidential Election. Republicans were also called by a variety of different terms in various newspapers throughout the period:
Though the Anti-Federalists were not quite the same group as the Republicans who would develop after 1792, some newspapers still referred to the Republicans by that name. The term was used by the following newspapers in the following elections:
- Porcupine's Gazette (Philadelphia). October 22, 1798. Pennsylvania 1798 Assembly, Chester County.
- Virginia Gazette (Richmond). April 30, 1799. Virginia 1799 House of Delegates, New Kent County.
- The Virginia Federalist (Richmond). April 26, 1800. Virginia 1800 House of Delegates, Norfolk County.
- Virginia Gazette (Richmond). May 12, 1802. Virginia 1802 House of Delegates, Bedford County.
- Virginia Gazette (Richmond). May 12, 1802. Virginia 1802 House of Delegates, Pittsylvania County.
- The Salem Gazette. May 17, 1805. Massachusetts 1805 House of Representatives, Salem.
Though the term "Democratic Republican" is commonly used today to distinguish the Jeffersonian Republicans from the later Republican Party, and because so many of the Jeffersonian Republicans eventually became Jacksonian Democrats, the term was extremely rare during the actual period. It was used by the Readinger Adler in the October 27, 1818 edition recording the 1818 county elections in Pennsylvania. French / War / Warhawk / Jacobin: Starting in 1798, various Federalist newspapers would refer to Republicans as Jacobins. ("In Newbern district the contest lay between two federalists -- No Jacobin had the effrontery to offer himself." United States Gazette. September 1, 1798.) These references continued through until at least 1810. ("From the Cooperstown Federalist: The election in this County has terminated in favor of the Jacobin Ticket for Assembly. An important revolution has been effected by the most shameful artifices. Never before were the jacobin ranks so completely formed and thoroughly drilled for action. We hope next week to be able to lay before our readers a correct statement of votes, and to exhibit to the world a picture of depravity in the conduct of some of the inspectors of the election which has no parallel." The American (Herkimer). May 3, 1810.) Beginning in 1810, the Newburyport Herald (MA) began referring to Republicans as the French Party (as opposed to the "American" Party, who were Federalists). This continued in the 1811 elections. Beginning in 1812 ("In laying before our readers the above Canvass of this county, a few remarks become necessary, to refute the Assertion of the war party, that the Friends of Peace are decreasing in this country." Northern Whig (Hudson). May 11, 1812.) and continuing through 1813 and 1814, a number of newspapers were referring to the Republicans as the War Party (or Warhawk Party, as the Merrimack Intelligencer (Haverhill) of March 19, 1814 used) due to their support of the Madison administration and the War of 1812 (most of these same papers referred to the Federalists as the Peace Party). These newspapers include the Trenton Federalist, the Columbian Centinel (Boston), the Northern Whig (Hudson), the Independent American (Ballston Spa), the Broome County Patriot (Chenango Point), the New York Spectator, the Commercial Advertiser (New York), the New York Evening Post, the Albany Gazette, the Political and Commercial Register (Philadelphia), the Merrimack Intelligencer (Haverhill), The Federal Republican (New Bern), the Freeman's Journal (Philadelphia), Alexandria Gazette, Poulson's, Middlesex Gazette (Middletown), the Raleigh Minerva and The Star (Raleigh). Jackson / Jacksonian: With the Presidential election of 1824 split among four candidates who were, ostensibly, members of the same political party, the divisions among the Republican Party began to be apparent. The phrase "Jackson" or "Jacksonian" candidate was used in nearly every state election in Georgia in 1824 to distinguish between those who were supporters of Andrew Jackson as opposed to the supporters of William H. Crawford. The Maryland Republican (Annapolis) and the Federal Gazette (Baltimore) used the term "Jacksonian" in the Cecil County elections of 1824 (as opposed to "Adamite" or "Crawfordite") and the Allegheny and Butler county election in Pennsylvania in 1824. The New Hampshire Gazette of March 5, 1816 would refer to the Republican ticket as the Whig Ticket and as being in favor of Peace and Commerce.
To find the third angle of a triangle whose known angles are 47° and 43°, first add the two known angles: 47 + 43 = 90. The sum of the angles of a triangle is always 180°, so subtract 90 from 180 to find the third, unknown angle: 180 - 90 = 90. Because the third angle is 90°, and 90° is a right angle, this is a right triangle.

Given a mass of Na = 115 g, find the mass of NaCl produced. The reaction is 2Na(s) + Cl2(g) → 2NaCl(s). Since Cl2 is in excess, Na is the limiting reagent. Per the reaction stoichiometry, Na : NaCl = 1 : 1, i.e. moles of Na reacted = moles of NaCl formed. Moles of Na = mass of Na / atomic mass = 115 g / 23 g·mol⁻¹ = 5 moles, therefore moles of NaCl = 5. The molar mass of NaCl is 58 g/mol, so mass of NaCl = 5 moles × 58 g·mol⁻¹ = 290 g. The amount of NaCl produced is 290 g.
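The same stoichiometry can be checked programmatically. Below is a minimal Python sketch, assuming the rounded molar masses used above (Na ≈ 23 g/mol, NaCl ≈ 58 g/mol); the function name is illustrative.

```python
# Stoichiometry check for 2 Na + Cl2 -> 2 NaCl, with Cl2 in excess.
# Molar masses are the rounded values used in the worked answer above (assumed).

MOLAR_MASS_NA = 23.0    # g/mol (rounded)
MOLAR_MASS_NACL = 58.0  # g/mol (rounded)

def nacl_mass_from_na(mass_na_g: float) -> float:
    """Mass of NaCl produced when Na is the limiting reagent (1:1 mole ratio)."""
    moles_na = mass_na_g / MOLAR_MASS_NA
    moles_nacl = moles_na  # 2 Na -> 2 NaCl, so the mole ratio is 1:1
    return moles_nacl * MOLAR_MASS_NACL

print(nacl_mass_from_na(115.0))  # 290.0 g, matching the worked answer
```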
This page provides a sociological definition of otherness and how it works in societies. I will also include examples and resources for people interested in learning more about otherness. I will add to this page over time. The idea of ‘otherness’ is central to sociological analyses of how majority and minority identities are constructed. This is because the representation of different groups within any given society is controlled by groups that have greater political power. In order to understand the notion of The Other, sociologists first seek to put a critical spotlight on the ways in which social identities are constructed. Identities are often thought as being natural or innate – something that we are born with – but sociologists highlight that this taken-for-granted view is not true. Rather than talking about the individual characteristics or personalities of different individuals, which is generally the focus for psychology, sociologists focus on social identities. Social identities reflect the way individuals and groups internalise established social categories within their societies, such as their cultural (or ethnic) identities, gender identities, class identities, and so on. These social categories shape our ideas about who we think we are, how we want to be seen by others, and the groups to which we belong. George Herbert Mead’s classic text, Mind Self and Society, established that social identities are created through our ongoing social interaction with other people and our subsequent self-reflection about who we think we are according to these social exchanges. Mead’s work shows that identities are produced through agreement, disagreement, and negotiation with other people. We adjust our behaviour and our self-image based upon our interactions and our self-reflection about these interactions (this is also known as the looking glass self). Ideas of similarity and difference are central to the way in which we achieve a sense of identity and social belonging. Identities have some element of exclusivity. Just as when we formally join a club or an organisation, social membership depends upon fulfilling a set of criteria. It just so happens that such criteria are socially-constructed (that is, created by societies and social groups). As such ‘we’ cannot belong to any group unless ‘they’ (other people) do not belong to ‘our’ group. Sociologists set out to study how societies manage collective ideas about who gets to belong to ‘our group’ and which types of people are seen as different – the outsiders of society. Zygmunt Bauman writes that the notion of otherness is central to the way in which societies establish identity categories. He argues that identities are set up as dichotomies: Woman is the other of man, animal is the other of human, stranger is the other of native, abnormality the other of norm, deviation the other of law-abiding, illness the other of health, insanity the other of reason, lay public the other of the expert, foreigner the other of state subject, enemy the other of friend (Bauman 1991: 8). The concept of The Other highlights how many societies create a sense of belonging, identity and social status by constructing social categories as binary opposites. This is clear in the social construction of gender in Western societies, or how socialisation shapes our ideas about what it means to be a “man” or a “woman.” There is an inherently unequal relationship between these two categories. 
Note that these two identities are set up as opposites, without acknowledging alternative gender expressions. In the early 1950s, Simone de Beauvoir argued that Otherness is a fundamental category of human thought. Thus it is that no group ever sets itself up as the One without at once setting up the Other over against itself. de Beauvoir argued that woman is set up as the Other of man. Masculinity is therefore socially constructed as the universal norm by which social ideas about humanity are defined, discussed and legislated against. Dichotomies of otherness are set up as being natural and so often times in everyday life they are taken for granted and presumed to be natural. But social identities are not natural – they represent an established social order – a hierarchy where certain groups are established as being superior to other groups. Individuals have the choice (or agency) to create their identities according to their own beliefs about the world. Yet the negotiation of identity equally depends upon the negotiation of power relationships. As Andrew Okolie puts it: Social identities are relational; groups typically define themselves in relation to others. This is because identity has little meaning without the “other”. So, by defining itself a group defines others. Identity is rarely claimed or assigned for its own sake. These definitions of self and others have purposes and consequences. They are tied to rewards and punishment, which may be material or symbolic. There is usually an expectation of gain or loss as a consequence of identity claims. This is why identities are contested. Power is implicated here, and because groups do not have equal powers to define both self and the other, the consequences reflect these power differentials. Often notions of superiority and inferiority are embedded in particular identities (2003: 2). Social institutions such as the law, the media, education, religion and so on hold the balance of power through their representation of what is accepted as “normal” and what is considered Other. British sociologist Stuart Hall argues that visual representations of otherness hold special cultural authority. In Western countries with a colonial history, like the UK, Australia and the USA, whether difference is portrayed positively or negatively is judged against the dominant group – namely White, middle-to-upper class, heterosexual Christians, with cis-men being the default to which Others are judged against. The notion of otherness is used by sociologists to highlight how social identities are contested. We also use this concept to break down the ideologies and resources that groups use to maintain their social identities. Sociologists are therefore interested in the ways in which notions of otherness are managed in society. For example, we study how some groups become stigmatised as outsiders, and how such ideas change over time. As Dutch-American sociologist Philomena Essed argues, the power of othering includes opting out of “seeing” or responding to racism. This article was first published on 14 October 2011 and it is a living document, meaning that I will add to it over time. Here are some of the texts that have influenced my understanding of otherness. Although the concept of “otherness” may not be specifically referenced in these studies, and some of these works cut across several fields of otherness, these authors make an important contribution to the sociology of minority groups. 
These texts speak to the historical, cultural and discursive processes through which The Other is constructed in Western contexts. The Other is set up against the hegemonic “universal human being” – that is, white, middle class, heterosexual, able-bodied cis-men. - Simone de Beauvoir, The Second Sex. (France) - Floya Anthias and Nira Yuval-Davis, Racialised Boundaries: Race, Nation, Gender, Colour and Class and the Anti-Racist Struggle. (UK) - bell hooks, Feminist Theory: From Margin to Centre. (USA) - bell hooks, Black Looks: Race and Representation. (USA) - Gill Bottomley, Marie De Lepervanche, Jeannie Martin (Eds), Intersexions: Gender/Class/Culture/Ethnicity. (Australia) Race and Culture - Ruth Frankenberg, White Women, Race Matters: The Social Construction of Whiteness. (USA) - Ghassan Hage, White Nation: Fantasies of White Supremacy in a Multicultural Society (Australia) - Stuart Hall, Representation: Cultural Representations and Signifying Practices (Culture, Media and Identities Series). (UK) - Paul Gilroy, The Black Atlantic: Modernity and Double-Consciousness. (UK) - Paul Gilroy, ‘There Ain’t no Black in the Union Jack’: The Cultural Politics of Race and Nation. (UK) - Peggy McIntosh, White Privilege: Unpacking the Invisible Knapsack. (USA) - Margaret Wetherell and Jonathan Potter, Mapping the Language of Racism: Discourse and the Legitimation of Exploitation (New Zealand) - Judith Butler, Gender Trouble: Feminism and the Subversion of Identity. (USA) - Michel Foucault, The History of Sexuality. (France) - Adrienne Rich, Compulsory Heterosexuality and Lesbian Experience. (USA) - Comprehensive List of LGBTQ+ Term Definitions. Sam Killermann for Everyday Feminism. - Edward W. Said. Orientalism. (USA, on Islam) - Gary Bouma. Gender and Religious Settlement: Families, Hijabs and Identity. (Australia, on Islam) - Gary Bouma. Australian Soul: Religion and Spirituality in the 21st Century. (Australia) - *My research focus is on Islam More texts to come… To cite this article: Zevallos, Z. (2011) ‘What is Otherness?,’ The Other Sociologist, 14 October. Online resource: https://othersociologist.com/otherness-resources/
How cells translate signals from surroundings into internal signals Every organism has one aim: to survive. Its body cells all work in concert to keep it alive. They do so through finely tuned means of communication. Together with cooperation partners from Berlin and Cambridge, scientists at the Luxembourg Centre for Systems Biomedicine (LCSB) of the University of Luxembourg have now successfully revealed for the first time the laws by which cells translate signals from their surroundings into internal signals. Like an isolated note in a symphony orchestra, an isolated signal in the cell is of subordinate importance. "What is important is the relative variation of intensity and frequency at which the signals are transmitted from the cell membrane into the cell," says Dr. Alexander Skupin, who led the studies at LCSB. The research group published their results now in the scientific journal Science Signaling. The instruments in an orchestra produce signals – musical notes – by causing the air to vibrate. Inside a cell, calcium ions carry signals. When a piece of information from the environment – say a biological messenger – meets the outer envelope of the cell, calcium ions are released inside the cell. There, they control various adaptation processes. "At first sight, there is no simple pattern to the ion impulses," Skupin explains; "yet they still culminate in a meaningful response inside the cell, like the activation of a specific gene, for instance." In order to determine the laws underlying this phenomenon, the researchers studied human kidney cells and rat liver cells using a combination of imaging technologies and mathematical methods. They discovered that the intensity and frequency of calcium impulses undergo extreme variation – both cell-internally and cell-to-cell. Accordingly, the information they convey cannot be interpreted by analyzing isolated signals alone. "It's like in an orchestra, where studying an isolated note on its own allows no inference of the melody," Skupin continues the musical analogy. "You have to hear how the frequency and volume of all instruments vary and produce the melody. Then you gain an impression of the musical piece." Now, for the first time, the researchers have managed to gain such an impression of the whole by listening in on the cells' communications. They discovered that the plethora of calcium impulses vary relatively to one another in a specific relationship: A stimulus from outside does not lead to an absolute increase in calcium impulses, but instead to a change in the frequency at which they occur – in the concert hall, the notes of the instruments rise and fall in symphony. "This pattern is the actual signal that leads to a response in the cells," Skupin says. "With our analyses, we have rendered it interpretable." "The results are of great importance for analyzing diseases," says Director of LCSB Prof. Dr. Rudi Balling. "We know that, in Parkinson's disease, the calcium balance in the nerve cells is disrupted, and suspect that errant communications between the cells could play a role in the onset of neurodegenerative diseases. With the discovery of the fundamental laws of these communications, as Alexander Skupin, his team and our cooperation partners have now achieved, we are set to take a major step forward in the analysis of Parkinson's disease."
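As a toy illustration of frequency encoding (not the authors' model or data), the Python sketch below generates spike trains at two assumed average rates and shows that the stimulus level is reflected in the spike frequency rather than in any single spike; the rates and the exponential inter-spike intervals are illustrative assumptions only.

```python
import random

def spike_times(rate_per_min: float, duration_min: float, seed: int = 0) -> list[float]:
    """Toy spike train: exponential inter-spike intervals at a given mean rate (illustrative only)."""
    random.seed(seed)
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate_per_min)  # draw the next inter-spike interval
        if t > duration_min:
            return times
        times.append(t)

# A single spike looks the same under both conditions; only the frequency over time
# distinguishes a weak stimulus from a strong one.
weak = spike_times(rate_per_min=1.0, duration_min=60)
strong = spike_times(rate_per_min=3.0, duration_min=60)
print(len(weak) / 60, len(strong) / 60)  # estimated spike frequencies (per minute)
```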
Scarcity of water is a challenge worldwide because of growing population and industrialization. Billions of people have insufficient access to safe drinking water. Groundwater levels are falling, and all types of water bodies, such as rivers, lakes and oceans, are getting polluted. Many issues resulting in water scarcity could be avoided with better water management. A better option is to reuse and recycle wastewater for secondary purposes like toilet flushing, gardening, lawns and irrigation. Wastewater has high Biochemical Oxygen Demand (BOD) and Chemical Oxygen Demand (COD), contains Total Suspended Solids (TSS), Nitrogen (N) and Phosphorus (P), and is alkaline in nature. Conventional wastewater treatment goes through primary, secondary and tertiary treatment, which is expensive to build, operate and maintain. Wastewater should be treated and reused in a way that is economical, natural and does not affect the environment. The best option is to provide onsite wastewater treatment by using the geology of wetlands for clean and hygienic villages. Wetlands are parts of the earth's surface between terrestrial and aquatic systems. Wetlands are generally shallow in depth and include water, soil and vegetation. There are two types of wetlands: natural and constructed. The selection of the location of natural wetlands depends on various geological properties. In a natural wetland, control of the process is difficult, but in a constructed wetland we can control the treatment process. A constructed wetland is an artificial wastewater treatment system consisting of shallow ponds (<1 meter depth). Water hyacinth (Eichhornia crassipes) is available locally. It is a large, bulbous floating plant with an extensive root system, a perennial aquatic plant with rounded, upright, shiny green leaves and spikes of lavender flowers. It is good at removing nutrients from wastewater through harvesting, prevents the growth of algae and maintains the pH value. The root zones of the plants develop into a diverse ecology which includes bacteria, fungi, predators and filter feeders, creating aerobic conditions. Constructed wetlands provide habitat for wildlife and help to improve aesthetic value.
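For a rough sense of how such a system might be sized, the constructed-wetland literature commonly uses a first-order areal ("k-C*") model relating inlet and outlet BOD to wetland area. The Python sketch below applies that model; the rate constant, background concentration and the example flow are illustrative assumptions for demonstration, not results from this study.

```python
import math

def wetland_area_m2(q_m3_per_day: float, c_in: float, c_out: float,
                    k_m_per_yr: float = 30.0, c_star: float = 5.0) -> float:
    """
    Area required by the first-order areal (k-C*) model:
        C_out = C* + (C_in - C*) * exp(-k * A / Q)
    Concentrations in mg/L, flow in m3/day. The defaults for k and C* are
    illustrative textbook-scale values, not site measurements.
    """
    q_m3_per_yr = q_m3_per_day * 365.0
    return (q_m3_per_yr / k_m_per_yr) * math.log((c_in - c_star) / (c_out - c_star))

# Example (assumed figures): 50 m3/day of domestic wastewater,
# BOD reduced from 200 mg/L to a 30 mg/L target.
print(round(wetland_area_m2(50.0, 200.0, 30.0)), "m2")  # roughly 1,250 m2
```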
When referring to the gemstone, it can colloquially be called a garnet, but it is never called a pyrope by itself. It is usually called a pyrope garnet. The word pyrope comes from the combination of two ancient Greek words, pur and ōps, meaning “fire” and “eye” respectively. As for the word garnet, it comes from the Latin word granatus, which means “seed-like,” as it is usually found as a small rounded crystal stuck in the rock.

Ant Hill Garnet Highlights
- It can be found on Navajo Nation land, and is not mined commercially.
- Navajo women pick the gemstones from the gravel carried by ants in the process of building their nests.
- This phenomenon of ants carrying the gemstones to the surface is believed to be peculiar to the southwestern harvester ant due to the high density of gemstone deposits in the American southwest region.
- One legend surrounding the gemstone details that Native Americans used the ant hill garnet for bullets, although this has never been confirmed.
- Ant hill garnet belongs to the pyrope group of garnets, which have aluminum as their second element.
- Ant hill garnet is best suited for protecting and linking the Base (1st) and Crown (7th) Chakras.

The other group includes uvarovite, grossular, and andradite, which have calcium as their second element. Members of each group are often found blending together within their prescribed group, and it is very rare to find a mixture between these two groups. The gemstones are used by geologists to determine the temperature and pressure at which the garnet-bearing rock was formed.

Ant Hill Garnet Throughout History

The Native Americans

Prior to the arrival of the Spanish, the main mode of transportation for Native Americans was by foot. They paid close attention to the ground of their country. They collected these gemstones, though it is unclear at what point they began to do so. By the mid-1500s, the gemstones were already in circulation among the Pueblos of New Mexico. The Spanish reported finding turquoise, peridot, emerald, and garnet. Roughly three centuries later, the gemstones were discovered by the Americans, after the Mexican-American War. Fort Defiance would be established in 1851 in Navajo country, and the soldiers stationed there reported the abundance of the gemstones found in the area. Today, many of these mineral deposits are located on Navajo Nation land in Arizona. It is for this reason that they are not commercially mined, but are rather collected exclusively by the Native American residents. Navajo women collect these gemstones from the ant hills, and then they are sold. Sid Tucker LLC is the main dealer of ant hill garnets.

Uses and Benefits of Ant Hill Garnet

Improve Blood Circulation

Ant hill garnet has been used to improve blood circulation and to reduce the negative effects of blood disorders. It is said to help bolster one’s immune system, defending an individual against influenza. Moreover, it is also used in elixirs to help promote skin defense and relieve irritation.

Help Overcome Anxiety

The pyrope garnet can also be an effective ally in helping one overcome anxiety. Sometimes anxiety arises due to certain social situations, and as such, can make someone feel awkward. The stone can help one obtain a certain level of composure to overcome difficult situations.

Protect and Align the 1st and the 7th Chakras

Ant hill garnet can be used to help link the Base (1st) and Crown (7th) Chakras. It allows one to stay grounded, simultaneously maintaining the spiritual wisdom associated with the Crown Chakra.
When used in meditation, it facilitates inspiration and helps one trust their own intuition fully in acquiring knowledge.

Disclaimer: The information provided here is for entertainment and reference purposes only. It is based on centuries of folklore, most of which came about before the age of modern medicine. It is not meant as actual medical information. For advice about any of the illnesses listed, please visit a qualified physician.
Toronto public schools have major and rising student achievement gaps based on race and income, according to a recent landmark report. One of the biggest blocks to closing these gaps is educators’ understanding of why these gaps exist and the methods used to try and close them. Last summer, education researchers, community partners and teachers gathered to address such reports of inequality. One of the main issues discussed was how identity-based data helps to locate and remove systemic barriers. The action plan for Ontario, which aims to make sure every student has the opportunity to succeed, “regardless of background, identity or personal circumstances,” includes an analysis of identity-based data. Researchers have demonstrated that in Toronto public schools, Black, racialized and lower-income students face significant gaps in student outcomes. Other reports show gaps as high as 30 per cent on standardized test scores. Lower socioeconomic groupings of Black, Middle Eastern, Indigenous and Latino boys were among those most impacted by the achievement gap. On top of this, racialized students feel less comfortable at school. Black, Latino and (racially) mixed students from lower socioeconomic groups reported lower levels of school satisfaction than all other racial groups. These students felt less comfortable participating in class than students in higher socioeconomic groups. This data could help Ontario school boards not only identify issues, but also change the systems and structures that cause achievement and opportunity gaps for underserved groups of students. Factor in historical injustices For decades, researchers in the United States have used identity-based data to identify achievement gaps between groups of students based on race, gender, language, ability, sexuality and other social identities. Research attention then turned to opportunity gaps. This framing considers historical structural barriers in schools that produce educational inequities. So instead of focusing on deficits in students, the research focuses on systemic issues such as economic resources, racism and embedded practices in policies. This research shift was promising, but most discussions of opportunity gaps still fell short. They generally consider only the distribution and access to material goods within different schools, and fail to account for other opportunity gaps denied to students both inside and outside of school, including present-day and historical inequities. Challenge traditional ways of thinking As a former TDSB lead teacher in the Model Schools for Inner Cities (MSIC) Program designed to close gaps, and later, as a researcher who studied the MSIC program, I have some insight into how we might begin to tackle these issues in Ontario. The MSIC program was launched in 2004 to support schools whose students faced the greatest barriers to success. My research analyzes how stakeholder groups like MSIC staff, community partners, district-level staff, school trustees and school principals in the MSIC program made sense of opportunity gaps. I interviewed people from the stakeholder groups and analyzed program documents to gauge their understanding of the program and how their analysis shifted over a decade. Participants mostly agreed on the purpose of the program (to close opportunity gaps), but they had dramatically different ways of thinking about those gaps. The two different approaches that emerged are affirmative versus transformative. 
These are categories defined in the context of international development by political theorist Nancy Fraser. The affirmative approach emphasizes fixing or saving students. This method tends to use language like “empower.” The transformative approach focuses on addressing inequitable systemic barriers as well as challenging ways of thinking that maintain opportunity gaps. This method tends to use language like “support” and “affirm.” These two different approaches to opportunity gaps lead to very different practices, policies and initiatives. Affirmative approaches saw students and families in the MSIC program as “in need,” while positioning the program as the “saviour.” Transformative approaches positioned the program as temporary support that aimed to work itself out of existence. Underserved communities were understood to have abundant social, political and cultural resources and agency to ensure their children’s success. Affirmative approaches work to ensure all students have access to the same experiences and material goods. Equal access to nutrition, technology and health services is also essential in transformative approaches. However, a transformative approach believes opportunity gaps are not fixed by just providing equal resources. Programs should also work to affirm students’ identities. In other words, schools should develop curriculum, field trips and extracurricular activities based on the students’ lived experiences, interests and aspirations. Injustices can be addressed by the redistribution of goods, but recognition and representation matter as well. Affirmative approaches provide parents with opportunities to network, learn about parenting and build workforce skills within the confines of board structures. Transformative approaches work with parents and caregivers to advocate for their rights and navigate the educational system to support their children. Teach students to engage critically Affirmative approaches are related to the purpose of achieving excellence, in teaching and learning, generally in the form of standardized test scores. Transformative approaches view equity as a prerequisite for excellence, but excellence is not the main point of education. The main point is to support students in engaging critically in a democratic society. As Ontario school boards begin their project of collecting identity-based data, and as the boards work towards closing the achievement and opportunity gaps, policy-makers and school leaders will need to focus on transformative approaches. Their work needs to understand the relationships between historical injustices and student achievement, engagement and well-being today.
The flu is caused by influenza viruses, which affect the respiratory system, including the nose, throat and lungs. It is a contagious illness that may cause mild to severe illness and, in more extreme cases, can even cause complications that lead to death. Seniors, young children and people with certain health conditions are at higher risk of developing complications from the flu. In the U.S., an estimated 200,000 people are hospitalized and 36,000 die from the disease each year.
The Difference between a Cold & the Flu
The common cold tends to slowly worsen over two to three days, while the flu comes on suddenly and rapidly worsens over the first one to two days. Sore throat, nasal symptoms, cough and congestion are classic symptoms of a cold. Fever is more common with the flu, especially if it is high. Body aches and fatigue also tend to be more common with the flu. Some children may experience vomiting and diarrhea. Initially, it can be very difficult to tell the difference between a cold and the flu. RediClinic offers an instant flu test to help differentiate and treat your illness appropriately.
How Does the Flu Spread?
The flu virus spreads primarily when infected people cough, sneeze or talk. As with colds, the flu can be contracted by touching an object that has the flu virus on it and then touching the mouth, eyes or nose. Influenza can be passed from one person to another even before symptoms appear. It is hard to tell the difference between a viral and a bacterial cause of respiratory illness on the basis of symptoms alone. The Rapid Influenza Diagnostic Test can quickly determine whether you have a cold or the flu, so your RediClinic clinician can provide the appropriate treatment. Let us help you get healthy!
Influenza antiviral drugs may be used to lessen the severity of flu illness, but they should be considered a second line of defense against the flu. The best way to help prevent the flu is by getting a flu vaccine each year. RediClinic offers flu vaccines to help protect you and your family. It's important to understand that you cannot get the flu from a flu shot. Learn more about flu vaccination here.
At RediClinic, we offer Get Healthy services to diagnose, treat, and prescribe medications (when appropriate) for common illnesses and injuries in adults and children over the age of 18 months. At RediClinic, you will always receive the best care from our qualified clinicians. Some of the Live Healthy services we provide are Physicals, Vaccinations, Health Screenings, Diabetes Testing, and more.
When Wilbur and Orville Wright’s famous airplane, the Wright Flyer, first flew in 1903 it must have made quite a racket, with its crude gasoline engine spinning twin propellers via drive chains. Nearly 115 years later, another type of plane has taken flight as quiet as a ghost, without a single moving part. The new type of aircraft could usher in silent drones and perhaps far simpler planes—if researchers can overcome the daunting task of scaling up the technology. Instead of relying on a propeller or a jet engine, the plane, about the size of a single-person kayak, pushes itself through the air using electroaerodynamics (EAD). This form of propulsion uses electric effects to send air backward, giving the plane an equal push forward. Aeronautical engineers have long theorized that planes could be powered by EAD, says Steven Barrett, an aeronautical engineer at the Massachusetts Institute of Technology (MIT) in Cambridge. But no one had ever constructed an EAD plane capable of lifting its own weight. When Barrett and colleagues finally succeeded, they stood in awed silence, he says. "It had taken about 7 years of work just to get off the ground.” In an EAD propulsion system, a strong electric field generates a wind of fast-moving charged particles called ions, which smack into neutral air molecules and push them behind the plane, giving the aircraft a push forward. The technology—also called ion drive, ion wind, or ion propulsion—has already been developed for use in outer space by NASA, and is now deployed on some satellites and spacecraft. Because space is a vacuum, these systems bring along a fluid, like xenon, to ionize, whereas Barrett’s aircraft is designed to ionize nitrogen molecules in the ambient air. It’s far easier to deploy ion drive in space than in the atmosphere, however. Gravity guides a satellite around the planet, with ion drive applying small course corrections. In contrast, a plane must produce enough thrust to keep itself aloft and to overcome the constant drag of air resistance. After running multiple computer simulations, Barrett’s team settled on a design for a plane with a 5-meter wingspan and a mass of 2.45 kilograms, about the weight of a chicken. To generate the needed electric field, sets of electrodes resembling Venetian blinds run under the plane’s wings, each consisting of a positively charge stainless steel wire a few centimeters in front of a highly negatively charged slice of foam covered in aluminum. The plane also carries a custom battery stack and a converter to ramp the voltage from the batteries from about 200 volts to 40 kilovolts. Although the highly charged electrodes were exposed on the plane’s frames, they could be turned on and off by remote control to avoid safety risks. The team tested the airplane inside a gymnasium at MIT, working at odd hours to avoid running into sports teams. “There were some pretty epic crashes,” Barrett says. Eventually, the team devised a slingshotlike apparatus to help launch the aircraft. After hundreds of failed attempts, the aircraft was finally able to propel itself enough to remain airborne. Over 10 test flights, the plane flew up to 60 meters, a little farther than the Wright brothers’ first flight, in about 10 seconds, with an average altitude of half a meter, the researchers report this week in Nature. “This is a great first step,” says Daniel Drew, an electrical engineer at the University of California, Berkeley, who is working on EAD microrobots and was not involved with the study. 
However, he cautions “if they try to go much bigger with the plane size, they’re going to run into a lot of issues.” The basic problem comes down to scaling, Drew says. As the size of the plane increases, its weight will grow faster than the area of its wings. So to stay aloft, a bigger plane must produce much more thrust per unit of wing area, he explains, something that “would be extremely difficult to achieve from a physics standpoint.” Barrett isn’t ready to rule out the possibility of one day transporting humans. “We’re still a long way off obviously, and there’s a lot of things we need to improve to get there,” he says, “but I don’t think there’s anything that makes it fundamentally impossible.” Thrust could be improved by making the power converter system and the batteries more efficient, testing different strategies for creating ions, or integrating the thrusters into the plane’s frame to reduce drag, he says. Franck Plouraboué, a fluid mechanics researcher at France’s national research agency CNRS and the University of Toulouse, says one way to power EAD aircraft could be through ultralight solar panels attached to the top of the plane. Drew thinks we’re more likely to one day see a swarm of smaller EAD aircraft. In that context, Barrett thinks the biggest advantage of EAD aircraft will be the lack of noise. “If we want to use drones all around our cities for delivering things and monitoring air quality, all that buzzing and noise pollution would get quite annoying.”
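The scaling concern Drew raises follows from simple square-cube reasoning, and the short sketch below illustrates it. It is not from the study: the 2.45-kilogram base mass is the reported figure, while the 1 m² base wing area and the assumption of purely geometric (isometric) scaling are placeholders chosen only for illustration.

# A minimal sketch of the square-cube argument (not from the study): if the
# whole airframe is scaled up isometrically, mass grows roughly with the cube
# of the scale factor while wing area grows only with the square, so the
# thrust needed per square metre of wing rises in direct proportion to size.

BASE_MASS_KG = 2.45   # reported mass of the MIT aircraft
BASE_AREA_M2 = 1.0    # assumed wing area, for illustration only

def scaled_wing_loading(scale: float) -> float:
    """Weight per unit wing area for a geometrically scaled-up version."""
    mass = BASE_MASS_KG * scale ** 3   # mass ~ volume ~ scale^3
    area = BASE_AREA_M2 * scale ** 2   # wing area ~ scale^2
    return mass / area

for s in (1, 2, 4, 8):
    print(f"scale x{s}: wing loading {scaled_wing_loading(s):6.1f} kg/m^2")

Under these assumptions, doubling the linear scale doubles the load that each square metre of wing must carry, which is why thrust per unit wing area quickly becomes the limiting factor for larger EAD aircraft.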
Rules for writing informal letters: Write your full name and address even if it is an informal letter. Divide your letter into small paragraphs. Keep your writing simple. Make a good choice of words, especially if you are writing an apology letter or a letter to express your condolences in case of a death.
This table gives the Greek letters, their names, equivalent English letters, and tips for pronouncing those letters which are pronounced differently from the equivalent English letters. (There are actually several acceptable ways to pronounce New Testament Greek.) Sigma (σ, ς): There are two forms for the letter Sigma. When written at the end of a word, it takes the final form ς; elsewhere it is written σ. Even many English words have Greek root words. Writing and translating Greek is challenging since Roman block letters, common in English, are not used. The pronunciation of even familiar-looking letters differs from what is encountered in English. Greek is a beautiful language, however, and is worth learning, even if it is just a few phrases. The Greek alphabet has been used to write the Greek language since the 9th century BC. It has 24 letters, many of which English speakers can recognize. The letters that English does not possess are Phi, Chi, Psi, and Theta. All the other letters neatly correspond to a single English letter, although many of the Greek letters are not written the same way as their English counterparts. Greek letters were also used for writing Greek numerals. The first nine letters (from alpha to theta) were used for the numbers 1 to 9. The next nine letters (from iota to koppa) were used for multiples of 10 from 10 to 90. Finally, the next nine letters (from rho to sampi) were used for 100 to 900. For example, the numbers 1, 2, and 3 are alpha, beta, and gamma.
For more casual and informal letters like thank-you notes, it's enough to include the date and your name, or often just the date. Writing Letters in English: 5 Essential Letters You Need to Know. Your envelopes are addressed and ready. You've filled out an appropriate heading. Now it's time to write the actual letters. Ready, set, write.
Alpha and beta, the words for the first two letters of the Greek alphabet, were combined (in Greek, Latin, Middle English, and Modern English consecutively) to denote a set of letters, constituting a language's written system, arranged in a traditional order. The first and last letters, alpha and omega, also have a resonance in Christianity, as the Bible has God referring to himself as "the Alpha and the Omega."
Greek Handwriting: Handwritten letters in Greek. This page is part of the author's set of pages on the Greek language, and gives instructions for hand-writing the letters of the Greek alphabet, with each letter shown in its capital and lowercase forms.
Greek numerals, also known as Ionic, Ionian, Milesian, or Alexandrian numerals, are a system of writing numbers using the letters of the Greek alphabet. In modern Greece, they are still used for ordinal numbers and in contexts similar to those in which Roman numerals are still used elsewhere in the West. For ordinary cardinal numbers, however, Greece uses Arabic numerals.
How Greek numbers worked: The symbols that the Greeks used were their letters. Unfortunately, this method of counting needs 27 letters, and there were only 24 in the Classical Greek alphabet.
This meant that the Greeks had to find 3 extra symbols for the missing numbers of 6, 90 and 900. They used 3 archaic letters, which used to be in the alphabet but had since dropped out of everyday use.
Greek letters are widely used in mathematics and other fields of science. There are a couple of differences in pronunciation of the names of the letters between English and most other European languages, which is a common source of mistakes. That's why, in the following, a notation for pronunciation is used that should be easy to understand for non-native speakers.
Transliteration of English Text into the Greek Alphabet: An apparently overlooked aid for students learning a new language that uses a different alphabet, such as Russian or Greek, is to provide them with text in their native language but written in the new alphabet. This would give them practice pronouncing words in the new alphabet and allow them to directly convert the symbols in their minds.
Greek Script Writing: This page allows you to write your name or a text in English and have it transliterated into Greek. Simply write in English and you will see the phonetics of what you wrote in Greek.
Learning to write the Greek letters and how to pronounce them is introduced in this lesson. Mastering the sight and sounds of the alphabet lays the cornerstone for learning the sight and sounds of Greek words in all subsequent lessons. Your first step toward learning NTGreek is to memorize the Greek alphabetical characters and the order in which they occur in the alphabet.
How to Write the Letters of the Greek Alphabet: Learn how to write the lower-case letters of the Koine Greek alphabet. Watch how they're formed and then download the Greek handwriting worksheets to practise, clicking on each Greek letter to see how it is written.
English Alphabet: An alphabet is a set of letters or symbols that we use to represent the basic speech sounds of a language in writing. This page looks at writing the English alphabet. You can read about pronouncing the English alphabet here.
Greek alphabet: a writing system that was developed in Greece about 1000 BCE. It is the direct or indirect ancestor of all modern European alphabets. Derived from the North Semitic alphabet via that of the Phoenicians, the Greek alphabet was modified to make it more efficient and accurate for writing a non-Semitic language by the addition of several new letters and the modification or dropping of several others. Watch this video, Biblical Greek Alphabet Pronunciation, to learn how the letters should be pronounced. Upper case appears at the beginning of a paragraph, in direct speech, and in proper names, geographical locations and names of nations.
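The unit, tens and hundreds scheme described above maps naturally onto a small lookup routine. The sketch below is illustrative only: it uses the standard letter values, with the archaic stigma, koppa and sampi filling the 6, 90 and 900 slots mentioned in the text, and it omits the keraia numeral mark that conventionally follows a Greek numeral.

# Small sketch of the Greek (Ionic) numeral scheme described above.
UNITS    = ["α", "β", "γ", "δ", "ε", "ϛ", "ζ", "η", "θ"]   # 1-9
TENS     = ["ι", "κ", "λ", "μ", "ν", "ξ", "ο", "π", "ϟ"]   # 10-90
HUNDREDS = ["ρ", "σ", "τ", "υ", "φ", "χ", "ψ", "ω", "ϡ"]   # 100-900

def to_greek_numeral(n: int) -> str:
    """Convert an integer from 1 to 999 into its Greek numeral string."""
    if not 1 <= n <= 999:
        raise ValueError("this sketch only handles 1-999")
    out = ""
    if n >= 100:
        out += HUNDREDS[n // 100 - 1]
    if (n // 10) % 10:
        out += TENS[(n // 10) % 10 - 1]
    if n % 10:
        out += UNITS[n % 10 - 1]
    return out

print(to_greek_numeral(3))    # γ   (gamma = 3, as in the example above)
print(to_greek_numeral(96))   # ϟϛ  (koppa = 90, stigma = 6)
print(to_greek_numeral(753))  # ψνγ (700 + 50 + 3)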
An important part of the CMAA is supporting the County Alliances with health programs that benefit their local communities. In addition, the CMAA develops and promotes health projects on a statewide level. Keep Your Cool in Hot Weather Learn about heat-related illness and how to stay cool and safe in hot weather. Now is the time to prepare for the high temperatures that kill hundreds of people every year. Extreme heat caused 7,415 heat-related deaths in the United States from 1999 through 2010. Heat-related deaths and illness are preventable, yet many people die from extreme heat each year. Take measures to stay cool, remain hydrated, and keep informed. Getting too hot can make you sick. You can become ill from the heat if your body can’t compensate for it and properly cool you off. The main things affecting your body’s ability to cool itself during extremely hot weather are: - High humidity. When the humidity is high, sweat won’t evaporate as quickly, which keeps your body from releasing heat as fast as it may need to. - Personal factors. Age, obesity, fever, dehydration, heart disease, mental illness, poor circulation, sunburn, and prescription drug and alcohol use can play a role in whether a person can cool off enough in very hot weather. People age 65 and older are at high risk for heat-related illnesses. Those who are at highest risk include people 65 and older, children younger than two, and people with chronic diseases or mental illness. Closely monitor people who depend on you for their care and ask these questions: - Are they drinking enough water? - Do they have access to air conditioning? - Do they need help keeping cool? People at greatest risk for heat-related illness can take the following protective actions to prevent illness or death: - Stay in air-conditioned buildings as much as possible. Contact your local health department or locate an air-conditioned shelter in your area. Air-conditioning is the number one protective factor against heat-related illness and death. If a home is not air-conditioned, people can reduce their risk for heat-related illness by spending time in public facilities that are air-conditioned, and using air conditioning in vehicles. - Do not rely on a fan as your primary cooling device during an extreme heat event. - Drink more water than usual and don’t wait until you’re thirsty to drink. - Check on a friend or neighbor and have someone do the same for you. - Don’t use the stove or oven to cook—it will make you and your house hotter. Even young and healthy people can get sick from the heat if they participate in strenuous physical activities during hot weather: - Limit outdoor activity, especially midday when the sun is hottest. - Wear and reapply sunscreen as indicated on the package. - Pace activity. Start activities slow and pick up the pace gradually. - Drink more water than usual and don’t wait until you’re thirsty to drink more. Muscle cramping may be an early sign of heat-related illness. - Wear loose, lightweight, light-colored clothing. If you play a sport that practices during hot weather protect yourself and look out for your teammates: - Schedule workouts and practices earlier or later in the day when the temperature is cooler. - Monitor a teammate’s condition, and have someone do the same for you. - Seek medical care immediately if you or a teammate has symptoms of heat-related illness. - Learn more about how to protect young athletes from heat-related illness by taking this CDC course. 
Drink plenty of fluids to prevent heat-related illnesses. Everyone should take these steps to prevent heat-related illnesses, injuries, and deaths during hot weather: - Stay in an air-conditioned indoor location as much as possible. - Drink plenty of fluids even if you don't feel thirsty. - Schedule outdoor activities carefully. - Wear loose, lightweight, light-colored clothing and sunscreen. - Pace yourself. - Take cool showers or baths to cool down. - Check on a friend or neighbor and have someone do the same for you. - Never leave children or pets in cars. - Check the local news for health and safety updates.
Never Leave Children Unattended in a Car! Los Angeles, Santa Clara, and Sonoma Counties have awareness campaigns about the dangers of child neglect and the consequences of leaving children unattended in vehicles during hot summer months. One of the projects is the "Not Even for A Minute" Campaign, which highlights health risks and criminal consequences.
TEEN DRIVERS ARE THE #1 KILLER OF TEENS! JourneySafe is an outreach program established by Dr. David & Donna Sabet, parents of Jill Sabet. Jill and her boyfriend, Jonathan Schulte, were two remarkable teens who lost their lives May 26, 2005 in a senseless single-vehicle automobile crash.
One of Fresno Madera Medical Society Alliance's health projects is "Stroke Happens." "Stroke kills twice as many women as breast cancer" was the shocking statistic Alliance members heard in September 2008 when a local neurologist spoke to them about the warning signs of a stroke and the need to call 911 immediately.
"ICE" YOUR PHONE There are over 215 million cell phone users in the United States today. Industry experts expect over 300 million users by 2010. The U.S. Centers for Disease Control and Prevention reported in 2006 that 1,600,000 emergency room patients could not provide contact information because they were incapacitated. So many individuals, including teenagers, leave the home each day without any identification or emergency contact information, yet carry a cell phone. A global campaign, started in the UK in 2005, has spread to the United States calling for individuals to program an In Case of Emergency contact (or ICE for short) into their mobile phones. ICESticker.com is a national coalition member of Ready.gov, a Homeland Security program aimed at encouraging Americans to take responsibility for preparing themselves for an emergency or major disaster. ICESticker.com has developed an iconic self-adhesive visual alert to be applied to the back of the phone, serving both as an alert and as an invitation to paramedics and emergency personnel that the individual has established an emergency communication protocol. Since launching in the summer of 2005, ICESticker.com has distributed hundreds of thousands of the original ICE Sticker™ visual alerts to a world-wide base of emergency responders, community organizers, government entities, private companies, and individuals just like you. Get involved and become part of the ICE Your Phone™ campaign today. Click the links below for more information: ACEP (Emergency Physicians say ICE can help save your life); Fresno, CA Fire Department Press Release (January 2010). Click here to order ICE stickers.
HOW TO "ICE" YOUR PHONE Type in "ICE", then the contact name (for example, ICE Mom). If possible, list more than one ICE contact in case the first cannot be reached. Make sure your ICE contact is familiar with your medical history.
DO NOT password-protect your contact list.
- ICE is not a substitute for keeping written emergency information in a wallet or purse. Emergency response teams first look to identify you before trying to contact next of kin.
- Cell phones are personal items that must remain with the victim; written information can be photocopied. Keep ICE information limited, as it is accessible to anyone who finds your cell phone.
- Make sure the person whose name and number you are giving has agreed to be your ICE contact.
- Your ICE contact(s) should have a list of people they should contact on your behalf, including your place of work.
- Your ICE contact should know about any medical conditions that could affect your emergency treatment, for example allergies or current medications.
- If you are under 18, your ICE contact should be your mother, father or another immediate family member authorized to make decisions on your behalf.
Through 15 minutes of irony and humour – guide your students to learn the importance of rounding off decimals. You will adopt the opposite, belligerent insistence on ABSOLUTE precision. The ridiculousness of the request soon becomes apparent. A downloadable pdf for classroom projection is here. PS. Students will often ask a clarifying question “Is the height the distance from head to tail?” to which you respond – “no – it is the height without the dragon standing on tippy-toes – or standing on its tail ;-)” Games give you a chance to excel, and if you’re playing in good company you don’t even mind if you lose because you had the enjoyment of the company during the course of the game.Gary Gygax Standards for Mathematical Practice MathPickle puzzle and game designs engage a wide spectrum of student abilities while targeting the following Standards for Mathematical Practice: MP1 Toughen up! This is problem solving where our students develop grit and resiliency in the face of nasty, thorny problems. It is the most sought after skill for our students. MP3 Work together! This is collaborative problem solving in which students discuss their strategies to solve a problem and identify missteps in a failed solution. MathPickle recommends pairing up students for all its puzzles. MP6 Be precise! This is where our students learn to communicate using precise terminology. MathPickle encourages students not only to use the precise terms of others, but to invent and rigorously define their own terms. MP7 Be observant! One of the things that the human brain does very well is identify pattern. We sometimes do this too well and identify patterns that don't really exist.
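If students want to see the contrast between absurd precision and sensible rounding on a computer, a few lines of Python make the point. The measurement below is invented purely for illustration and is not part of the MathPickle activity.

# Illustration only: one made-up "dragon height" measurement shown at several
# precisions, echoing the activity's point that insisting on absolute
# precision is ridiculous while sensible rounding keeps the number usable.

height_m = 7.4285714  # hypothetical measurement, not from the activity

for digits in (6, 3, 1, 0):
    print(f"rounded to {digits} decimal place(s): {round(height_m, digits)}")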
The Prenatal Profile is a Maternal Blood Screen that typically includes: Blood type, Rh factor, and antibody screening It is important to know your blood type in pregnancy. Blood type is based on particular molecules that sit on the surface of red blood cells. People either have A antigens (type A blood), B antigens (type B), both (type AB) or neither (type O) on their red blood cells. When it comes to Rh factor, some people have the antigen (Rh-positive) and some people don't (Rh-negative.) In other words, your blood type identifies which antigens you have from each group. If you are Rh negative, we will also order an antibody screen and discuss options for prevention of Rh sensitization. [More information on Rh factor and issues for Rh negative mothers.] Complete Blood Count A complete blood count (CBC) gives important information about the kinds and numbers of cells in the blood, especially red blood cells, white blood cells, and platelets. A CBC test usually includes: - White blood cell (WBC, leukocyte) count. White blood cells protect the body against infection. If an infection develops, white blood cells attack and destroy the bacteria, virus, or other organism causing it. White blood cells are bigger than red blood cells but fewer in number. When a person has a bacterial infection, the number of white cells rises very quickly. - Red blood cell (RBC) count. Red blood cells carry oxygen from the lungs to the rest of the body. They also carry carbon dioxide back to the lungs so it can be exhaled. If the RBC count is low (anemia), the body may not be getting the oxygen it needs. If the count is too high (a condition called polycythemia), there is a chance that the red blood cells will clump together and block tiny blood vessels (capillaries). This also makes it hard for your red blood cells to carry oxygen. - Hematocrit (HCT, packed cell volume, PCV). This test measures the amount of space (volume) red blood cells take up in the blood. The value is given as a percentage of red blood cells in a volume of blood. For example, a hematocrit of 38 means that 38% of the blood's volume is made of red blood cells. Hematocrit and hemoglobin values are the two major tests that show if anemia or polycythemia is present. - Hemoglobin (Hgb). The hemoglobin molecule fills up the red blood cells. It carries oxygen and gives the blood cell its red color. The hemoglobin test measures the amount of hemoglobin in blood and is a good measure of the blood's ability to carry oxygen throughout the body. - Platelet (thrombocyte) count. Platelets (thrombocytes) are the smallest type of blood cell. They are important in blood clotting Rubella (German measles) immunity This test, called a rubella titer, checks the level of antibodies to the rubella virus in your blood to see whether you're immune. Most women are immune to rubella, either because they've been vaccinated or had the disease as a child. Hepatitis B testing Some women with this liver disease have no symptoms and can unknowingly pass it to their baby during labor or after birth. This test will reveal whether you're a hepatitis B carrier. This sexually transmitted infection (STI) is relatively rare today, but all women should be tested because if you have syphilis and don't treat it, both you and your baby could develop serious problems. In the unlikely event that you test positive, you'll be given antibiotics to treat the infection. 
The Centers for Disease Control and Prevention and the Pennsylvania and New Jersey Departments of Health recommend that all pregnant women be tested for the human immunodeficiency virus (HIV), the virus that causes AIDS.
OTHER BLOOD TESTS There are other blood tests that are offered in addition to the prenatal profile. Some relate to genetic screening and others might be specific to certain conditions or situations.
Toxoplasmosis: Toxoplasmosis is an infection that has few symptoms for an adult, but can cause serious illness for a fetus. Many adults have been exposed to the parasite that causes the disease, and have developed immunity to it. But if you are not immune, and get your first bout of toxoplasmosis while pregnant, your child could be affected. "Toxo" can be contracted from raw meat and from cat and kitten feces. So, if you have cats and handle their litter box, you might want to consider this blood test to make sure you are immune to toxo.
Urinalysis and Urine Culture A urine screen is used to assess bladder or kidney infections, diabetes, dehydration and preeclampsia by screening for high levels of sugars, proteins, ketones and bacteria. Repeated findings of sugar in the urine may necessitate dietary changes to help maintain normal blood sugar levels throughout the day. Higher levels of protein may suggest a possible urinary tract infection or kidney disease. Preeclampsia may be a concern if higher levels of protein are found later in pregnancy, combined with high blood pressure. This screen is normally performed in our office at each prenatal visit. We will do a Urine Culture with your initial bloodwork to make sure you do not have an asymptomatic urinary tract infection (more common in pregnancy) or Group B Strep bacteria in your urine.
STI cultures The Pennsylvania and New Jersey Departments of Health also recommend screening for gonorrhea and chlamydia, sexually transmitted bacterial infections (STIs). Screening requires a speculum exam in order to swab the cervix.
Inside the facial bones of the skull are small, hollow chambers called sinuses. Our sinuses consist of four groups. The maxillary sinuses are located in the cheekbones, along the sides of the nose are the ethmoid sinuses, behind the ethmoid sinuses are the sphenoid sinuses and above the eyes are the frontal sinuses. Healthy sinuses are filled with air and serve to reduce the weight of the head, provide structure, assist in the voice resonation and drain mucus from the nose. However, when the sinuses become blocked, they can cause significant pain, swelling and lead to recurrent or chronic infections. Sinuses can become blocked when infections, debris, or allergies cause the mucous lining of the sinuses to swell. All the swelling can cause the sinuses to fill with fluid causing the pressure inside them to drop. Any or all of the sinuses may be affected by allergies or infections causing significant pain and discomfort in the cheeks, behind the eyes, forehead and above the teeth. The medical term for a sinus infection is sinusitis or rhinosinusitis. Infections are usually caused by viruses but occasionally can be caused by bacteria or even fungal organisms. Symptoms of acute infections include green or yellow mucous from the nose, sharp pains, headaches and fever. Sinus infections that last longer than two months are considered chronic. Chronic sinusitis symptoms may be more subtle than those of acute infections and often include head congestion, drainage down the back of the throat, fatigue, and reduced sense of taste and smell. Sinusitis is often diagnosed with a thorough history and physical exam by a medical provider. Certain imaging tests and endoscopic evaluations can be helpful when the diagnosis is not straightforward or if other conditions need to be ruled out. Once diagnosed, sinus infections may be treated with antibiotics and/or nasal sprays that include corticosteroids or an antihistamine. Sinus surgery or balloon sinuplasty, an in-office procedure that improves sinus drainage are often considered when first line therapies are not effective. Disorders of the nasal passageways can also cause blockage and unwanted symptoms of pain and congestion. Some of the common causes of nasal blockage include: - Deviated septum – The nasal septum which divides the two nasal passageways may be deviated or off center due to trauma or poor development after birth. Significant deviation can lead to breathing difficulties, chronic congestion, frequent sinus infections, and nosebleeds. Surgery can correct problems caused by a deviated septum. - Perforated septum – Trauma to the septum can also cause it to be punctured leading to frequent nosebleeds, chronic nasal discharge, and breathing difficulties. Skin grafts or plastic membranes can surgically repair a perforated septum. - Nasal polyps – Abnormal growths of tissue called polyps can cause nasal blockage by obstructing or partially obstructing the nasal passageways, causing congestion, breathing difficulties and contributing to infections. Removal of the polyps are recommended not only for symptom relief but to make sure the polyps are not cancerous.
Gaining insights into cancer control from rodents Breakthroughs in our understanding of cancer are often found in unexpected places. For example, a long-living, cancer-resistant rodent called the naked mole rat may hold clues to new cancer prevention and treatment strategies. The reason that naked mole rats seem to be virtually cancer-proof may stem from the high levels of a jelly-like substance called hyaluronan in their bodies. The large hyaluronan molecules surround cells and may protect them from becoming cancerous. Hyaluronan is found in all animals, including humans. It helps to lubricate joints and is an essential component in skin and cartilage. Dr Barbara Triggs-Raine, a biochemist and professor at the University of Manitoba, is one of the few researchers worldwide that are focused on the role of enzymes that break down hyaluronan in cancer and other genetic disorders. With the support of the Canadian Cancer Society, Dr Triggs-Raine is studying whether interfering with an enzyme that breaks down hyaluronan called HYAL2 could be an effective strategy to raise hyaluronan levels to prevent or treat cancer. It is equally important to identify any unintended consequences or side effects that could be expected if HYAL2 is blocked. To address these questions, Dr Triggs-Raine and her team are studying the effects of removing HYAL2 in mice. Their experiments should allow them to see whether this leads to increases in hyaluronan levels in various tissues and whether this can reduce the severity of skin cancers. This research could establish HYAL2 as a promising therapeutic target in cancer, paving the way for the development of HYAL2 blockers to prevent cancer in high-risk individuals and to slow the growth of cancer if it occurs. This innovative idea is Dr Triggs-Raine’s first project dedicated to cancer research, exemplifying how the Society funds the best scientists in multidisciplinary fields to drive progress in achieving our mission.
Proof is In The Deployment (Prediction) This week the first year Academy students put their knowledge to the test. One of the key elements of the modeling pedagogy is that students are given a chance to test their predictive powers using the model that they have built. This stage of the modeling cycle is called deployment. When a model is deployed, the students describe, represent and most importantly predict the behavior of a situation they have not previously encountered. In this case, the model the students were deploying is the constant velocity particle model. This analytical model describes, represents and predicts the behavior of a particle moving at a constant velocity. For the past few weeks, students have been building the model, informed through experimentation/observation and some guidance from staff. In this deployment activity, students were asked to predict the location where two constant velocity buggies would collide. The two buggies had different velocities and were separated a distance of 1.2 meters. The students were asked to describe, represent and quantitatively determine the position where the two buggies would collide. The students seemed to be satisfied with the results, and so next stop…the constant acceleration particle model.
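For readers who want to see the arithmetic behind such a deployment, the short sketch below works one hypothetical case with the constant velocity particle model, x(t) = x0 + v·t. Only the 1.2-meter separation comes from the post; the two buggy speeds and starting positions are assumed purely for illustration.

# Worked sketch of the deployment calculation (speeds are hypothetical).
x0_fast, v_fast = 0.0, 0.40    # m, m/s (fast buggy moving in the +x direction)
x0_slow, v_slow = 1.2, -0.25   # m, m/s (slow buggy starting 1.2 m away, moving toward it)

# Collision occurs when the two positions are equal:
#   x0_fast + v_fast * t = x0_slow + v_slow * t
t_collide = (x0_slow - x0_fast) / (v_fast - v_slow)
x_collide = x0_fast + v_fast * t_collide

print(f"time of collision: {t_collide:.2f} s")       # about 1.85 s
print(f"position of collision: {x_collide:.2f} m")   # about 0.74 m from the fast buggy's start

With these assumed speeds the buggies meet roughly 0.74 meters from the faster buggy's starting line; students' measured velocities would of course give a different predicted position.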
This activity describes how to teach your child to wash vegetables properly. - 5 to 10 minutes - Vegetables such as carrots or potatoes - Scrub brush - Scoot your child up to the sink on a chair or step stool and demonstrate how to use the brush to clean the vegetables. - Talk about how vegetables grow and why we wash them. - Cook, clean, and eat vegetables together. - Visit a garden or grow some root vegetables (carrots or radishes are quick and easy) that can be pulled out of the ground, dirt and all. - Take a few vegetables into the bathtub for cleaning during bath time.
When a user wants to execute a group of statements depending upon the value of an expression, Select Case statements can be used. Each value is called a Case, and the variable is switched on case by case. The Case Else statement is executed if the test expression doesn't match any of the Cases specified by the user. Case Else is an optional statement within Select Case; however, it is good programming practice to always include a Case Else statement.
The syntax of a Select statement in VBScript is:
Select Case expression
   Case expressionlist1
      statement1
      statement2
      ...
   Case expressionlist2
      statement1
      statement2
      ...
   Case expressionlistn
      statement1
      statement2
      ...
   Case Else
      elsestatement1
      elsestatement2
      ...
End Select
For example:
<!DOCTYPE html>
<html>
   <body>
      <script language = "vbscript" type = "text/vbscript">
         Dim MyVar
         MyVar = 1
         Select Case MyVar
            Case 1
               Document.write "The Number is the Least Composite Number"
            Case 2
               Document.write "The Number is the only Even Prime Number"
            Case 3
               Document.write "The Number is the Least Odd Prime Number"
            Case Else
               Document.write "Unknown Number"
         End Select
      </script>
   </body>
</html>
In the above example, the value of MyVar is 1. Hence, Case 1 is executed and the output is:
The Number is the Least Composite Number
Stamp prints on paper using wood or metal carvings. What does a Printmaker do? A Printmaker creates art by using tools to impress ink, textures, and paint onto paper. This form of art embraces the deep cultural traditions from which it arose, and there are several different techniques that have come to define the craft. These techniques—perhaps you have heard of the terms etchings, woodcutting, or relief printing—are different categories for how Printmakers put texture and color onto an object to create a tool. The creation of this tool is the core of your artistic process. In woodcut, for example, Printmaker’s carve and paint a piece of wood. In etching on the other hand, you burn lines into a piece of metal with acid. After it is carved and colored, you impress the tool onto a piece of paper. This creates a print, which is your final artistic product. It’s like a stamp that you press on an ink pad and then on a piece of paper, and you are making that stamp. One of the characteristics of printmaking is that you can reuse your tool to make many additional prints. And multiple prints from the same tool are not considered replicas—each print is an individual work of art, and they are considered different editions (or “impressions”) of a single series.
A biofilm is any group of microorganisms in which cells stick to each other on a surface. These adherent cells are frequently embedded within a self-produced matrix of extracellular polymeric substance (EPS). Biofilm extracellular polymeric substance, which is also referred to as slime (although not everything described as slime is a biofilm), is a polymeric conglomeration generally composed of extracellular DNA, proteins, and polysaccharides. Biofilms may form on living or non-living surfaces and can be prevalent in natural, industrial and hospital settings. The microbial cells growing in a biofilm are physiologically distinct from planktonic cells of the same organism, which, by contrast, are single cells that may float or swim in a liquid medium. Microbes form a biofilm in response to many factors, which may include cellular recognition of specific or non-specific attachment sites on a surface, nutritional cues, or, in some cases, exposure of planktonic cells to sub-inhibitory concentrations of antibiotics. When a cell switches to the biofilm mode of growth, it undergoes a phenotypic shift in behavior in which large suites of genes are differentially regulated.
Formation of a biofilm begins with the attachment of free-floating microorganisms to a surface. These first colonists adhere to the surface initially through weak, reversible adhesion via van der Waals forces. If the colonists are not immediately separated from the surface, they can anchor themselves more permanently using cell adhesion structures such as pili. Hydrophobicity also plays an important role in determining the ability of bacteria to form biofilms, as those with increased hydrophobicity have reduced repulsion between the extracellular matrix and the bacterium. Some species are not able to attach to a surface on their own but are sometimes able to anchor themselves to the matrix or directly to earlier colonists. It is during this colonization that the cells are able to communicate via quorum sensing using products such as acyl-homoserine lactones (AHL). Some bacteria are unable to form biofilms as successfully due to their limited motility; nonmotile bacteria cannot recognize the surface or aggregate together as easily as motile bacteria. Once colonization has begun, the biofilm grows through a combination of cell division and recruitment.
Polysaccharide matrices typically enclose bacterial biofilms. In addition to the polysaccharides, these matrices may also contain material from the surrounding environment, including but not limited to minerals, soil particles, and blood components, such as erythrocytes and fibrin. The final stage of biofilm formation is known as dispersion, and is the stage in which the biofilm is established and may only change in shape and size. The development of a biofilm may allow an aggregate cell colony (or colonies) to become increasingly antibiotic resistant. Cell-cell communication, or quorum sensing (QS), has been shown to be involved in the formation of biofilm in several bacterial species.
There are five stages of biofilm development: initial attachment, irreversible attachment, maturation I, maturation II, and dispersal. Dispersal of cells from the biofilm colony is an essential stage of the biofilm life cycle.
Dispersal enables biofilms to spread and colonize new surfaces. Enzymes that degrade the biofilm extracellular matrix, such as dispersin B and deoxyribonuclease, may play a role in biofilm dispersal, and biofilm matrix degrading enzymes may be useful as anti-biofilm agents. Recent evidence has shown that a fatty acid messenger, cis-2-decenoic acid, is capable of inducing dispersion and inhibiting growth of biofilm colonies. Secreted by Pseudomonas aeruginosa, this compound induces cyclo heteromorphic cells in several species of bacteria and the yeast Candida albicans. Nitric oxide has also been shown to trigger the dispersal of biofilms of several bacterial species at sub-toxic concentrations, and has potential for the treatment of patients who suffer from chronic infections caused by biofilms.
Biofilms are usually found on solid substrates submerged in or exposed to an aqueous solution.
"Enzymatic detachment of Staphylococcus epidermidis biofilms". Antimicrobial Agents and Chemotherapy 48 (7): 2633–6. - Xavier JB, Picioreanu C, Rani SA, van Loosdrecht MC, Stewart PS (December 2005). "Biofilm-control strategies based on enzymic disruption of the extracellular polymeric substance matrix--a modelling study". Microbiology 151 (Pt 12): 3817–32. - Davies DG, Marques CN (March 2009). "A fatty acid messenger is responsible for inducing dispersion in microbial biofilms". Journal of Bacteriology 191 (5): 1393–403. - Barraud N, Hassett DJ, Hwang SH, Rice SA, Kjelleberg S, Webb JS (2006). "Involvement of nitric oxide in biofilm dispersal of Pseudomonas aeruginosa". Journal of Bacteriology 188: 7344–7353. - Barraud N, Storey MV, Moore ZP, Webb JS, Rice SA, Kjelleberg S (2009). "Nitric oxide-mediated dispersal in single- and multi-species biofilms of clinically and industrially relevant microorganisms". Microbial Biotechnology 2: 370–378. - "Dispersal of Biofilm in Cystic Fibrosis using Low Dose Nitric Oxide". University of Southampton. Retrieved 20 January 2012. - Nadell, Carey D.; Xavier, Joao B.; Foster, Kevin R. (1 January 2009). "The sociobiology of biofilms". FEMS Microbiology Reviews 33 (1): 206–224. - Stoodley, Paul; Dirk deBeer andZbigniew Lewandowski (August 1994). "Liquid Flow in Biofilm Systems". Appl Environ Microbiol. 60 (8): 2711–2716. - Stewart PS, Costerton JW (July 2001). "Antibiotic resistance of bacteria in biofilms". Lancet 358 (9276): 135–8. - Molin S, Tolker-Nielsen T (June 2003). "Gene transfer occurs with enhanced efficiency in biofilms and induces enhanced stabilisation of the biofilm structure". Current Opinion in Biotechnology 14 (3): 255–61. - Jakubovics NS, Shields RC, Rajarajan N, Burgess JG (December 2013). "Life after death: the critical role of extracellular DNA in microbial biofilms". Lett. Appl. Microbiol. 57 (6): 467–75. - Spoering AL, Lewis K (December 2001). "Biofilms and planktonic cells of Pseudomonas aeruginosa have similar resistance to killing by antimicrobials". Journal of Bacteriology 183 (23): 6746–51. - Characklis, WG; Nevimons, MJ; Picologlou, BF (1981). "Influence of Fouling Biofilms on Heat Transfer". Heat Transfer Engineering 3: 23. - Schwermer CU, Lavik G, Abed RM, et al. (May 2008). "Impact of nitrate on the structure and function of bacterial biofilm communities in pipelines used for injection of seawater into oil fields". Applied and Environmental Microbiology 74 (9): 2841–51. - Martins dos Santos VAP, Yakimov MM, Timmis KN, Golyshin PN (2008). "Genomic Insights into Oil Biodegradation in Marine Systems". In Díaz E. Microbial Biodegradation: Genomics and Molecular Biology. Horizon Scientific Press. p. 1971. - "Introduction to Biofilms: Desirable and undesirable impacts of biofilm". (primary source) - Andersen PC, Brodbeck BV, Oden S, Shriner A, Leite B (September 2007). "Influence of xylem fluid chemistry on planktonic growth, biofilm formation and aggregation of Xylella fastidiosa". FEMS Microbiology Letters 274 (2): 210–7. - Bollinger, Randal; Barbas, Andrew; Bush, Errol; Lin, Shu; Parker, William (24 June 2007). "Biofilms in the large bowel suggest an apparent function of the human vermiform appendix". The Journal of Theoretical Biology 249 (4): 826–831. - Abee, T; Kovács, A. T.; Kuipers, O. P.; Van Der Veen, S (2011). "Biofilm formation and dispersal in Gram-positive bacteria". Current Opinion in Biotechnology 22 (2): 172–9. - Danhorn, T; Fuqua, C (2007). "Biofilm formation by plant-associated bacteria". 
Annual Review of Microbiology 61: 401–22. - "Research on microbial biofilms (PA-03-047)". NIH, National Heart, Lung, and Blood Institute. 2002-12-20. - Rogers A H (2008). Molecular Oral Microbiology. Caister Academic Press. pp. 65–108. - Imamura Y, Chandra J, Mukherjee PK, et al. (January 2008). "Fusarium and Candida albicans biofilms on soft contact lenses: model development, influence of lens type, and susceptibility to lens care solutions". Antimicrobial Agents and Chemotherapy 52 (1): 171–82. - Lewis K (April 2001). "Riddle of biofilm resistance". Antimicrobial Agents and Chemotherapy 45 (4): 999–1007. - Parsek MR, Singh PK (2003). "Bacterial biofilms: an emerging link to disease pathogenesis". Annual Review of Microbiology 57: 677–701. - Davis SC, Ricotti C, Cazzaniga A, Welsh E, Eaglstein WH, Mertz PM (2008). "Microscopic and physiologic evidence for biofilm-associated wound colonization in vivo". Wound Repair and Regeneration 16 (1): 23–9. - Sanclement J, Webster P, Thomas J, Ramadan H (2005). "Bacterial biofilms in surgical specimens of patients with chronic rhinosinusitis". Laryngoscope 115 (4): 578–82. - Sanderson AR, Leid JG, Hunsaker D (July 2006). "Bacterial biofilms on the sinus mucosa of human subjects with chronic rhinosinusitis". The Laryngoscope 116 (7): 1121–6. - Auler ME, Morreira D, Rodrigues FF, et al. (April 2009). "Biofilm formation on intrauterine devices in patients with recurrent vulvovaginal candidiasis". Medical Mycology: 1–6. - Leevy WM, Gammon ST, Jiang H, et al. (December 2006). "Optical imaging of bacterial infection in living mice using a fluorescent near-infrared molecular probe". Journal of the American Chemical Society 128 (51): 16476–7. - Kaplan JB, Izano EA, Gopal P, et al. (2012). "Low Levels of β-Lactam Antibiotics Induce Extracellular DNA Release and Biofilm Formation in Staphylococcus aureus". mBio 3 (4). - Augustin Mihai, Carmen Balotescu-Chifiriuc, Veronica Lazăr, Ruxandra Stănescu, Mihai Burlibașa, Dana Catrinel Ispas (Dec 2010). "Microbial biofilms in dental medicine in reference to implanto-prostethic rehabilitation". Revista de chirurgie oro-maxilo-facială și implantologie (in Română) 1 (1): 9–13. (webpage has a translation button) - Marquis RE (September 1995). "Oxygen metabolism, oxidative stress and acid-base physiology of dental plaque biofilms". J. Ind. Microbiol. 15 (3): 198–207. - Lemos JA, Abranches J, Burne RA (January 2005). "Responses of cariogenic streptococci to environmental stresses". Curr Issues Mol Biol 7 (1): 95–107. - TAMM C, HODES ME, CHARGAFF E (March 1952). "The formation apurinic acid from the desoxyribonucleic acid of calf thymus". J. Biol. Chem. 195 (1): 49–63. - FREESE EB (April 1961). "Transitions and transversions induced by depurinating agents". Proc. Natl. Acad. Sci. U.S.A. 47: 540–5. - Li YH, Lau PC, Lee JH, Ellen RP, Cvitkovitch DG (February 2001). "Natural genetic transformation of Streptococcus mutans growing in biofilms". J. Bacteriol. 183 (3): 897–908. - Senadheera D, Cvitkovitch DG (2008). "Quorum sensing and biofilm formation by Streptococcus mutans". Adv. Exp. Med. Biol. 631: 178–88. - Michod RE, Bernstein H, Nedelcu AM (May 2008). "Adaptive value of sex in microbial pathogens". Infect. Genet. Evol. 8 (3): 267–85. http://www.hummingbirds.arizona.edu/Faculty/Michod/Downloads/IGE%20review%20sex.pdf - Oggioni MR, Trappetti C, Kadioglu A, Cassone M, Iannelli F, Ricci S, Andrew PW, Pozzi G (September 2006). "Switch from planktonic to sessile life: a major event in pneumococcal pathogenesis". Mol. Microbiol. 
Legionella bacteria are known to grow under certain conditions in biofilms, in which they are protected against disinfectants. Workers in cooling towers, persons working in air-conditioned rooms and people taking a shower are exposed to Legionella by inhalation when the systems are not well designed, constructed, or maintained.
It has been proposed that competence development and biofilm formation are an adaptation of S. pneumoniae to survive the defenses of the host. In particular, the host's polymorphonuclear leukocytes produce an oxidative burst to defend against the invading bacteria, and this response can kill bacteria by damaging their DNA. Competent S. pneumoniae in a biofilm have the survival advantage that they can more easily take up transforming DNA from nearby cells in the biofilm to use for recombinational repair of oxidative damage in their DNA. Competent S. pneumoniae can also secrete an enzyme (murein hydrolase) that destroys non-competent cells (fratricide), causing DNA to be released into the surrounding medium for potential use by the competent cells. S. pneumoniae is the main cause of community-acquired pneumonia and meningitis in children and the elderly, and of septicemia in HIV-infected persons. When S. pneumoniae grows in biofilms, genes are specifically expressed that respond to oxidative stress and induce competence. Formation of a biofilm depends on competence stimulating peptide (CSP). CSP also functions as a quorum-sensing peptide; it not only induces biofilm formation, but also increases virulence in pneumonia and meningitis.
When the biofilm, containing S. mutans and related oral streptococci, is subjected to acid stress, the competence regulon is induced, leading to resistance to being killed by acid. As pointed out by Michod et al., transformation in bacterial pathogens likely provides for effective and efficient recombinational repair of DNA damage. It appears that S. mutans can survive the frequent acid stress in oral biofilms, in part, through the recombinational repair provided by competence and transformation. A peptide pheromone quorum sensing signaling system in S. mutans includes the Competence Stimulating Peptide (CSP) that controls genetic competence. Genetic competence is the ability of a cell to take up DNA released by another cell. Competence can lead to genetic transformation, a form of sexual interaction, favored under conditions of high cell density and/or stress where there is maximal opportunity for interaction between the competent cell and the DNA released from nearby donor cells. This system is optimally expressed when S. mutans cells reside in an actively growing biofilm. Biofilm-grown S. mutans cells are genetically transformed at a rate 10- to 600-fold higher than S. mutans growing as free-floating planktonic cells suspended in liquid. The biofilm on the surface of teeth is frequently subject to oxidative stress and acid stress.
Dietary carbohydrates can cause a dramatic decrease in pH in oral biofilms to values of 4 and below (acid stress). A pH of 4 at body temperature of 37 °C causes depurination of DNA, leaving apurinic (AP) sites in DNA, especially loss of guanine. Research has shown that sub-therapeutic levels of β-lactam antibiotics induce biofilm formation in Staphylococcus aureus. This sub-therapeutic level of antibiotic may result from the use of antibiotics as growth promoters in agriculture, or during the normal course of antibiotic therapy. The biofilm formation induced by low-level methicillin was inhibited by DNase, suggesting that sub-therapeutic levels of antibiotic also induce extracellular DNA release. New staining techniques are being developed to differentiate bacterial cells growing in living animals, e.g. from tissues with allergic inflammation. Biofilms can also form on the inert surfaces of implanted devices such as catheters, prosthetic cardiac valves and intrauterine devices. It has recently been shown that biofilms are present on the removed tissue of 80% of patients undergoing surgery for chronic sinusitis. The patients with biofilms were shown to have been denuded of cilia and goblet cells, unlike the controls without biofilms, who had normal cilia and goblet cell morphology. Biofilms were also found on samples from two of the 10 healthy controls in the study. The species of bacteria from intraoperative cultures did not correspond to the bacterial species in the biofilm on the respective patient's tissue. In other words, the cultures were negative even though the bacteria were present. Biofilms have been found to be involved in a wide variety of microbial infections in the body, by one estimate 80% of all infections. Infectious processes in which biofilms have been implicated include common problems such as urinary tract infections, catheter infections, middle-ear infections, formation of dental plaque, gingivitis, coating of contact lenses, and less common but more lethal processes such as endocarditis, infections in cystic fibrosis, and infections of permanent indwelling devices such as joint prostheses and heart valves. More recently it has been noted that bacterial biofilms may impair cutaneous wound healing and reduce the efficacy of topical antibacterials on infected skin wounds. Biofilms and infectious diseases: for other species in disease-associated biofilms, see below. Biofilms are formed by bacteria that colonize plants, e.g. Pseudomonas putida, Pseudomonas fluorescens, and related pseudomonads, which are common plant-associated bacteria found on leaves, roots, and in the soil; the majority of their natural isolates form biofilms. Several nitrogen-fixing symbionts of legumes such as Rhizobium leguminosarum and Sinorhizobium meliloti form biofilms on legume roots and other inert surfaces. Many different bacteria form biofilms, including gram-positive (e.g. Bacillus spp, Listeria monocytogenes, Staphylococcus spp, and lactic acid bacteria, including Lactobacillus plantarum and Lactococcus lactis) and gram-negative species (e.g. Escherichia coli, or Pseudomonas aeruginosa). - Biofilms can be found on rocks and pebbles at the bottom of most streams or rivers and often form on the surface of stagnant pools of water. In fact, biofilms are important components of food chains in rivers and streams and are grazed by the aquatic invertebrates upon which many fish feed. 
- Biofilms can grow in the most extreme environments: from, for example, the extremely hot, briny waters of hot springs ranging from very acidic to very alkaline, to frozen glaciers. - In the human environment, biofilms can grow in showers very easily since they provide a moist and warm environment for the biofilm to thrive. Biofilms can form inside water and sewage pipes and cause clogging and corrosion. Biofilms on floors and counters can make sanitation difficult in food preparation areas. - Biofilms in cooling- or heating-water systems are known to reduce heat transfer. - Biofilms in marine engineering systems, such as pipelines of the offshore oil and gas industry, can lead to substantial corrosion problems. Corrosion is mainly due to abiotic factors; however, at least 20% of corrosion is caused by microorganisms that are attached to the metal surface (i.e., microbially influenced corrosion). - Bacterial adhesion to boat hulls serves as the foundation for biofouling of seagoing vessels. - Slow sand filters rely on biofilm development in the same way to filter surface water from lake, spring or river sources for drinking purposes. What we regard as clean water is effectively a waste material to these microcellular organisms. - Biofilms can help eliminate petroleum oil from contaminated oceans or marine systems. The oil is eliminated by the hydrocarbon-degrading activities of microbial communities, in particular by a remarkable recently discovered group of specialists, the so-called hydrocarbonoclastic bacteria (HCB). - Stromatolites are layered accretionary structures formed in shallow water by the trapping, binding and cementation of sedimentary grains by microbial biofilms, especially of cyanobacteria. Stromatolites include some of the most ancient records of life on Earth, and are still forming today. - Biofilms are present on the teeth of most animals as dental plaque, where they may cause tooth decay and gum disease. - Biofilms are found on the surface of and inside plants. They can either contribute to crop disease or, as in the case of nitrogen-fixing Rhizobium on roots, exist symbiotically with the plant. Examples of crop diseases related to biofilms include Citrus Canker, Pierce's Disease of grapes, and Bacterial Spot of plants such as peppers and tomatoes. - Biofilms are used in - Studies from 2003 showed that the immune system supports biofilm development in the large intestine. This was supported mainly by the fact that the two molecules most abundantly produced by the immune system also support biofilm production and are associated with the biofilms developed in the gut. This is especially important because the appendix holds a large amount of these bacterial biofilms. This discovery supports the idea that a possible function of the appendix is to help reinoculate the gut with beneficial gut flora. Biofilms are ubiquitous. Nearly every species of microorganism, not only bacteria and archaea, has mechanisms by which its cells can adhere to surfaces and to one another. Biofilms will form on virtually every non-shedding surface in a non-sterile aqueous (or very humid) environment. Where do biofilms form? However, biofilms are not always less susceptible to antibiotics. For instance, the biofilm form of Pseudomonas aeruginosa has no greater resistance to antimicrobials than do stationary-phase planktonic cells, although when the biofilm is compared to logarithmic-phase planktonic cells, the biofilm does have greater resistance to antimicrobials. 
This resistance to antibiotics in both stationary phase cells and biofilms may be due to the presence of persister cells. Bacteria living in a biofilm usually have significantly different properties from free-floating bacteria of the same species, as the dense and protected environment of the film allows them to cooperate and interact in various ways. One benefit of this environment is increased resistance to detergents and antibiotics, as the dense extracellular matrix and the outer layer of cells protect the interior of the community. In some cases antibiotic resistance can be increased a thousandfold. Lateral gene transfer is greatly facilitated in biofilms and leads to a more stable biofilm structure. Extracellular DNA is a major structural component of many different microbial biofilms. Enzymatic degradation of extracellular DNA can weaken the biofilm structure and release microbial cells from the surface. The biofilm is held together and protected by a matrix of secreted nutrients and signalling molecules. This matrix is strong enough that under certain conditions, biofilms can become fossilized (Stromatolites).
Vibration transducers can be split into two basic types – accelerometers and geophones (or seismometers). Accelerometers have an output proportional to acceleration, and geophones have an output proportional to velocity. So how can both be used to measure vibration? There’s a simple relationship between acceleration and velocity – the former is the rate of change, or derivative, of the latter. Therefore we can convert from one to the other; in particular, integrating an acceleration signal yields a velocity signal. This is normally done in the time domain, using a filter (called an integrator), but it can also be done in the frequency domain by dividing an acceleration spectrum by 2πf, where f is the frequency. This effectively slopes the spectrum by -6 dB/octave, so a velocity spectrum will appear to have far fewer high-frequency components.
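As a rough illustration of that conversion, here is a minimal Python sketch (not part of the original article) that integrates a synthetic acceleration signal both in the time domain and in the frequency domain. The 100 Hz test tone, 1 kHz sample rate, and use of NumPy are illustrative assumptions:

import numpy as np

fs = 1000.0                         # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)     # one second of samples
f0 = 100.0                          # test-tone frequency in Hz
accel = np.sin(2 * np.pi * f0 * t)  # acceleration signal, amplitude 1

# Time-domain route: run the acceleration through a simple integrator.
vel_time = np.cumsum(accel) / fs

# Frequency-domain route: divide the acceleration spectrum by 2*pi*f
# (the 1j factor accounts for the 90-degree phase shift of integration).
spec = np.fft.rfft(accel)
freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
vel_spec = np.zeros_like(spec)
nonzero = freqs > 0
vel_spec[nonzero] = spec[nonzero] / (1j * 2 * np.pi * freqs[nonzero])
vel_freq = np.fft.irfft(vel_spec, n=len(accel))

# For a 100 Hz acceleration of amplitude 1, the recovered velocity amplitude
# should come out close to 1 / (2*pi*100).
print(np.max(np.abs(vel_freq)), 1 / (2 * np.pi * f0))

Dividing by 2πf scales each successive octave down by a further 6 dB, which is exactly the -6 dB/octave slope described above.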
Wayfinder navigators always look for signs of weather at sunrise and sunset. This is when they try to predict the weather for the next 12 hours. One of the easiest ways to predict weather is to look at the clouds. There are many different types of clouds in the troposphere (where all weather forms). Different clouds mean different types of weather. Cloud names that describe the shapes of clouds are: - cirrus – meaning curl (as in a lock of hair) or fringe - cumulus – meaning heap or pile - stratus – meaning spread over an area or layer. Nimbus means rain-bearing, and alto means high. The following are some of the more common clouds used to predict weather in three categories – high-level, mid-level and low-level clouds. The bases of these clouds form at about 6200 metres above sea level. They are usually composed of ice crystals. - Cirrus clouds – thin, wispy clouds strewn across the sky in high winds. A few cirrus clouds may indicate fair weather, but increasing cover indicates a change of weather (an approaching warm front) will occur within 24 hours. These are the most abundant of all high-level clouds. - Cirrocumulus – like ripples or fish scales (sometimes called a mackerel sky). When cirrus clouds turn into cirrocumulus, a storm may come – in tropical regions, that could be a hurricane. - Cirrostratus – like thin sheets that spread across the sky and give the sky a pale, whitish, translucent appearance. They often appear 12–24 hours before a rainstorm or snowstorm. The bases of these clouds form at about 2000–6200 m above sea level. They are mostly made of water droplets but can contain ice crystals. The clouds are often seen as bluish-grey sheets that cover most, if not all, of the sky. They can obscure the Sun. - Altocumulus – composed of water droplets and appear as layers of grey, puffy, round, small clouds. Altocumulus clouds on a warm, humid morning may mean thunderstorms late in the afternoon. The bases of these clouds form at altitudes below 2000 m. They are mostly made of drops of water. - Cumulus – known as fair-weather clouds because they usually indicate fair, dry conditions. If there is precipitation, it is light. The clouds have a flattish base with rounded stacks or puffs on top. When the puffs look like cauliflower heads they’re called cumulus congestus or towering cumulus. They can get very high. - Cumulonimbus clouds – thunder clouds that have built up from cumulus clouds. Their bases are often quite dark. These clouds can forecast some of the most extreme weather, including heavy rain, hail, snow, thunderstorms, tornadoes and hurricanes. - Stratus – dull greyish clouds that stretch across and block the sky. They look like fog in the sky. Stratus cover is also called overcast. If their bases reach the ground, they become fog. They can produce drizzle or fine snow. - Stratocumulus – low, puffy and grey, forming rows in the sky. They indicate dry weather if the temperature differences between night and day are slight. Precipitation is rare, but they can turn into nimbostratus clouds. - Nimbostratus – dark grey, wet-looking cloudy layer so thick that it completely blocks out the Sun. They often produce precipitation in the form of rain and/or snow. Precipitation can be long lasting. Wayfinder navigators use clouds to work out where the wind is coming from or if it changes direction (so they can trim their sails accordingly). For example, they might look for cloud roads – puffs of cloud that come up from the far end of the horizon to form a ‘road’ in the sky. 
Like smoke from a haystack, cloud roads follow the wind. A cloud road indicates the wind is coming from the horizon. If the road is straight, the wind is steady – but if you see the road curve, it means that the wind direction will change. The way the road curves will tell you the new direction. Meteorologists call this kind of phenomenon ‘cloud streets’. Examples of navigator weather talk Navigators realise you can’t predict the weather from a single snapshot – that is, by noting how the sky looks at one moment in time. Instead, you have to observe changes over time. Here are some examples of statements from navigator Nainoa Thompson concerning navigation and the weather (recorded by Sam Low during Hōkūle’a’s voyage from Tahiti to Hawai’i in February 2000): - “The sky where the Sun is rising is clear – there are no smoky clouds caused by strong winds stirring salt into the atmosphere – so the winds will be relatively light today.” - “There’s a change from seeing squalls off the starboard side yesterday to a view of high towering cloud masses but no active squalls. The wind feels stronger than the day before, and I can see wavelets on the surface of the ocean. The wind is coming from the normal direction of SE trade winds. There are low-level cumulus clouds ahead. No indications of squalls – approaching an area of clean-flowing wind from SE, which will be steady. Predict that, in the next 12 hours, the wind will remain steady from the SE at a fairly constant speed, maybe 10 knots, so we will be able to sail north today.” Read the story of the Hōkūle’a and the beginnings of the wayfinding voyages of rediscovery. Explore the site for voyage tracking maps, learning journeys, videos, teaching activities and more related to the art and science of Polynesian voyaging. Nephology, the study of clouds has always been a daydreamer’s science. It was founded by a young student who preferred to stare out the window rather than pay attention in class. This TED-Ed video explains how Luke Howard named and classified clouds. Download a Clouds or wind poster from the MetService.
Technological change refers to the process by which new products and processes are generated. When new technologies involve a new way of making existing products, the technological change is called process innovation. When they include entirely new products, the change is referred to as product innovation. The invention of assembly-line automobile production by the Ford Motor Company is a widely cited example of the former, while automated teller machines (ATMs) and facsimile machines can be seen as product innovations. Broadly speaking, technological change spurs economic growth and general well-being by enabling better utilization of existing resources and by bringing about new and better products. Besides benefits to suppliers or inventors of new technologies via disproportionate profits, new technologies have benefits for consumers (e.g., innovations in health care) and for the society (e.g., better oil-drilling techniques enabling less wastage and a more effective utilization of the oil in the ground). Current technologies also make the development of future technologies easier by generating new ideas and possibilities. Changing technologies, however, can have negative consequences for certain sectors or constituencies. Examples of negative aspects include pollution (including environmental, noise, and light pollution) associated with production processes, increased unemployment from labor-saving new technologies, and so forth. This suggests that society must consider the relative costs and benefits of new technologies. The process of technological change can be seen to have three stages: invention, development, and diffusion. The invention stage involves the conception of a new idea. The idea might be about a new product or about a better technique for making existing products. The invention might be due to a latent demand (e.g., the cure for an existing illness); such inventions are referred to as demand-pull inventions. Inventions can alternately be supply driven, when they are by-products of the pursuit of other inventions. For instance, a number of products, such as the microwave oven, were by-products of the U.S. space program. Yet another possibility is that a new product or process might emerge as an unplanned by-product of the pursuit of another technology (serendipitous invention). In the development stage, the prototype of the invention or the idea is further developed and tested for possible side effects (as with pharmaceutical drugs) and reliability (as with vehicles and airplanes). The invention is also made user-friendly in this stage. The final stage of the innovation process involves making it accessible to most users through market penetration. The benefits of an innovation, both to inventors and to society, are maximized only when the innovation is efficiently diffused. Some innovations are easy to adopt while others involve effort on the part of adopters. For instance, one must learn how to use a computer, a new type of software, or a new type of airplane. Thus, the diffusion of technologies takes time. A useful concept in this regard was provided by Zvi Griliches (1930-1999). Griliches examined the time path of diffusion for hybrid corn seeds. He found that the technology diffused like an S-curve over time, implying that initially diffusion occurred at an accelerated rate, then at a declining rate, and eventually the rate of diffusion tapered off. 
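Griliches's S-shaped time path can be illustrated with a simple logistic adoption model. The following minimal Python sketch is illustrative only; the ceiling, growth rate, and midpoint year are made-up parameters, not estimates from the hybrid corn study:

import math

def adoption_share(year, ceiling=0.95, rate=0.6, midpoint=1940):
    """Fraction of potential adopters using the technology in a given year (logistic curve)."""
    return ceiling / (1.0 + math.exp(-rate * (year - midpoint)))

for year in range(1930, 1951, 5):
    print(year, round(adoption_share(year), 3))

Run over 1930 to 1950 in five-year steps, the share creeps along at first, climbs fastest around the midpoint year, and then flattens out as it approaches the ceiling, tracing the three phases of the S-curve.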
Various studies have examined the diffusion of other technologies (new airplanes, ATM machines, etc.), and generally the evidence seems to bear out the prevalence of the S-curve of diffusion. There are different avenues of cooperation between the private and public sector in the three stages of innovation. For example, all three stages might take place in the same sector, or there might be cooperation in only some stages (e.g., government agriculture extension services subsidize the diffusion of many farming technologies). Austrian economist Joseph Schumpeter (1883-1950) made significant contributions to the economics of technological change around the middle of the twentieth century. His best-known concept is referred to as the Schumpeterian hypothesis. According to this hypothesis, which linked market structure and innovation, monopolies (due to their large reserves) are perhaps better suited than competitive firms at bringing about new products and processes. This concept called into question the then widely held view that competitive markets were superior in all respects, and provided a redeeming feature of monopolies. Since its inception, the Schumpeterian hypothesis has been a matter of much debate and analysis in the economics literature. The nature of technological change can vary across sector and products and over time. Broadly speaking, economists tend to classify technological change as Hicks-neutral, Harrod-neutral, or labor-saving (see, for example, Sato and Beckmann 1968). Under Hicks-neutral technological change, the rate of substitution of one input for another at the margin (think of substituting capital for one worker) remains unchanged if the factor proportions (i.e., capital-labor ratio) are constant. Harrod-neutral technological change refers to a constant capital-output ratio when the interest rate is unchanged. Finally, labor-saving technological change favors the capital input over labor. Numerous technologies involving increased computerization in recent years are examples of labor-saving technological change. Over time, researchers have conducted studies to test the nature of technological change for various sectors and countries. A number of theories of technological change have been proposed by economists. Some of these theories have evolved over time by refinements of earlier theories, while others have benefited from new revelations. Adam Smith (1723-1790) recognized the role of changing technologies. According to him, improvements in production technology would emerge as a by-product of the division of labor, including the emergence of a profession of schedulers or organizers akin to modern-day engineers. A specialized worker doing the same job repetitively would tend to look for ways to save time and effort. In Smith’s world, productivity could also increase indirectly via capital accumulation. Karl Marx’s (1818-1883) notion of the tendency of the rate of profit to fall stems from a recognition of technological change (process innovation) leading to more efficient production, and the replacement of labor with capital or machinery. Labor-saving innovation or mechanization occurs when Marx’s capitalists are unable to further lengthen the working day and therefore are unable to extract further surplus value in absolute form from labor. Kenneth Arrow introduced the notion that production processes may be refined over time as workers gain greater knowledge from repeat action. 
Thus, new process technologies might emerge; such change is formally described as emerging from learning-by-doing. The degree of appropriability of research benefits was considered by Arrow to be a strong incentive for firms to engage in research and development. Nathan Rosenberg postulated that the degree of innovation opportunities dictates the research effort that firms put forth. For instance, innovation opportunities expand with new developments in basic science. Richard Nelson and Sidney Winter proposed an alternative theory of technological change. This theory, referred to as the evolutionary theory, argues that technological change evolves over time as newer generations (or improvements) of existing technologies are developed. In other words, the evolutionary theory considers technological change to be less drastic. The process of technological change is uncertain in that there is no guarantee of whether, when, and at what scale the innovation will occur. Four types of uncertainties are generally associated with the process of technological change. One, there is market uncertainty resulting from the lack of information about the winner of the innovation race. For example, of the many pharmaceutical firms pursuing a cure for an illness, none is certain about who will succeed, or when. This uncertainty sometimes results in excessive resources being devoted to the pursuit of a particular innovation as firms try to improve their odds of beating others. Two, there is technological uncertainty regarding a lack of knowledge about research resources sufficient to guarantee success. Will a doubling of the number of scientists employed by a drug company double its odds of inventing a successful cure? Third, there is diffusion uncertainty regarding the eventual users and market acceptance of the innovation. Finally, there is uncertainty about possible government regulatory action that the new product or process might face. These regulations might deal with safety, reliability, or the environment. The pace of technological change can vary across industries, firms, and countries, depending upon the resources devoted to research and the nature of products or processes pursued. For instance, the electronics industry, by its nature, has more room for technological improvement than, say, the paper industry. Governments try to increase the rate of technological change by various means. These measures include directly engaging in research, providing research subsidies or tax breaks, inviting foreign investment (and consequently technology) in specific industries, and strengthening the laws for protecting intellectual property. Sometimes, however, governments have to monitor the introduction of new products and processes to ensure societal well-being. Examples of such cases include drug-testing regulation and testing for the environmental impacts of new technologies before they are introduced in the market. In closing, our understanding of the process of technological change has improved over time. Technological change is an important input to a country’s economic growth, and we owe a large part of our improving living standards to changing technologies. Some technologies, however, can have undesirable side effects. Another issue is that technological progress across nations is uneven, and the rapid diffusion of new technologies from developed nations to developing nations remains a challenge. 
SEE ALSO Growth Accounting; Physical Capital; Production; Schumpeter, Joseph; Solow Residual, The; Technology; Technology, Transfer of
Dasgupta, Partha, and Paul Stoneman, eds. 1987. Economic Policy and Technological Performance. Cambridge, U.K.: Cambridge University Press. Goel, Rajeev K. 1999. Economic Models of Technological Change. Westport, CT: Quorum. Kamien, Morton I., and Nancy L. Schwartz. 1982. Market Structure and Innovation. Cambridge, U.K.: Cambridge University Press. Nelson, Richard R., and Sidney G. Winter. 1982. An Evolutionary Theory of Economic Change. Cambridge, MA: Belknap. Reinganum, Jennifer F. 1989. The Timing of Innovation: Research, Development, and Diffusion. In Handbook of Industrial Organization, ed. Richard Schmalensee and Robert Willig, 849-908. New York: Elsevier. Sato, Ryuzo, and M. J. Beckmann. 1968. Neutral Inventions and Production Functions. Review of Economic Studies 35 (1): 57-66. Schumpeter, Joseph. 1950. Capitalism, Socialism, and Democracy. 3rd ed. New York: Harper. Rajeev K. Goel "Change, Technological." International Encyclopedia of the Social Sciences. Encyclopedia.com. (November 20, 2017). http://www.encyclopedia.com/social-sciences/applied-and-social-sciences-magazines/change-technological The Spirit of Technology. In the mid-nineteenth century the United States witnessed dynamic technological changes that were brought on by the growth of business and industry. European countries, linked in a competitive capitalistic market with Americans, also experienced technological advances. In 1851 the first modern world’s fair, the London Crystal Palace Exhibition, was held to display new discoveries and inventions. American products, while representing a small proportion of the exhibits at the fair, made a strong impression on European visitors. Cyrus McCormick’s reaper, Samuel Colt’s revolver, Gail Borden’s dehydrated “meat biscuit,” and Charles Goodyear’s vulcanized rubber were some of the most popular American attractions. So successful was the London exhibition that in 1853 the United States hosted its own world’s fair in New York City to display the revolutionary pace of American technological progress to visiting Europeans as well as to the American public. Americans and Europeans alike celebrated technology, viewing it, like much else in their changing world, with optimism. Technology and Agriculture. American innovations in agriculture had a profound impact on the lives of farmers, especially in the new agricultural lands that were opening in the West. The chilled-iron plow, patented in 1868 by James Oliver, helped farmers break up the dry, hard prairie soil, while the gang plow, which had wheels, allowed the operator to ride on the machine. During the 1860s and 1870s harrows, which broke up and smoothed the soil, and grain drills, which scattered grain, were improved. By the 1870s the straddle-row cultivator had become popular; riding on the cultivator and operating the attached shovels with his feet, the farmer could cover twice as much acreage as was possible with the one-horse plow. The most important labor-saving device was the agricultural binder. 
The reaper, invented by Cyrus McCormick in 1834, had mechanized the cutting of wheat, but manual labor was still required to collect and bind the cut product; the binder, introduced by John E. Heath in 1850 and later improved by John F. Appleby and other inventors, mechanized these processes as well, resulting in a significant expansion in American wheat production. Transportation. Between 1850 and 1860 railroad mileage more than tripled in the United States, and by 1860 it surpassed that of any other country in the world. In 1852 the Mississippi River was crossed by a railroad for the first time, and a year later Congress approved funds for an army expedition to select the best route for a transcontinental rail line. Sectional tensions prevented the building of the line until the 1860s, and in 1869 the first railroad extending from one coast to the other was completed when the east-to-west and west-to-east sections met at Promontory Point, Utah. Coal-burning locomotives began to replace wood-burning ones in the 1850s, and the introduction of the Pullman Luxury Car in 1858 and the Pullman Hotel Car in 1867 made train travel more appealing to Americans. Safety increased with the advent of the air brake, invented in 1868 by George Westinghouse, and the adoption of automatic signal systems to avoid accidents between trains using the same tracks. Finally, iron and steel bridges began to replace wooden ones, further increasing safety as well as holding down costs. The first all-iron railroad bridge was built in 1845, and in 1851 the engineer John A. Roebling designed the Niagara Suspension Bridge. Construction of the Brooklyn Bridge, which accommodated both road and train traffic, began in 1869 and was completed in 1883. Technology and the City. Technology helped foster the growth of American cities between 1850 and 1877. The first streetcar began operation in New York City in 1852; in 1873 the first cable car appeared in San Francisco; in 1864 an engineer, Hugh B. Wilson, proposed the construction of a subway in New York (the first subway system, however, would open in the 1890s in Boston). Elisha Graves Otis invented the passenger elevator in 1852; following further improvements, the elevator would become a central component of the skyscrapers that would be built in the 1880s and 1890s. Even in the 1860s and 1870s, however, engineers were using steel to build stronger and taller buildings. The Bessemer process, developed in the 1850s as a cheap and efficient means of making steel from iron, led to a large increase in steel production and helped foster this change. Telegraph and Telephone. In 1844 Samuel F. B. Morse revolutionized American communications with the perfection of the telegraph. In the 1850s Morse and other inventors, especially Cyrus W. Field, promoted the development of a cable that would allow telegraph signals to be transmitted across the Atlantic Ocean. When the transatlantic cable was completed in August 1858, Queen Victoria cabled President James Buchanan of her hope that “the electric cable which now connects Great Britain with the United States will prove an additional link of friendship between the nations.” Within a few weeks, however, the cable had lost its ability to transmit signals, and the outbreak of the Civil War delayed repairs. After 1865 new cables were laid, connecting the United States, Britain, and France; for the next fifty years ocean cables served as the quickest means of overseas communications. 
In 1873 Alexander Graham Bell arrived in Boston from Scotland and began looking for a way to transmit sounds through a telegraph wire. Bell and his assistant, Thomas A. Watson, conducted many experiments with pairs of telegraph instruments, and on 2 June 1875 Bell heard the vibrations of Watson’s finger through the wire. Finally, on 10 March 1876 Bell communicated the first vocal message over an electric wire: “Watson, come here, I want you.” By that time Bell had already received a patent for his invention: the telephone. Electrical Innovations. In the 1860s and 1870s European and American inventors began to explore the field of electric lighting. Electric dynamos, which had been created in the 1830s and 1840s by Joseph Henry, Thomas Davenport, and Charles G. Page, provided the energy for these experiments. Before 1880 at least nineteen electric lamps had been perfected by Europeans and Americans, including the arc lamp, which was used in street lighting. The most profound impact on the lighting industry, however, was made by Thomas Alva Edison, a prolific inventor who before 1877 had created an electric voting machine; the mimeograph; and the “quadruplex,” by means of which four telegraph messages could be sent through a single wire. In 1876 Edison established what he called a “scientific” factory at Menlo Park, New Jersey, where, with fifteen assistants, he tried to create “a minor invention every ten days and a big thing every six months or so.” In 1878 Edison would transform his factory into the Edison Electric Light Company, and in 1879 he would perfect the incandescent lightbulb. Technology and Science. Most Americans of the mid-nineteenth century viewed technology as the practical outcome of modern science. The proliferation of technological innovations during the period contributed to this perception, as did the public declarations of some American scientists. Although the president of Rensselaer Polytechnic Institute in Troy, New York, proclaimed in 1855 that “science has cast its illuminating rays on every process of Industrial Art,” most technologists depended less on scientific theory than on trial-and-error experimentation and intuition. This divergence was, however, less marked in some areas than others. Developments in the field of electricity were based on scientific theories dating back to the discovery of electromagnetism and the work of such scientists as Joseph Henry and Michael Faraday. Moreover, technology became increasingly linked to science in the middle of the century as scientific education became more advanced and the number of technical schools increased. In the 1840s and 1850s Yale, Harvard, and other Ivy League institutions established “scientific schools” that stressed engineering and technology, and the Massachusetts Institute of Technology was founded in 1861. Technology, like medicine, responded to an industrializing society by becoming more scientific and more professional. Kendall A. Birr, “Science in American Industry,” in Science and Society in the United States, edited by David D. VanTassel and Michael G. Hall (Homewood, Ill.: Dorsey Press, 1966), pp. 35-80; Robert V. Bruce, The Launching of Modern American Science, 1846-1876 (New York: Knopf, 1987); John W. Oliver, History of American Technology (New York: Ronald Press, 1956). "Technological Change." American Eras. Encyclopedia.com. (November 20, 2017). http://www.encyclopedia.com/history/news-wires-white-papers-and-books/technological-change
X-linked hypophosphatemia (XLH) is a genetic disorder that causes low levels of phosphorus in the blood and affects bones, muscles, and teeth. XLH causes the kidneys to eliminate an excessive amount of phosphate in the urine (known as phosphate wasting), which leads to low phosphate levels in the blood (hypophosphatemia). Phosphate is a mineral that is important for the normal formation of bones and teeth. XLH is typically an inherited disease. “X-linked” means that the disease is due to a defect in the PHEX gene on the X chromosome and can be passed to offspring. In rare cases, the disease is not inherited from a parent and is due to a new defect in the gene. The PHEX gene is involved in phosphate regulation in the body through the fibroblast growth factor 23 (FGF23) protein. XLH leads to an increased level of FGF23. Too much FGF23 prevents the body from retaining enough phosphate and results in low phosphorus levels (hypophosphatemia). Over time, hypophosphatemia leads to soft, weak bones that are more likely to fracture. Bones in the hips, legs, and feet are especially prone to fracture, which can result in pain and difficulty moving. XLH can also be referred to by other names, including hereditary hypophosphatemic rickets, vitamin D-resistant rickets, or genetic rickets. XLH differs from other types of rickets because it cannot be treated by increasing vitamin D alone.
How do different types of mutations affect genes and the corresponding mRNAs and proteins? - Describe how duplications, deletions, inversions, and translocations can affect gene function, gene expression, and genetic recombination. Describe the same for transposable elements. - Describe how mutations arise and how environmental factors can increase mutation rate. - Cite examples of mutations that can be beneficial to organisms. - Interpret results from experiments to distinguish between different types of DNA rearrangements. - Distinguish between loss of function and gain of function mutations and their potential phenotypic consequences. - Predict the most likely effects on protein structure and function of null, reduction-of-function, overexpression, dominant-negative and gain-of-function mutations. - Compare the role of both loss and gain of function mutations in the origin of tumors - A clicker-based case study that untangles student thinking about the processes in the central dogma - Building a Model of Tumorigenesis: A small group activity for a cancer biology/cell biology course - Discovering Prokaryotic Gene Regulation by Building and Investigating a Computational Model of the lac Operon - Discovering Prokaryotic Gene Regulation with Simulations of the trp Operon - Exploration of the Human Genome by Investigation of Personalized SNPs - Follow the Sulfur: Using Yeast Mutants to Study a Metabolic Pathway - Homologous chromosomes? Exploring human sex chromosomes, sex determination and sex reversal using bioinformatics approaches - Linking Genotype to Phenotype: The Effect of a Mutation in Gibberellic Acid Production on Plant Germination - Predicting and classifying effects of insertion and deletion mutations on protein coding regions - Using computational molecular modeling software to demonstrate how DNA mutations cause phenotypes
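As a quick illustration of the insertion-and-deletion activity listed above, the following Python sketch (not taken from any of the linked activities; the sequence and insertion position are invented for the example) shows how a single-base insertion shifts the reading frame of a coding sequence:

def codons(seq):
    """Split a DNA sequence into successive 3-base codons, ignoring leftover bases."""
    usable = len(seq) - len(seq) % 3
    return [seq[i:i + 3] for i in range(0, usable, 3)]

original = "ATGGCTGAAACCTGA"                 # hypothetical coding sequence
mutated = original[:4] + "T" + original[4:]  # insert a single T after the 4th base

print(codons(original))  # ['ATG', 'GCT', 'GAA', 'ACC', 'TGA'] - in-frame stop at the end
print(codons(mutated))   # ['ATG', 'GTC', 'TGA', 'AAC', 'CTG'] - frame shifts, premature stop appears

Every codon downstream of the insertion changes, which is why frameshift mutations are usually severe loss-of-function changes, in contrast to in-frame insertions of three bases.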
Social Studies teachers often teach in isolation from the other content areas, but cross-curricular content helps students see the connection between class work and their everyday lives. Science and Social Studies content often overlaps; for instance, when addressing standards around human impact on the environment or the impact of weather patterns and geological events on people. How do governments and people prepare for these events? How does policy affect our planet? How can drought lead to conflict? The possibilities are endless. If you’re looking to connect social studies and science content, check out these useful resources: Students can take virtual tours of notable places thanks to the information and images posted on the National Park Service’s website. Students can read about landmarks and study the geography of different regions of the United States. It’s a great resource for combining a study of different landforms in science class to a deep dive into the history of a place in social studies class. Channel One News’ coverage of current events often demonstrates connections between science and social studies. Discussion questions, writing prompts and assessments draw students to analyze cause and effect relationships or judge the merits of proposed solutions to problems. Teachers and students can use the keyword search function at the top of each page to search for content related to science, health or geography topics, and much more. The National Oceanic and Atmospheric Administration has a website full of information for researchers. Students searching for topics to study on different spaces and places can find lots of interesting facts and details. The information on this site connects to both social studies and science instruction. NOAA’s website has pages dedicated to coastlines, fisheries, satellites and more. The iNaturalist website and app for mobile devices is a great way to connect science and social studies instruction. Students can record observations in nature and keep track in a digital field notebook. As students explore a community garden, historical site or landmark, they can also keep track of the animals and plant life they see during their exploration. The US Geological Survey has a terrific website that gives students access to maps and updated information on a variety of topics. The education section of their website includes links to online lectures and animations. Young people can also access featured topics and an area dedicated to resources for middle and high school students. Share some of the ways that you have connected social studies and science instruction in your classroom! Monica Burns is an EdTech & Curriculum Consultant and Apple Distinguished Educator. Visit her site ClassTechTips.comSee also:
The Earth makes a sound, quite literally. Different from things like winds rustling or waves crashing on the beach, the actual planet emits a permanent, low-frequency hum underneath the ceaseless noise the rest of its inhabitants produce. This droning sound cannot be heard by the human ear, but has been going on since the Earth was born. Now, scientists have managed to measure this humming, Live Science reports. Most movements that occur deep down under the Earth are not big enough for humans to feel them, with the exception of earthquakes. But the planet actually undergoes far more earthquakes globally than what most people know – as much as 500,000 tremors per year, according to the US Geological Survey. Of those, only around 100,000 are strong enough for those on the ground to feel, and a mere 100 are powerful enough to cause enough damage to structures. However, even without any earthquakes happening, there’s still a lot going on in the Earth’s outer layer. Since the 1990s, scientists have been studying Earth’s constant vibrations caused by microseismic activity, known as “free oscillation.” Free oscillation generates a hum that can be detected anywhere on land by seismometers. This constant vibrating sound has perplexed researchers for years, with some suggesting that the ebb and flow of ocean waves reaching down to the bottom of the sea were responsible for it, while others thought the sound was made by ocean waves colliding. In 2015, scientists found that both types of ocean movements actually contributed to Earth’s hum. Seismologists have recorded and measured the sound on land, and have now captured the sonic patterns from the depths of the seafloor. Scientists used spherical ocean seismometers to measure the vibration in the Indian Ocean. Between 2012 and 2013, 57 free-fall seismometers were deployed around La Réunion Island to the east of Madagascar, over an area measuring about 772 square miles. The scientists then isolated the sound created by ocean wave motions and currents, and found “very clear peaks” over 11 months, which had the same amplitude range as measurements taken on land in Algeria. This new development will contribute towards mapping more of the Earth’s interior. The study was published in Geophysical Research Letters.
According to experts, 80% of learning is visual, which means that if your child is having difficulty seeing clearly, his or her learning can be affected. This also goes for infants who develop and learn about the world around them through their sense of sight. To ensure that your children have the visual resources they need to grow and develop normally, their eyes and vision should be checked by an eye doctor at certain stages of their development. According to the American Optometric Association (AOA) children should have their eyes examined by an eye doctor at 6 months, 3 years, at the start of school, and then at least every 2 years following. If there are any signs that there may be a vision problem or if the child has certain risk factors (such as developmental delays, premature birth, crossed or lazy eyes, family history or previous injuries) more frequent exams are recommended. A child that wears eyeglasses or contact lenses should have his or her eyes examined yearly. Children’s eyes can change rapidly as they grow. Eye Exams in Infants: Birth - 24 Months A baby’s visual system develops gradually over the first few months of life. They have to learn to focus and move their eyes, and use them together as a team. The brain also needs to learn how to process the visual information from the eyes to understand and interact with the world. With the development of eyesight, comes also the foundation for motor development such as crawling, walking and hand-eye coordination. You can ensure that your baby is reaching milestones by keeping an eye on what is happening with your infant’s development and by ensuring that you schedule a comprehensive infant eye exam at 6 months. At this exam, the eye doctor will check that the child is seeing properly and developing on track and look for conditions that could impair eye health or vision (such as strabismus(misalignment or crossing of the eyes), farsightedness, nearsightedness, or astigmatism). Since there is a higher risk of eye and vision problems if your infant was born premature or is showing signs of developmental delay, your eye doctor may require more frequent visits to keep watch on his or her progress. Eye Exams in Preschool Children: 2-5 The toddler and preschool age is a period where children experience drastic growth in intellectual and motor skills. During this time they will develop the fine motor skills, hand-eye coordination and perceptual abilities that will prepare them to read and write, play sports and participate in creative activities such as drawing, sculpting or building. This is all dependent upon good vision and visual processes. This is the age when parents should be on the lookout for signs of lazy eye (amblyopia) - when one eye doesn’t see clearly, or crossed eyes (strabismus) - when one or both eyes turns inward or outward. The earlier these conditions are treated, the higher the success rate. Parents should also be aware of any developmental delays having to do with object, number or letter recognition, color recognition or coordination, as the root of such problems can often be visual. If you notice your child squinting, rubbing his eyes frequently, sitting very close to the tv or reading material, or generally avoiding activities such as puzzles or coloring, it is worth a trip to the eye doctor. Eye Exams in School-Aged Children: Ages 6-18 Undetected or uncorrected vision problems can cause children and teens to suffer academically, socially, athletically and personally. 
If your child is having trouble in school or afterschool activities there could be an underlying vision problem. Proper learning, motor development, reading, and many other skills are dependent upon not only good vision, but also the ability of your eyes to work together. Children that have problems with focusing, reading, teaming their eyes or hand-eye coordination will often experience frustration, and may exhibit behavioral problems as well. Often they don’t know that the vision they are experiencing is abnormal, so they aren’t able to express that they need help. In addition to the symptoms written above, signs of vision problems in older children include: - Short attention span - Frequent blinking - Avoiding reading - Tilting the head to one side - Losing their place often while reading - Double vision - Poor reading comprehension The Eye Exam In addition to basic visual acuity (distance and near vision) an eye exam may assess the following visual skills that are required for learning and mobility: - Binocular vision: how the eyes work together as a team - Peripheral Vision - Color Vision - Hand-eye Coordination The doctor will also examine the area around the eye and inside the eye to check for any eye diseases or health conditions. You should tell the doctor any relevant personal history of your child such as a premature birth, developmental delays, family history of eye problems, eye injuries or medications the child is taking. This would also be the time to address any concerns or issues your child has that might indicate a vision problem. If the eye doctor does determine that your child has a vision problem, they may discuss a number of therapeutic options such as eyeglasses or contact lenses, or an eye patch depending on the condition and the doctor’s specialty. Since some conditions are much easier to treat when they are caught early while the eyes are still developing, it is important to diagnose any eye and vision issues as early as possible. Following the guidelines for children’s eye exams and staying alert to any signs of vision problems can help your child to reach his or her potential.
Bacterial labyrinthitis; Serous labyrinthitis. Labyrinthitis is an ear disorder characterized by inflammation (irritation and swelling with presence of extra immune cells) of the canals of the inner ear (semicircular canals, labyrinth), which causes dizziness. Causes, incidence, and risk factors: The cause of labyrinthitis is unknown, but because it commonly occurs following otitis media (ear infection) or an upper respiratory infection (URI), it is thought to be a consequence of viral or bacterial infection. It may also follow allergy, cholesteatoma, or the ingestion of certain drugs that are toxic to the inner ear. The semicircular canals of the inner ear (labyrinth) become inflamed. This disrupts their function, including the regulation of balance. Risk factors include the following: - recent viral illness, respiratory infection, or ear infection - use of prescription or nonprescription drugs (especially aspirin) - a history of allergy, smoking, or heavy alcohol consumption. Symptoms include: - abnormal sensation of movement (vertigo), which may be accompanied by nausea and vomiting, may be severe, and may be continuous for up to a week at a time; severe episodes may be followed by transient episodes for several weeks - loss of balance, especially falling toward the affected side - hearing loss in the affected ear - ringing or other noises in the ears (tinnitus) - involuntary eye movements (nystagmus). Signs and tests: An ear examination may not reveal any changes. Differentiation from other causes of dizziness or vertigo may include: - head CT scan or MRI scan - hearing testing (audiology/audiometry) - caloric stimulation (tests reflexes of the eye) - EEG, evoked auditory potential studies. Labyrinthitis usually runs its course over a few weeks. However, symptoms may need treatment. Your doctor may prescribe an antibiotic to treat the infection. Medications that may reduce symptoms include the following: - anti-emetics (antinausea medications). To prevent worsening of symptoms during episodes of labyrinthitis, try the following: - Keep still and rest during attacks. - Gradually resume activity. - Avoid sudden position changes. - Do not try to read during attacks. - Avoid bright lights. Assistance with walking may be needed during attacks. Avoid hazardous activities such as driving, operating heavy machinery, and climbing until one week after symptoms have disappeared. Recovery is usually spontaneous and hearing usually returns to normal. Complications may include: - injury to self or others during attacks of vertigo - permanent hearing loss in the affected ear (rare) - spread of inflammation to other ear areas or to the brain (rare). Calling your health care provider: Call your health care provider if dizziness, vertigo, loss of balance, or other symptoms of labyrinthitis are present. Also call if hearing loss occurs. Urgent or emergency symptoms include convulsions, fainting, persistent vomiting, or vertigo accompanied by fever of more than 101 degrees Fahrenheit. Prompt treatment of respiratory infections and ear infections may help prevent labyrinthitis. by Armen E. Martirosyan, M.D. All ArmMed Media material is provided for information only and is neither advice nor a substitute for proper medical care. Consult a qualified healthcare professional who understands your particular history for individual concerns.
May 10, 2003. Prelaunch at Kennedy Space Center. On Mars Exploration Rover 1 (MER-1), airbags are installed on the lander. The airbags will inflate to cushion the landing of the spacecraft on the surface of Mars. When it stops bouncing and rolling, the airbags will deflate and retract, the petals will open to bring the lander to an upright position, and the rover will be exposed. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.
BY DAVID PLOTKIN, ANTIC CONTRIBUTING EDITOR New Owners Column Lesson 14: Sound If you play computer games, you know what kind of sounds your computer can make. Sound is an important way to hold the user's attention. Entirely silent games soon lose their appeal. While machine language is required for really complex soundmaking - such as you'd find in Music Studio (Activision) or Music Construction Set (Electronic Arts) - there's quite a bit you can do with Atari BASIC. You can create simple, constant sounds that give your program atmosphere without slowing it down. The SOUND command is passed to the POKEY chip, a special sound chip in your Atari which also handles the serial I/O bus and the keyboard. The sound you create, say a note or hiss, will play until you turn it off with another SOUND command. Because the sound chip is separate from the main processor, your BASIC program's speed will not be greatly affected by whether the sound is on or off. Your Atari can produce four sounds at once because it has four independent voices (sound channels). Normally each voice has a range of 256 different frequencies (or tones, notes, pitches). Figure 1 shows how these frequency values correspond to the standard musical scale. The available frequencies stretch over five octaves. Each voice has 16 different volume (loudness) levels, from a whisper to a roar. Finally, there are eight different levels of distortion to choose from. While your Atari can play pure musical notes, it can also make other sounds, such as a low rumble or a high-speed "engine" noise. The various distortions available can be combined to produce some very interesting noises. The simplest way to produce noise on your Atari is the SOUND command, which is used in the following format: SOUND voice, frequency, distortion, volume. Voice is represented by a number between 0 and 3. Frequency is the pitch of the note you want to play, 0 to 255 as shown in Figure 1. When the frequency number increases, the note gets lower. Distortion values must be even numbers between 0 and 14. Distortion value 0 is a rumble, 2 and 6 sound like a racing car engine, 4 sounds like heavy machinery or an idling engine, 8 is like a rocket, 10 and 14 are pure musical notes, while 12 sounds like a high-speed engine. Volume can be between 0 and 15, with 0 being off. If you use more than one voice, try not to let the sum of the volumes exceed 32, or else the sound quality will deteriorate. POKE YOUR SOUND You can also use POKEs to control the sound registers directly. POKE works much faster than SOUND, so you have more control over your sound effects. Sound registers are memory locations which control the same properties as the SOUND command:
Memory Location - Function
53760 - Frequency of voice 1 (SOUND 0)
53761 - Distortion and volume of voice 1
53762 - Frequency of voice 2 (SOUND 1)
53763 - Distortion and volume of voice 2
53764 - Frequency of voice 3 (SOUND 2)
53765 - Distortion and volume of voice 3
53766 - Frequency of voice 4 (SOUND 3)
53767 - Distortion and volume of voice 4
The even-numbered memory locations control the frequency of the sound, which is identical to the second number in the SOUND statement. For example, SOUND 0,100,10,8 is the same as POKE 53760,100. The odd-numbered memory locations (53761, 53763, 53765, 53767) take care of the distortion and volume for each voice, using this formula: POKE value = 16 * DISTORTION + VOLUME. Here DISTORTION is the third number in the SOUND statement and VOLUME is the fourth. Therefore the equivalent POKE in our example is 16 * 10 + 8, or 168. 
So to duplicate the above SOUND command, type POKE 53760,100:POKE 53761,168. You can turn off a note by placing a zero in either the FREQUENCY or the DISTORTION/VOLUME register. Listing 1 is a sound organ. Type in Listing 1, NEWOWN14.BAS, check it with TYPO II and SAVE a copy to disk before you RUN it. The onscreen display will show you which keys should be pressed to play a musical scale. The program continuously executes its loop, counting and reading keys and keeping track of which voices are available. All the while, the sounds you have fingered are playing. If you want to try different sounds, change the note values in the DATA statements and use other notes from Figure 1. Listing 2, SOUNDMEN.BAS, gives you a menu from which you can choose a sound effect. Some of these sounds are quite complex and can be astonishingly realistic. Such sounds are achieved by rapidly varying the frequency, distortion, and volume in the SOUND statements. This technique ties up the main 6502 processor chip, bringing other computing pretty much to a halt. Experiment with varying the numbers in SOUND statements to get complex custom sounds of your own. And congratulations on graduating from the New Owners Column. These lessons should give you a good start in BASIC programming on the 8-bit Atari computers. (For more details on programming Atari sound, a good sourcebook is De Re Atari. Originally published by Atari, copies of this out-of-print reference guide are often available from Antic mail-order advertisers. -ANTIC ED) IF YOU'D ENJOY SEEING MORE ARTICLES LIKE THIS ONE, CIRCLE 209 ON THE READER SERVICE CARD.
A new study has shown that children who are overweight at any point before the age of 12 are more likely to be overweight by the time they reach age 12. The study included 1000 US children born in 1991, around the time the "obesity epidemic" started gaining attention. Researchers took measurements of the children at 7 different times in their childhood: at 2 years, at 3 years, at 4½ years, at 7 years, at 9 years, at 11 years, and at age 12. They found that the more times a child was recorded as being medically overweight, the more likely he or she was to be overweight at age 12. One overweight measurement meant the child was 25 times more likely to be overweight; 3 overweight measurements made the child 374 times more likely to be overweight. Philip K. Nader, MD, of the University of California, San Diego School of Medicine, said, "These results suggest that any time a child reaches the 85th percentile for BMI [body mass index] may be an appropriate time for an intervention." Percentiles and their corresponding weight levels are listed in a table from the Centers for Disease Control and Prevention.
Risk Factors for Overweight/Obese Children and Adolescents
*One study showed that approximately 60% of overweight children had at least one cardiovascular risk factor, such as high cholesterol or high blood pressure; in comparison, only 10% of children with healthy weight had at least one risk factor. Additionally, 25% of overweight children had 2 or more risk factors. Ms. Farley is a freelance medical writer based in Wakefield, RI.
These clips show Cog orienting its head and neck to a visual stimulus. The eyes are moving to look at moving objects in the field of view. Whenever the eyes move to their new target, the neck moves to point the head toward that same target. The first clip is from the new revision of Cog's head. This orientation clip is from Cog's old head. Human eyes move as the result of one of four mechanisms. Two of these mechanisms are under voluntary control (saccadic movements and smooth pursuit movements), while two are under involuntary control (the vestibulo-ocular reflex and non-stabilizing micromovements). The saccadic movements are high-speed movements that cause the eye to jump to a new location approximately three times per second. This short clip shows the first tests of the new active vision system as it saccades to random positions. Three of these heads were developed, one for Cog, and two to serve as desktop development platforms. The eyes are travelling at about one-half their maximum velocity.
Saccade to Motion
In this clip, Cog has been programmed to attend to moving objects. This motion detection operates by subtracting consecutive images and then using region growing to identify the boundaries of moving objects. In this video clip, you can see the eyes saccade to the moving stuffed animal. The second type of voluntary eye motion is smooth pursuit tracking. This clip shows Cog smoothly tracking a moving object that was placed in front of it. The tracking uses a correlation-based metric to determine where the desired object has moved in the visual field. One of the involuntary eye movements is the vestibulo-ocular reflex. This reflex serves to keep the eyes fixed on a target while the head moves (or is moved). In humans, this reflex is accomplished by two systems: a very tight feedback loop from the vestibular system to the eye muscles, which is active at high velocities, and a measurement of visual slip, which is active at slow velocities. We have implemented the high-velocity vestibular reflex on Cog. Using two rate gyroscopes, we can measure the angular velocity of the head and move the eyes to compensate for that motion. The clip below first shows the head being moved without the vestibular reflex. Notice that the eyes move back and forth with the head as it moves. The second part of the clip shows the head being moved with the vestibular reflex intact. Notice that instead of moving with the head, the eyes continue to point straight ahead regardless of how the head is moved. One visual task that infants are very good at is face detection. The face detection routine shown here was developed based on the ratio template work of Pawan Sinha. This clip shows the output of the face detection module. On the right side of the clip is the live video stream. On the left side is the same image, but with detected faces outlined. A red outline indicates a better match than a green outline. Notice that the face detection software is not sensitive to face motion, as can be seen when the face is occluded by the circular mountain picture shown in this clip. We would like Cog to be able to tell if someone is making eye contact with it. Using the face detection routines described above, we first locate a face in the peripheral camera. Using a learned sensory-motor mapping, Cog moves its eyes to look at that person. We then can use a second learned sensory-motor mapping to extract an image of that person's eyes. This clip shows the image processing steps used to find eyes.
The upper right shows the raw video image and the upper left shows the outlined face images. The lower left shows the prefilter results, and the lower right shows the extracted image of the eye. The image of the eye is not stable because the person in the video is moving almost constantly. One of the long-range tasks that we would like the robot to be able to perform is to imitate gestures and motion. The following clips show a very simple example of imitation of head motion. The output from the face detection module is passed to a tracking module, which then characterizes head motions as being either horizontal "no" motions or vertical "yes" motions. The first clip shows one of the small active vision development platforms imitating head motions. Notice that the head responds only to a head nodding; similar motions with non-face stimuli do not provoke a response. The second clip shows Cog imitating the head motions of a person. The third clip shows Cog imitating the head motions of a toy cow. The stuffed animal is detected as a face, and the robot responds to it in the same way that it responds to a person. The fourth clip shows Cog imitating a second stuffed animal (Mickey Mouse). The robot only responds to the face of the toy, not to the motion of the toy.
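The motion-detection step described above (subtract consecutive frames, then group the changed pixels into regions) can be sketched in a few lines of NumPy. This is a simplified illustration, not Cog's actual vision code; the function names, the threshold value, and the bounding-box shortcut standing in for true region growing are all assumptions made for the example.

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Mark pixels that changed between two grayscale frames (frame differencing)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def motion_target(mask):
    """Crude stand-in for region growing: the bounding box of all changed pixels,
    whose center an attention system could then saccade toward."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))

# Toy usage with two synthetic "frames"
prev = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 70:90] += 100   # simulate a moving object (uint8 wrap-around is harmless here)
print(motion_target(motion_mask(prev, curr)))
```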
We're going to explore what happens when we change the angle that generates the spiral. Play with the Angle slider in the Spiralizer applet below: At very low angles, the applet creates a simple spiral. But as you increase the angle, all sorts of interesting patterns emerge. It can become difficult to determine the order of the dots. Click on "Connect Dots" to make the connections easier to see. If you set the angle to 180 degrees, the point will rotate to the other side, and then back again at the next iteration, and so on, oscillating with a period of 2. If you set the angle to 90 degrees, the dots will grow in a square pattern, that is, with a period of 4. The periodicity can be determined by dividing the angle of a full circle, 360 degrees, by the rotation angle. For example, 360 / 90 = 4. The progression of the points is just 0, 90, 180, 270 degrees. Then it returns to 360, which is the same as 0 degrees, and the pattern begins again. There is another angle that will generate period-4 patterns: 270 degrees. This is 360 degrees - 90 degrees. We can see why this works by examining the progression of the angles. Start with iteration 0 at 0 degrees. The first iteration takes the point 3/4 of the way around the circle, or 270 degrees. Then the second iteration adds 270 degrees, which is 540 degrees. This is the same as 360 + 180, so it's halfway around the circle. The next iteration takes the point to 810 degrees, which is the same angle as 720 + 90, or 2 and 1/4 times around the circle. In general, the periodicity P = 360 / A, where A is the rotation angle. If we want to find a specific periodicity, we can rearrange the equation to solve for A = 360 / P. One interesting thing to observe is that when the angle is set exactly to a value that repeats perfectly, such as 90 degrees, the points line up in straight lines. But if you adjust the angle just a tiny bit larger or smaller than one of these perfect values, then the dots start to twist into spirals. You can nudge the angle 0.1 degrees at a time with the arrow keys to see the effect of perturbing the angle just slightly. There are many other higher-order periodicities that emerge in between the simple angles that create triangles, squares, pentagons, and hexagons, and we will explore the more complex details in the next section, when we relate this behavior to the periodicities of the Mandelbrot Set.
Fibonacci Packing
The Fibonacci Sequence appears in many plants, and can be seen in the distribution of the seeds in the sunflower below. A sunflower pattern can be created by a simple repetitive process similar to how the Spiralizer forms its patterns. Imagine the flower operating like the Spiralizer. It creates a seed at the origin, and then it rotates by a certain angle and creates another seed. Then it rotates again by the same angle and forms a third seed. It keeps rotating by the same angle and adding seeds, which all keep growing in scale and distance from the center. How does the angle affect the outcome of the pattern of seeds? As we saw with the Spiralizer, when you set the angle to be a simple fraction of the whole way around the circle, the dots (or seeds) line up in radial arms. This is a simple pattern, but it is NOT the most efficient way to pack a lot of seeds into a given area. You can see the percentage of space filled by dots at the bottom of the Spiralizer. For instance, when the angle of rotation is 90 degrees, the dots fall in a period-4 pattern, and the percentage of the space filled by dots is 11.436%.
This is a relatively small proportion, and is a poor use of space. When the angle of rotation is not simple like 1/2, 1/3, or 1/4, but instead is an irrational fraction of the circle, then the angle never quite repeats itself, and much more complex patterns are possible. Sometimes these arrangements allow a much more efficient packing of seeds into a given area. When the angle of rotation is set to be 360/φ, that is, the fraction of the circle corresponding to the Golden Ratio, then the seeds pack in the most efficient way possible. Try it with the Spiralizer above! What is the "Golden Angle" for this optimal packing? 360 / 1.61803399 is approximately equal to 222.5 degrees. Set the angle in the Spiralizer to 222.5 (you might need to use the arrow keys to set it precisely), turn the drawing speed down about half way, and click "Connect Dots" to illustrate how a sunflower arranges its seeds. One amazing thing to observe is that in systems that use this packing system (many flowers, pinecones, strawberries, pineapples, artichokes, etc.) you can often find the Fibonacci Sequence. This is illustrated in the picture of the sunflower above. The pattern forms intersecting spirals where the number of seeds in the counter-clockwise spiral is part of the Fibonacci Sequence, and the number of seeds in the clockwise spiral is the next highest Fibonacci Number. This is often difficult to count, and sometimes it is not exactly correct, but in general the ratio of the seeds counted in one direction to the number of seeds in the other direction is close to the Golden Ratio φ. © Fractal Foundation.
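As a rough illustration of the seed-placement rule described above, here is a short Python sketch (not part of the Fractal Foundation applet). The rotation angle of about 222.5 degrees comes from the text; the radial growth law r = sqrt(k), known as Vogel's model, is an assumption chosen so that each seed claims roughly equal area, and the applet's exact growth rule may differ.

```python
import math

GOLDEN_ANGLE_DEG = 360 / 1.61803399      # ~222.5 degrees, as in the text

def spiral_points(n_points, angle_deg=GOLDEN_ANGLE_DEG):
    """Place points by repeatedly rotating by a fixed angle and stepping outward."""
    points = []
    for k in range(n_points):
        theta = math.radians(k * angle_deg)
        r = math.sqrt(k)                 # assumed radial law (Vogel's model)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# A "simple" angle such as 90 degrees lines the points up in four radial arms
# (period 4 = 360 / 90), while the golden angle never repeats and fills space evenly.
for test_angle in (90.0, GOLDEN_ANGLE_DEG):
    pts = spiral_points(200, test_angle)
    first = [(round(x, 2), round(y, 2)) for x, y in pts[:3]]
    print(f"angle = {test_angle:.1f} degrees, first points: {first}")
```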
This documentary dealt with a species of fox living in Israel. The program was presented by scientists who observed the foxes in their natural habitats, in a semi-desert environment, and made an evolutionary claim about the origin of birds. In one scene, where foxes stole and ate the young from a bird's nest, it was suggested that the alleged evolution of birds was as old as the time of reptiles and dinosaurs. One feature of this claim by a scientist who researches these foxes was particularly striking: the way that the theory of evolution was portrayed not as a hypothesis but as a proven, incontrovertible fact. The fact is that there exists no scientific evidence to support claims about the evolution of birds. The fact that these statements were put forward as definitive pronouncements stems from National Geographic TV's own prejudices. In fact, the origin of birds, which cannot be explained in terms of evolution, deals a heavy blow to the theory. Birds possess an entirely different structure from that of reptiles, which are claimed to be their ancestors. Lungs which allow a one-way flow of air, wings, and the hollow bones which make it possible for the animals to remain in the air by reducing their weight are structures unique to birds, and are not found in reptiles. Furthermore, the bird wing and lung are structures which come together within a particular organizational system, and cannot function in the absence of any one of their parts. This feature, known as irreducible complexity, requires that all the components should be present at the same time, in a suitable organizational system and functioning flawlessly. Such a structure can only be explained by means of creation. In the same way that irreducible complexity clearly and definitively demonstrates creation, it also deals a lethal blow to the theory of evolution. That is because when one component is missing the organ in question is just a collection of cells unable to perform its function, and the theory of evolution envisages such structures becoming vestigial over time. The impossibility of irreducibly complex structures forming gradually, despite this process of vestigialisation, is apparent. That is because there is no mechanism in nature which can determine a suitable system of organization beforehand, preserve all the components except for one even if all these are ready, or which can await the completion of the missing part. Looking at all these facts, it can be seen how mistaken it is to support claims concerning bird evolution. Irreducible complexity definitively refutes the theory of evolution, which people still seek to keep alive although its invalidity became apparent years ago, and confirms the fact that God has created all living things by showing that the origin of living things can be explained in terms of intelligent design. In its subsequent broadcasts, we advise National Geographic TV to give more objective and consistent accounts rather than providing fanatical support for the claims of Darwinism. (For further details on the origin of birds, see http://www.darwinismrefuted.com/natural_history_2_01.html )
NÎMES, capital of Gard department, S. France. Although a number of Jews took part in the revolt led by Hilderic, governor of Nîmes, against the Visigothic king Wamba in 673, there is no direct evidence that Jews were then living in the town itself. However, a community was established during the second half of the tenth century at the latest, and from 1009 there is documentary evidence of the existence of a synagogue. From the middle of the 11th century, the name Poium Judaicum was used to designate one of the seven hills enclosed within the wall of Nîmes (later Puech Juzieu, etc.; in 1970 the promenade of Mont-Duplan); the Jewish cemetery was situated there. Toward the close of the 11th century, an entire quarter of the town was known as Burgus Judaicus (later Bourg-Jézieu). At the beginning of the 13th century, the community appears to have consisted of about 100 families. Although a church synod held in Nîmes in about 1284 decreed severe measures against the Jews, the bishop of Nîmes, who had authority over the Jews of the town, was nevertheless able to protect them, even from King *Philip IV the Fair, who had ordered the imprisonment of several Jews. But the bishop could not prevail against the royal expulsion order of 1306 which, in Nîmes as elsewhere, was accompanied by the confiscation of all their belongings. When the Jews returned to France in 1359, the Nîmes municipal council allocated them the Rue de Corrégerie Vieille (the modern Rue de l'Etoile). After being harassed by the Christians there, they obtained a new quarter in the Rue Caguensol (part of the Rue Guizot) and the Rue de la Jésutarie or Juiverie (Rue Fresque). Shortly afterward they moved yet again, to the Garrigues quarter. There the 1367 census recorded the only three houses in the town (out of a total of 1,400) that were owned by Jews. This community ceased to exist in 1394, after the general expulsion of the Jews from France. In a letter to *Abraham b. David of Posquières – who lived in Nîmes long enough to be sometimes named after that town – Moses b. Judah of Béziers stressed the superiority of the yeshivah of Nîmes over all the others in southern France, comparing it to "the interior of the Temple, the seat of the Sanhedrin, from where knowledge goes forth to Israel." Other than Abraham b. David, the only scholar of the town who is known is his uncle, Judah b. Abraham. The municipal library of Nîmes possesses a rich collection of medieval Hebrew manuscripts, several of French origin, one of the richest such collections in the French provinces; all these volumes were obtained from the Carthusians of Villeneuve-lès-Avignon. From the 17th century, some Jews of *Comtat Venaissin went to trade in Nîmes and a few of them attempted to settle there; the parlement of *Toulouse ordered them to leave in 1653 and again in 1679. From the end of the 17th century, the Jews obtained the right to buy and sell in Nîmes for three weeks or a month in every season. Even though this concession was abolished in 1745 and 1754, some Jews succeeded in settling in the town during the second half of the 18th century. The community of 30–40 families appointed a rabbi, Elie Espir from *Carpentras, and set up a small synagogue in a private house. After a split in the community in 1794, a new synagogue (which has been in use ever since) was built in the Rue Roussy, completed in 1796. During the Reign of Terror, three Jews of Nîmes were imprisoned; one of them was subsequently executed.
In 1808, when the *consistories were established, the community was affiliated to the consistory of *Marseilles.
Bibliography: Gross, Gal Jud, 395–9; J. Simon, in: REJ, 3 (1881), 225–37; idem, in: Nemausa, 2 (1884/85), 97–124; S. Kahn, Notice sur les Israélites de Nîmes (1901); idem, in: REJ, 67 (1914), 225–61; J. Vieilleville, Nîmes… (1941); H. Noël, in: Revue du Midi, 11 (1897), 182–91; B. Blumenkranz, Juifs et chrétiens… (1960), index; Z. Szajkowski, Analytical Franco-Jewish Gazetteer (1966), 190.
Source: Encyclopaedia Judaica. © 2008 The Gale Group. All Rights Reserved.
DNA nanotechnology is the design and manufacture of artificial nucleic acid structures for technological uses. In this field, nucleic acids are used as non-biological engineering materials for nanotechnology rather than as the carriers of genetic information in living cells. Researchers in the field have created static structures such as two- and three-dimensional crystal lattices, nanotubes, polyhedra, and arbitrary shapes, as well as functional devices such as molecular machines and DNA computers. The field is beginning to be used as a tool to solve basic science problems in structural biology and biophysics, including applications in crystallography and spectroscopy for protein structure determination. Potential applications in molecular scale electronics and nanomedicine are also being investigated. The conceptual foundation for DNA nanotechnology was first laid out by Nadrian Seeman in the early 1980s, and the field began to attract widespread interest in the mid-2000s. This use of nucleic acids is enabled by their strict base pairing rules, which cause only portions of strands with complementary base sequences to bind together to form strong, rigid double helix structures. This allows for the rational design of base sequences that will selectively assemble to form complex target structures with precisely controlled nanoscale features. A number of assembly methods are used to make these structures, including tile-based structures that assemble from smaller structures, folding structures using the DNA origami method, and dynamically reconfigurable structures using strand displacement techniques. While the field's name specifically references DNA, the same principles have been used with other types of nucleic acids as well, leading to the occasional use of the alternative name nucleic acid nanotechnology.
Properties of nucleic acids
Nanotechnology is often defined as the study of materials and devices with features on a scale below 100 nanometers. DNA nanotechnology, specifically, is an example of bottom-up molecular self-assembly, in which molecular components spontaneously organize into stable structures; the particular form of these structures is induced by the physical and chemical properties of the components selected by the designers. In DNA nanotechnology, the component materials are strands of nucleic acids such as DNA; these strands are often synthetic and are almost always used outside the context of a living cell. DNA is well-suited to nanoscale construction because the binding between two nucleic acid strands depends on simple base pairing rules which are well understood, and form the specific nanoscale structure of the nucleic acid double helix. These qualities make the assembly of nucleic acid structures easy to control through nucleic acid design. This property is absent in other materials used in nanotechnology, including proteins, for which protein design is very difficult, and nanoparticles, which lack the capability for specific assembly on their own. The structure of a nucleic acid molecule consists of a sequence of nucleotides distinguished by which nucleobase they contain. In DNA, the four bases present are adenine (A), cytosine (C), guanine (G), and thymine (T). Nucleic acids have the property that two molecules will only bind to each other to form a double helix if the two sequences are complementary, meaning that they form matching sequences of base pairs, with A only binding to T, and C only to G.
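To make the base-pairing rule concrete, here is a minimal Python sketch (purely illustrative; the sequence shown is an arbitrary example, not from any published design) that computes the complementary strand a designer would need for two strands to hybridize.

```python
# Toy illustration of the Watson-Crick pairing rule stated above (A-T, C-G).
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(sequence):
    """Return the sequence a strand would pair with in an antiparallel duplex."""
    return "".join(COMPLEMENT[base] for base in reversed(sequence))

def can_hybridize(strand_a, strand_b):
    """True if the two strands are exact Watson-Crick complements of each other."""
    return strand_b == reverse_complement(strand_a)

sticky_end = "ATGCCG"                      # arbitrary example sequence
print(reverse_complement(sticky_end))      # CGGCAT
print(can_hybridize(sticky_end, "CGGCAT")) # True
```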
Because the formation of correctly matched base pairs is energetically favorable, nucleic acid strands are expected in most cases to bind to each other in the conformation that maximizes the number of correctly paired bases. The sequences of bases in a system of strands thus determine the pattern of binding and the overall structure in an easily controllable way. In DNA nanotechnology, the base sequences of strands are rationally designed by researchers so that the base pairing interactions cause the strands to assemble in the desired conformation. While DNA is the dominant material used, structures incorporating other nucleic acids such as RNA and peptide nucleic acid (PNA) have also been constructed. DNA nanotechnology is sometimes divided into two overlapping subfields: structural DNA nanotechnology and dynamic DNA nanotechnology. Structural DNA nanotechnology, sometimes abbreviated as SDN, focuses on synthesizing and characterizing nucleic acid complexes and materials that assemble into a static, equilibrium end state. On the other hand, dynamic DNA nanotechnology focuses on complexes with useful non-equilibrium behavior such as the ability to reconfigure based on a chemical or physical stimulus. Some complexes, such as nucleic acid nanomechanical devices, combine features of both the structural and dynamic subfields. The complexes constructed in structural DNA nanotechnology use topologically branched nucleic acid structures containing junctions. (In contrast, most biological DNA exists as an unbranched double helix.) One of the simplest branched structures is a four-arm junction that consists of four individual DNA strands, portions of which are complementary in a specific pattern. Unlike in natural Holliday junctions, each arm in the artificial immobile four-arm junction has a different base sequence, causing the junction point to be fixed at a certain position. Multiple junctions can be combined in the same complex, such as in the widely used double-crossover (DX) motif, which contains two parallel double helical domains with individual strands crossing between the domains at two crossover points. Each crossover point is itself topologically a four-arm junction, but is constrained to a single orientation, as opposed to the flexible single four-arm junction, providing a rigidity that makes the DX motif suitable as a structural building block for larger DNA complexes. Dynamic DNA nanotechnology uses a mechanism called toehold-mediated strand displacement to allow the nucleic acid complexes to reconfigure in response to the addition of a new nucleic acid strand. In this reaction, the incoming strand binds to a single-stranded toehold region of a double-stranded complex, and then displaces one of the strands bound in the original complex through a branch migration process. The overall effect is that one of the strands in the complex is replaced with another one. In addition, reconfigurable structures and devices can be made using functional nucleic acids such as deoxyribozymes and ribozymes, which are capable of performing chemical reactions, and aptamers, which can bind to specific proteins or small molecules.
Structural DNA nanotechnology
Structural DNA nanotechnology, sometimes abbreviated as SDN, focuses on synthesizing and characterizing nucleic acid complexes and materials where the assembly has a static, equilibrium endpoint.
The nucleic acid double helix has a robust, defined three-dimensional geometry that makes it possible to predict and design the structures of more complicated nucleic acid complexes. Many such structures have been created, including two- and three-dimensional structures, and periodic, aperiodic, and discrete structures. Small nucleic acid complexes can be equipped with sticky ends and combined into larger two-dimensional periodic lattices containing a specific tessellated pattern of the individual molecular tiles. The earliest example of this used double-crossover (DX) complexes as the basic tiles, each containing four sticky ends designed with sequences that caused the DX units to combine into periodic two-dimensional flat sheets that are essentially rigid two-dimensional crystals of DNA. Two-dimensional arrays have been made from other motifs as well, including the Holliday junction rhombus lattice, and various DX-based arrays making use of a double-cohesion scheme. The top two images at right show examples of tile-based periodic lattices. Two-dimensional arrays can be made to exhibit aperiodic structures whose assembly implements a specific algorithm, exhibiting one form of DNA computing. The DX tiles can have their sticky end sequences chosen so that they act as Wang tiles, allowing them to perform computation. A DX array whose assembly encodes an XOR operation has been demonstrated; this allows the DNA array to implement a cellular automaton that generates a fractal known as the Sierpinski gasket. The third image at right shows this type of array. Another system has the function of a binary counter, displaying a representation of increasing binary numbers as it grows. These results show that computation can be incorporated into the assembly of DNA arrays. DX arrays have been made to form hollow nanotubes 4–20 nm in diameter, essentially two-dimensional lattices which curve back upon themselves. These DNA nanotubes are somewhat similar in size and shape to carbon nanotubes, and while they lack the electrical conductance of carbon nanotubes, DNA nanotubes are more easily modified and connected to other structures. One of many schemes for constructing DNA nanotubes uses a lattice of curved DX tiles that curls around itself and closes into a tube. In an alternative method that allows the circumference to be specified in a simple, modular fashion using single-stranded tiles, the rigidity of the tube is an emergent property. The creation of three-dimensional lattices out of DNA was the earliest goal of DNA nanotechnology, but this proved to be one of the most difficult to realize. Success using a motif based on the concept of tensegrity, a balance between tension and compression forces, was finally reported in 2009. Researchers have synthesized a number of three-dimensional DNA complexes that each have the connectivity of a polyhedron, such as a cube or octahedron, meaning that the DNA duplexes trace the edges of a polyhedron with a DNA junction at each vertex. The earliest demonstrations of DNA polyhedra were very work-intensive, requiring multiple ligations and solid-phase synthesis steps to create catenated polyhedra. Subsequent work yielded polyhedra whose synthesis was much easier. These include a DNA octahedron made from a long single strand designed to fold into the correct conformation, and a tetrahedron that can be produced from four DNA strands in a single step, pictured at the top of this article. 
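The XOR-encoding tile array mentioned earlier in this section is easier to picture if the logical rule is separated from the chemistry. The following Python sketch runs the same XOR rule as an abstract one-dimensional cellular automaton: each cell of a new row is the XOR of its two neighbors in the previous row, and the pattern that emerges is the Sierpinski gasket. It models only the computation, not the DNA tiles that physically implement it, and the grid size is an arbitrary choice.

```python
def xor_rows(n_rows=16):
    """Print the Sierpinski-gasket pattern produced by the XOR (rule-90) update."""
    row = [0] * n_rows + [1] + [0] * n_rows      # single seed cell in the middle
    for _ in range(n_rows):
        print("".join("#" if cell else "." for cell in row))
        # each new cell is the XOR of its left and right neighbors
        row = [row[i - 1] ^ row[(i + 1) % len(row)] for i in range(len(row))]

xor_rows()
```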
Nanostructures of arbitrary, non-regular shapes are usually made using the DNA origami method. These structures consist of a long, natural virus strand as a "scaffold", which is made to fold into the desired shape by computationally designed short "staple" strands. This method has the advantages of being easy to design, as the base sequence is predetermined by the scaffold strand sequence, and not requiring high strand purity and accurate stoichiometry, as most other DNA nanotechnology methods do. DNA origami was first demonstrated for two-dimensional shapes, such as a smiley face and a coarse map of the Western Hemisphere. Solid three-dimensional structures can be made by using parallel DNA helices arranged in a honeycomb pattern, and structures with two-dimensional faces can be made to fold into a hollow overall three-dimensional shape, akin to a cardboard box. These can be programmed to open and reveal or release a molecular cargo in response to a stimulus, making them potentially useful as programmable molecular cages. Nucleic acid structures can be made to incorporate molecules other than nucleic acids, sometimes called heteroelements, including proteins, metallic nanoparticles, quantum dots, and fullerenes. This allows the construction of materials and devices with a range of functionalities much greater than is possible with nucleic acids alone. The goal is to use the self-assembly of the nucleic acid structures to template the assembly of the nanoparticles hosted on them, controlling their position and in some cases orientation. Many of these schemes use a covalent attachment scheme, using oligonucleotides with amide or thiol functional groups as a chemical handle to bind the heteroelements. This covalent binding scheme has been used to arrange gold nanoparticles on a DX-based array, and to arrange streptavidin protein molecules into specific patterns on a DX array. A non-covalent hosting scheme using Dervan polyamides was used to arrange streptavidin proteins in a specific pattern on a DX array. Carbon nanotubes have been hosted on DNA arrays in a pattern allowing the assembly to act as a molecular electronic device, a carbon nanotube field-effect transistor. In addition, there are nucleic acid metallization methods, in which the nucleic acid is replaced by a metal which assumes the general shape of the original nucleic acid structure, and schemes for using nucleic acid nanostructures as lithography masks, transferring their pattern into a solid surface.
Dynamic DNA nanotechnology
Dynamic DNA nanotechnology often makes use of toehold-mediated strand displacement reactions. In this example, the red strand binds to the single-stranded toehold region on the green strand (region 1), and then in a branch migration process across region 2, the blue strand is displaced and freed from the complex. Reactions like these are used to dynamically reconfigure or assemble nucleic acid nanostructures. In addition, the red and blue strands can be used as signals in a molecular logic gate. Dynamic DNA nanotechnology focuses on creating nucleic acid systems with designed dynamic functionalities related to their overall structures, such as computation and mechanical motion. There is some overlap between structural and dynamic DNA nanotechnology, as structures can be formed through annealing and then reconfigured dynamically, or can be made to form dynamically in the first place.
DNA complexes have been made that change their conformation upon some stimulus, making them one form of nanorobotics. These structures are initially formed in the same way as the static structures made in structural DNA nanotechnology, but are designed so that dynamic reconfiguration is possible after the initial assembly. The earliest such device made use of the transition between the B-DNA and Z-DNA forms to respond to a change in buffer conditions by undergoing a twisting motion. This reliance on buffer conditions, however, caused all devices to change state at the same time. Subsequent systems could change states based upon the presence of control strands, allowing multiple devices to be independently operated in solution. Some examples of such systems are a "molecular tweezers" design that has an open and a closed state, a device that could switch from a paranemic-crossover (PX) conformation to a double-junction (JX2) conformation, undergoing rotational motion in the process, and a two-dimensional array that could dynamically expand and contract in response to control strands. Structures have also been made that dynamically open or close, potentially acting as a molecular cage to release or reveal a functional cargo upon opening. DNA walkers are a class of nucleic acid nanomachines that exhibit directional motion along a linear track. A large number of schemes have been demonstrated. One strategy is to control the motion of the walker along the track using control strands that need to be manually added in sequence. Another approach is to make use of restriction enzymes or deoxyribozymes to cleave the strands and cause the walker to move forward, which has the advantage of running autonomously. A later system could walk upon a two-dimensional surface rather than a linear track, and demonstrated the ability to selectively pick up and move molecular cargo. Additionally, a linear walker has been demonstrated that performs DNA-templated synthesis as the walker advances along the track, allowing autonomous multistep chemical synthesis directed by the walker. The synthetic DNA walkers' function is similar to that of the proteins dynein and kinesin.
Strand displacement cascades
Cascades of strand displacement reactions can be used for either computational or structural purposes. An individual strand displacement reaction involves revealing a new sequence in response to the presence of some initiator strand. Many such reactions can be linked into a cascade where the newly revealed output sequence of one reaction can initiate another strand displacement reaction elsewhere. This in turn allows for the construction of chemical reaction networks with many components, exhibiting complex computational and information processing abilities. These cascades are made energetically favorable through the formation of new base pairs, and the entropy gain from disassembly reactions. Strand displacement cascades allow for isothermal operation of the assembly or computational process, as opposed to traditional nucleic acid assembly's requirement for a thermal annealing step, where the temperature is raised and then slowly lowered to ensure proper formation of the desired structure. They can also support catalytic functionality of the initiator species, where less than one equivalent of the initiator can cause the reaction to go to completion. Strand displacement complexes can be used to make molecular logic gates capable of complex computation.
Unlike traditional electronic computers, which use electric current as inputs and outputs, molecular computers use the concentrations of specific chemical species as signals. In the case of nucleic acid strand displacement circuits, the signal is the presence of nucleic acid strands that are released or consumed by binding and unbinding events to other strands in displacement complexes. This approach has been used to make logic gates such as AND, OR, and NOT gates. More recently, a four-bit circuit was demonstrated that can compute the square root of the integers 0–15, using a system of gates containing 130 DNA strands. Another use of strand displacement cascades is to make dynamically assembled structures. These use a hairpin structure for the reactants, so that when the input strand binds, the newly revealed sequence is on the same molecule rather than disassembling. This allows new opened hairpins to be added to a growing complex. This approach has been used to make simple structures such as three- and four-arm junctions and dendrimers. DNA nanotechnology provides one of the few ways to form designed, complex structures with precise control over nanoscale features. The field is beginning to see application to solve basic science problems in structural biology and biophysics. The earliest such application envisaged for the field, and one still in development, is in crystallography, where molecules that are difficult to crystallize in isolation could be arranged within a three-dimensional nucleic acid lattice, allowing determination of their structure. Another application is the use of DNA origami rods to replace liquid crystals in residual dipolar coupling experiments in protein NMR spectroscopy; using DNA origami is advantageous because, unlike liquid crystals, they are tolerant of the detergents needed to suspend membrane proteins in solution. DNA walkers have been used as nanoscale assembly lines to move nanoparticles and direct chemical synthesis. Furthermore, DNA origami structures have aided in the biophysical studies of enzyme function and protein folding. DNA nanotechnology is moving towards potential real-world applications. The ability of nucleic acid arrays to arrange other molecules indicates its potential applications in molecular scale electronics. The assembly of a nucleic acid structure could be used to template the assembly of molecular electronic elements such as molecular wires, providing a method for nanometer-scale control of the placement and overall architecture of the device analogous to a molecular breadboard. DNA nanotechnology has been compared to the concept of programmable matter because of the coupling of computation to its material properties. In a study conducted by a group of scientists from the iNANO and CDNA centers at Aarhus University, researchers were able to construct a small multi-switchable 3D DNA Box Origami. The proposed nanoparticle was characterized by AFM, TEM and FRET. The constructed box was shown to have a unique reclosing mechanism, which enabled it to repeatedly open and close in response to a unique set of DNA or RNA keys. The authors proposed that this "DNA device can potentially be used for a broad range of applications such as controlling the function of single molecules, controlled drug delivery, and molecular computing." There are potential applications for DNA nanotechnology in nanomedicine, making use of its ability to perform computation in a biocompatible format to make "smart drugs" for targeted drug delivery.
One such system being investigated uses a hollow DNA box containing proteins that induce apoptosis, or cell death, that will only open when in proximity to a cancer cell. There has additionally been interest in expressing these artificial structures in engineered living bacterial cells, most likely using the transcribed RNA for the assembly, although it is unknown whether these complex structures are able to efficiently fold or assemble in the cell's cytoplasm. If successful, this could enable directed evolution of nucleic acid nanostructures. Scientists at Oxford University reported the self-assembly of four short strands of synthetic DNA into a cage which is capable of entering cells and surviving for at least 48 hours. The fluorescently labeled DNA tetrahedra were found to remain intact in laboratory-cultured human kidney cells despite attack by cellular enzymes after two days. This experiment showed the potential of drug delivery inside living cells using the DNA ‘cage’. A team of researchers at MIT reported that a DNA tetrahedron was used to deliver RNA interference (RNAi) in a mouse model. Delivery of interfering RNA for treatment has shown some success using polymers or lipids, but there are limitations of safety and imprecise targeting, in addition to a short shelf life in the bloodstream. The DNA nanostructure created by the team consists of six strands of DNA that form a tetrahedron, with a single strand of RNA affixed to each of the six edges. The tetrahedron is further equipped with targeting ligands, three folate molecules, which lead the DNA nanoparticles to the abundant folate receptors found on some tumors. The results showed that expression of the gene targeted by the RNAi, luciferase, dropped by more than half. This study shows promise in using DNA nanotechnology as an effective tool to deliver treatment using the emerging RNA interference technology.
Habitat protection and captive breeding programs have rebuilt Hawaii’s nēnē goose population from the brink of extinction in the mid-1900s to approximately 1,300 individuals in 2013. Still listed under the Endangered Species Act, the nēnē is also protected by collaborative programs with landowners designed to bring the goose to full recovery.
American Peregrine Falcon
The U.S. population of peregrine falcons dropped from an estimated 3,900 in the mid-1940s to just 324 individuals in 1975, and the falcon was considered locally extinct in the eastern United States. Their comeback has been truly remarkable—today, there are approximately 3,500 nesting pairs.
El Segundo Blue Butterfly
By 1984, only about 500 of these butterflies remained. The butterfly has rebounded significantly, with an astonishing 20,000 percent comeback recorded in 2012. The resurgence of the El Segundo blue butterfly is an inspiring story of the Endangered Species Act’s ability to protect critical habitat.
Although it was once close to extinction, today the original Robbins’ cinquefoil population on a small, rugged site in New Hampshire’s White Mountains numbers about 14,000 plants, with 1,500 to 2,000 flowering individuals. In a remarkable win for the Endangered Species Act, Robbins’ cinquefoil was officially delisted in 2002.
By the early 1960s, the count of nesting bald eagles plummeted to about 480 in the lower 48 states. Today, with some 14,000 breeding pairs in the skies over North America, the bald eagle endures as a testament to the strength and undeniable moral correctness of the Endangered Species Act.
Southern Sea Otter
Sea otters once numbered in the thousands before the fur trade and other factors reduced their numbers to about 50 in 1914. Listed under the Endangered Species Act in 1977, this remarkable species rebounded to approximately 2,800 individuals between 2005 and 2010.
The whaling industry dramatically depleted humpback populations from a high of more than 125,000; by the mid-1960s, only 1,200 individuals swam in the North Pacific. That tiny population of humpbacks has swelled to more than 22,000 members today due to a strong recovery program implemented under the Endangered Species Act.
By the 1950s, the American alligator had been hunted and traded to near-extinction. Captive breeding and strong enforcement of habitat protections and hunting regulations have contributed to its resurgence. Alligators now number around 5 million from North Carolina through Texas, with the largest populations in Louisiana and Florida.
Brown pelicans were dramatically impacted by habitat destruction and DDT. Driven to extinction in Louisiana, pelicans have made a dramatic comeback under the Endangered Species Act; in 2004, the population in Louisiana numbered 16,500 nesting pairs. Thanks to ambitious reintroduction programs, the brown pelican was fully delisted in 2009.
Green Sea Turtle
In 1990, fewer than fifty green sea turtles were documented nesting at the Archie Carr National Wildlife Refuge on Florida’s east coast. This 20-mile stretch of beach hosted more than 10,000 green sea turtle nests in 2013, making this one of the greatest conservation success stories of our time.
A Rubik's Cube consists of 27 smaller cubes. What happens if you glue 8 of those together to form a 2x2x2 cube? With a Rubik's Cube you can also get 6 cubes: 8, 8, 8, 1, 1, 1, and 13 cubes: 8, 8, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1. You can easily "make up" questions like these, for example: "How can a cube be cut into 5 cubes (not necessarily the same size)?" First, divide the cube into 64 smaller cubes; stick 27 together twice and 8 together once; you have: 27, 27, 8, 1, 1.
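Read as pure counting questions (can the number of unit cubes be written as a sum of smaller cube numbers?), these puzzles can be checked with a short brute-force search. The Python sketch below is only illustrative: it verifies the arithmetic of the decompositions above and deliberately ignores whether the pieces could actually be packed back together geometrically.

```python
def cube_sum_partitions(total, max_side, current=None):
    """Yield lists of side lengths, in non-increasing order, whose cubes sum to `total`."""
    if current is None:
        current = []
    if total == 0:
        yield list(current)
        return
    for side in range(min(max_side, round(total ** (1 / 3)) + 1), 0, -1):
        if side ** 3 <= total:
            current.append(side)
            yield from cube_sum_partitions(total - side ** 3, side, current)
            current.pop()

# Reproduce the decompositions from the text (pieces must be smaller than the big cube):
for sides in cube_sum_partitions(27, 2):       # 3x3x3 = 27 unit cubes
    if len(sides) in (6, 13):
        print(len(sides), "cubes:", [s ** 3 for s in sides])

for sides in cube_sum_partitions(64, 3):       # 4x4x4 = 64 unit cubes
    if len(sides) == 5:
        print(len(sides), "cubes:", [s ** 3 for s in sides])
```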
As a pot of water is heated and its temperature goes up, more and more water vapor is produced above the surface. That’s because more and more of the surface molecules gain enough energy to leap off into the air. The increasing amount of water vapor carries off an increasing amount of energy that could otherwise go into raising the water’s temperature. Moreover, the closer the water gets to its boiling temperature, the more energy each water vapor molecule carries off, so the more important it becomes not to lose them. A pot lid partially blocks the loss of all those molecules. The tighter the lid, the more hot molecules are retained in the pot and the sooner the water will boil. Your point, that a lid increases the pressure inside the pot as in a pressure cooker, thereby raising the boiling point and delaying the actual boiling, is correct in theory but insignificant in reality. Even a tightly fitting, hefty one-pound lid on a ten-inch pot would raise the pressure inside by less than a tenth of a percent, which would in turn raise the boiling point by only four hundredths of a degree Fahrenheit. You could probably delay the boiling longer by watching the pot.
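Here is a rough numerical check of those two figures, written as a short Python calculation. The one-pound lid and ten-inch pot come from the text; the heat of vaporization and the Clausius-Clapeyron estimate for the boiling-point shift are standard textbook values, so treat the result as an order-of-magnitude check rather than an exact derivation.

```python
import math

lid_weight_lbf = 1.0          # one-pound lid, from the text
pot_diameter_in = 10.0        # ten-inch pot, from the text
atmospheric_psi = 14.696

area_in2 = math.pi * (pot_diameter_in / 2) ** 2        # ~78.5 square inches
extra_pressure_psi = lid_weight_lbf / area_in2         # ~0.013 psi
fraction = extra_pressure_psi / atmospheric_psi        # ~0.09 percent of one atmosphere

# Clausius-Clapeyron slope near 100 C: dT/dP = R*T^2 / (L*P)
R = 8.314          # J/(mol K)
T = 373.15         # K, normal boiling point of water
L = 40660.0        # J/mol, heat of vaporization of water
P = 101325.0       # Pa, one atmosphere
dT_per_Pa = R * T ** 2 / (L * P)
delta_T_F = dT_per_Pa * (fraction * P) * 9 / 5         # boiling-point rise in Fahrenheit

print(f"pressure rise: {100 * fraction:.3f} % of one atmosphere")
print(f"boiling-point rise: {delta_T_F:.3f} degrees Fahrenheit")
```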
Star-birth pillar in the Carina Nebula
This is a pillar of gas and dust within which stars are forming. The stars are hidden behind the dust (top image), but one star is providing evidence of its existence through a jet that is visible to the right and left of the pillar. The star and its jets are seen more clearly in the near-infrared image, at bottom. The Carina Nebula is found in the constellation Carina, in the Southern Hemisphere. The pillar is three light-years in length; the total length of the jet is about 10 light-years. The jets extend farther than can be seen in the infrared view. Their length is suggested by wispy clouds in the visible light image. "Fast Facts: Pillar in the Carina Nebula" is a table that lists the name, location, size, and distance of the nebula from Earth. Images of the pillar in both visible and near-infrared light are included. Use this resource as:
- A source of information. Read the table to find out about this object.
- A mapping activity. Locate the nebula's associated constellation on a star map.
- A large-number recognition activity. Have students look at several Fast Fact tables, including this one. Ask them to place the objects described in the Fast Fact tables in order, starting with the object closest to Earth and ending with the one farthest away.
- An inquiry tool. Have students write down questions they would like answered about the image and the information in the Fast Facts table.
- An engagement tool. Involve students in a discussion.
By J. Michael Checkett Have you ever heard that you can tell where a mallard is from by the color of its feet? As the story goes, the legs and feet of northern mallards are redder than their southern cohorts because low temperatures in higher latitudes cause more blood to flow to the birds' extremities. These mallards are also thought to be larger and hardier than mallards raised in southern parts of the species' range. Old-timers called these big, late-migrating mallards "redlegs." In reality, the brightly colored feet and bills of mallards and other ducks are caused by changes in hormone levels during late fall and winter while the birds are pairing. The feet of both male and female mallards turn bright orange—almost red—in December and January as they go through courtship and pairing. Heavier adult mallards typically develop breeding plumage and display brightly colored feet earlier than younger, lighter birds, giving rise to the mistaken belief that "redlegs" are a different race or subpopulation of larger mallards. In summer, hormone levels in ducks decrease, and their feet and bills become drab in color again, which helps camouflage the birds while nesting and molting. Of course, the legs and feet of waterfowl play a vital role in many other important activities, including locomotion (walking, swimming, and flying) and thermoregulation (maintaining body temperature). Features such as webbing of the feet arose over time as the birds adapted to make the most of their wetland environments. For example, researchers recently discovered that while swimming, waterfowl push both backward and downward with each stroke of their webbed feet. This provides a combination of lift and thrust, propelling the birds through the water with remarkable speed and efficiency. The feet of water birds are all structurally similar but vary among species. The most common difference is in the amount of webbing between the birds' toes. Cormorants and boobies have totipalmate feet, where all four of the birds' toes are connected by webs. Ducks and geese have palmate feet, where only the three front toes are webbed and the hind toe (called the hallux) is small and elevated. Coots have lobate feet, where the toes have a series of webbed lobes that open when the foot is pushed backwards—much like the base of a push pole used by duck hunters to traverse the marsh. Lastly, some waterfowl such as the Australian magpie goose and the Hawaiian goose (or nene) have half-webbed semipalmate feet, an adaptation that is useful for occasional swimming and walking on soft surfaces. The legs and feet of waterfowl also play an important role in maintaining body temperature. Ever wonder how a mallard can stand comfortably on ice? A unique heat-exchange system in the birds' legs known as counter-current circulation makes this possible. The large, flat feet of waterfowl are natural radiators, so to minimize heat loss, the arteries and veins in the birds' legs work in tandem to retain heat. Arteries supplying blood to the feet pass alongside the veins removing blood. The warm arterial blood flowing to the feet is cooled by venous blood flowing back to the body where it is warmed again. Consequently, very little of a duck’s body heat is lost through its extremities. Thus, while the core body temperature of a duck standing on ice is near 100 degrees Fahrenheit, the temperature of the bird’s feet may be just above freezing. 
To further conserve heat in cold weather, waterfowl reduce the volume of blood flowing to their feet by constricting blood vessels in their legs. Experiments have shown that waterfowl gradually reduce blood flow to their feet as the air temperature drops to 32 degrees Fahrenheit (the freezing point). When temperatures fall below freezing, however, waterfowl again increase blood flow to their feet to prevent tissue damage. The birds also protect their feet by drawing them into their flank feathers and close to their body. To further minimize exposure in bitter cold weather, waterfowl often stand on one leg at a time, tucking the other leg into their body feathers to protect it from the elements. In a similar but reverse manner, waterfowl can release excess body heat through their feet, primarily by standing or swimming in water that is cooler than the air. This capability helps waterfowl avoid heat stress on long, hot summer days. Where legs and feet are positioned on the bodies of waterfowl also influences how the birds interact with their environment. In dabbling ducks and geese, the legs are located near the middle of the body, providing the birds with good balance for standing and walking. This offers many advantages, including the ability to feed on dry land and in very shallow water, nest in upland habitats, and spring almost vertically into flight to escape predators. The feet of diving ducks are located near the back of their body. This makes walking difficult, but is beneficial for diving and swimming. These adaptations allow diving ducks to frequent large bodies of water and feed by diving, often at considerable depths. Their excellent diving and swimming abilities also help them escape predators. The trade-off is that diving ducks can’t spring vertically into flight like dabbling ducks and must instead make a running start across the water to achieve flight speed. A final activity where the feet of waterfowl play an important role is flight. All waterfowl use their feet as rudders while flying. And as all waterfowl hunters have seen, ducks and geese lower their feet and spread the webbing between their toes right before they land. This creates a little extra drag that helps the birds slow down. Conversely, when waterfowl want to achieve maximum flight speed and efficiency, they pull their feet into their flank feathers just like retractable landing gear on an airplane. In most circumstances, webbed feet have been wonderful adaptations that assist waterfowl in exploiting the wetland habitats where they live. Millions of years of adaptation have helped ducks, geese, and other water birds truly put their best foot forward.
Eruptive History of East Maui, in a nutshell
The early history of volcanism on East Maui is buried beneath innumerable lava flows thousands of meters thick. Hawaiian volcanoes, however, follow an overall pattern of eruptive growth and decline. The accompanying diagrams track the growth of East Maui through time. Stage 1 is sometimes referred to in more detail as the pre-shield alkalic stage. The only example we have of such volcanism is at Lo`ihi, a newly growing submarine volcano that lies southeast of the Island of Hawai`i. It is unknown whether East Maui or other volcanoes of the chain must go through a pre-shield alkalic stage. The lava flows of stage 1, if present, were subsequently buried by products of succeeding stages.
Stage 2 is the shield-building stage. Over 95 percent of a Hawaiian volcano's volume is emplaced during shield building, during a period that may span about 600,000 years. The Earth's crust, unaccustomed to the load of the volcano, subsides greatly during this stage, as much as 3 mm per year using current subsidence rates from the Island of Hawai`i as a guide. Early eruptions are entirely underwater, but the rate of upbuilding exceeds the rate of subsidence. The volcano grows to reach the ocean surface and becomes an island about midway through its shield-building years, after about 300,000 years. At East Maui volcano, we see the final lava flows of the shield-building stage in exposures along the north shore of the island from Honomanu Stream eastward to Nahiku. For convenience of discussion, geologists call these flows the Honomanu Basalt, naming the sequence for a site where the lava flows are exceptionally well exposed. To imagine what the shield looked like, we must consider the shape of Mauna Loa and Kilauea on the Island of Hawai`i. That's because East Maui's Honomanu shield is obscured by younger rocks.
The third volcanic stage is the capping or post-shield alkalic stage. East Maui entered its capping stage about 900,000 years ago. As the cross-sectional figure suggests, this stage produces lava flows that mantle much of the preexisting surface. But the rocks form only a small part of the total volume of the island, about one percent. Clearly the rate of volcanism diminishes greatly in the post-shield alkalic stage. At East Maui, strata in the main part of this stage have been grouped into the Kula Volcanics, named for the upcountry town. In most places, lava flows of the Kula Volcanics extend from the coast to the summit area of Haleakala, where they are well exposed in the walls of Haleakala Crater. The ages obtained from Kula volcanic rocks indicate they span the period from 950,000 to 150,000 years ago. Other volcanoes elsewhere in the Hawaiian Islands that are currently in the post-shield alkalic stage are Hualalai and Mauna Kea on the Island of Hawai`i. Newly determined isotopic ages show that East Maui persists in the postshield stage as its eruptive vigor wanes. Strata in the youngest part of the postshield stage have been named the Hana Volcanics, after the town. Representative products include the young cinder cones and lava flows that blanket the floor of Haleakala Crater. Equally young lava flows and cinder cones continue southwest and east along the major rift zones of the volcano. The east rift zone extends into the ocean at the village of Hana, ending a short distance eastward. East Maui was once thought to have already entered the fourth volcanic stage, the rejuvenated or renewed volcanism stage.
Lengthy periods of erosion may precede or be interspersed with eruptions of the renewed volcanism stage. Recent eruptive products from Ko`olau volcano on the island of O`ahu are classic examples of rejuvenated-stage volcanism. Subsequent stages, entirely nonvolcanic and not portrayed here, encompass the changes that bring the volcanic islands back to low eroded atolls and finally, when fully drowned, to subsea plateaus known as seamounts. For a glimpse of how the volcanoes of big islands become submerged to form numerous smaller islands, examine a bathymetric map of the State of Hawai`i.
Photo courtesy of Vanishing Georgia Collection, Georgia Archives. Oral history can be defined as an attempt to create primary source material by conducting interviews with people who can relate their first-hand experiences about a topic of historical interest. Oral history shares with oral tradition the fact that stories are told through the spoken word before they are written down. These guidelines explain oral histories and how to collect them. Suggested Classroom Activities: These activities use GPS connections.
olfactory receptor, also called smell receptor, protein capable of binding odour molecules that plays a central role in the sense of smell (olfaction). These receptors are common to arthropods, terrestrial vertebrates, fish, and other animals. In terrestrial vertebrates, including humans, the receptors are located on olfactory receptor cells, which are present in very large numbers (millions) and are clustered within a small area in the back of the nasal cavity, forming an olfactory epithelium. Each receptor cell has a single external process that extends to the surface of the epithelium and gives rise to a number of long, slender extensions called cilia. The cilia are covered by the mucus of the nasal cavity, facilitating the detection of and response to odour molecules by olfactory receptors. In arthropods, olfactory receptors are located on feelerlike structures such as antennae. Within the cell membrane, olfactory receptor proteins are oriented in such a way that one end projects outside the cell and the other end projects inside the cell. This makes it possible for a chemical outside the cell, such as a molecule of an odorant, to communicate with and produce changes in the cellular machinery without entering the cell. The outer and inner ends of receptor proteins involved in smell are connected by a chain of amino acids. Because the chain loops seven times through the thickness of the cell membrane, it is said to have seven transmembrane domains. The sequence of amino acids forming these proteins is critically important. It is thought that stimulation occurs when a molecule with a particular shape fits into a corresponding “pocket” in the receptor molecule, rather as a key fits into a lock. A change in a single amino acid can change the form of the pocket, thus altering the chemicals that fit into the pocket. For example, one olfactory receptor protein in rats produces a greater response in the receptor cell when it interacts with an alcohol called octanol (eight carbon atoms) rather than with an alcohol known as heptanol (seven carbon atoms). Changing one amino acid from valine to isoleucine in the fifth transmembrane domain, which is thought to contribute to the shape of the pocket, alters the receptor protein in such a way that heptanol, instead of octanol, produces the greatest effect. In mice the equivalent receptor is normally in this form, producing a greater response to heptanol than to octanol. This illustrates the importance of amino acid molecules in determining the specificity of receptor cells. When a receptor protein binds with an appropriate chemical (known as a ligand), the protein undergoes a conformational change, which in turn leads to a sequence of chemical events within the cell involving molecules called second messengers. Second-messenger signaling makes it possible for a single odour molecule, binding with a single receptor protein, to effect changes in the degree of opening of a large number of ion channels. This produces a large enough change in the electrical potential across the cell membrane to lead to the production of action potentials that convey information to the animal’s brain. There are about 1,000 genes in the olfactory gene family, the largest known family of genes. (Although humans possess all 1,000 olfactory receptor genes, making up roughly 3 percent of the entire human genome, only about 350 of these genes encode working olfactory receptors.) 
Since each gene produces a different odour receptor protein, this contributes to the ability of animals to smell many different compounds. Animals not only can smell many compounds but can also distinguish between them. This requires that different compounds stimulate different receptor cells. Consistent with this, evidence indicates that only one olfactory gene is active in any one olfactory receptor cell. As a consequence, each receptor cell possesses only one type of receptor protein, though it has many thousands of the particular type on the membrane of the exposed cilia of the cell. Since each cell expresses only one type of receptor protein, there must be large numbers of cells expressing each type of receptor protein to increase the likelihood that a particular odour molecule will reach a cell with the appropriate receptor protein. Once the molecule reaches the matching receptor, the cell can respond.
Now let's begin with the topic of Normalization: it is a series of rules that a database must follow when storing data in order to perform efficiently. The process of implementing these rules is referred to as normalization, and the rules themselves are called normal forms. The main objective is to remove duplication and make the data less redundant.
1st Normal Form: A table is in first normal form if all the key attributes have been defined and it contains no repeating groups. In our example the table holds multiple values in the "subject studied" field, so we convert this into columns so that each column holds a single value. Now we have a new problem: each column holds repeated values of the subject studied, and if we want to add a new subject we cannot enter the value unless we change the table structure by adding a new column. This wastes storage, because we need to define a new column width and range of cells, and we also need to give administrative rights to the person making entries to the table. Even with the same data we see there are missing values, so a lot of storage is wasted (imagine we had a large chunk of data). So we keep only the non-redundant information, such as student_id, name and phone (we could also divide the name into first name and last name).
2nd Normal Form: A table is in second normal form (2NF) if and only if it is in 1NF and every non-key attribute is fully functionally dependent on the whole of the primary key (i.e. there are no partial dependencies). In the example subject table we see that the subject is repeated many times and the subject name does not depend fully on any particular id. The tables mentioned below are therefore created.
3rd Normal Form: A table is in third normal form (3NF) if and only if it is in 2NF and every non-key attribute is non-transitively (i.e. directly) dependent on the primary key (i.e. there are no transitive dependencies).
- Anomalies can occur when a relation contains one or more transitive dependencies.
- A relation is in 3NF when it is in 2NF and has no transitive dependencies.
- A relation is in 3NF when "all non-key attributes are dependent on the key, the whole key and nothing but the key".
We can easily infer that a person living in Delhi is in India, and the same goes for Gurgaon: city determines country, so city-country is a transitive dependency. Set theory is useful when reasoning about 3NF.
Boyce-Codd Normal Form: A table is in Boyce-Codd normal form (BCNF) if and only if it is in 3NF and every determinant is a candidate key.
General Knowledge: Edgar Codd worked at IBM in San Jose, California, in one of their offshoot offices that was primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the CODASYL approach, notably the lack of a "search" facility. In 1970, he wrote a number of papers that outlined a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks. He also introduced the topic of normalization.
Reference for us: http://www.youtube.com/watch?v=U-F_fRJ_YTQ
De-normalization: just as we have seen how important normalization is, sometimes we need to take a step back. The most common reason is historical data. Historical data often depends on values such as invoice and pricing details. Suppose we moved pricing into a separate table during normalization. What if the price changed last month, yet we need to calculate tax based on the old price that applied earlier in the same financial year? In such cases it can make sense to store (duplicate) the applicable price with the invoice itself rather than relying only on the normalized pricing table.
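Since the example tables the article refers to are not reproduced here, the sketch below uses Python with hypothetical student/subject data to illustrate the same decomposition: the repeating group of subjects is split out for 1NF/2NF, and the transitive city-to-country dependency is split out for 3NF. The table and column names are invented for illustration only.

```python
# Minimal sketch of the normalization steps described above, using plain
# Python lists of dictionaries as stand-in "tables". All names are hypothetical.

# Unnormalized: a repeating group of subjects, and city -> country stored together.
unnormalized = [
    {"student_id": 1, "name": "Asha",  "phone": "555-0101",
     "subjects": ["Maths", "Physics"], "city": "Delhi",   "country": "India"},
    {"student_id": 2, "name": "Rohan", "phone": "555-0102",
     "subjects": ["Maths"],            "city": "Gurgaon", "country": "India"},
]

# 1NF/2NF: remove the repeating group; subjects get their own table, and the
# student-subject relationship becomes a separate table keyed on both ids.
students = [{"student_id": r["student_id"], "name": r["name"],
             "phone": r["phone"], "city": r["city"]} for r in unnormalized]

subject_table = [{"subject_id": i + 1, "subject_name": s}
                 for i, s in enumerate(sorted({s for r in unnormalized
                                               for s in r["subjects"]}))]
subject_ids = {row["subject_name"]: row["subject_id"] for row in subject_table}

enrollments = [{"student_id": r["student_id"], "subject_id": subject_ids[s]}
               for r in unnormalized for s in r["subjects"]]

# 3NF: country depends on city, not on student_id, so the city -> country
# mapping moves to its own table and is dropped from the student rows.
cities = [{"city": c, "country": ctry}
          for c, ctry in sorted({(r["city"], r["country"]) for r in unnormalized})]

print(students)
print(subject_table)
print(enrollments)
print(cities)
```

Each resulting list plays the role of one normalized table; in a real database these would be separate tables joined on student_id, subject_id and city.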
Around 505 million years ago, a spiky worm with clawed legs crawled around the ocean floor in what is today the Rocky Mountains in Canada. From the beginning, it was an oddity. As Phys.org writes, "The spines along the creature's back were originally thought to be legs, its legs were thought to be tentacles along its back, and its head was mistaken for its tail." In honor of its far-out looks, the paleontologist who discovered it in the 1970s named it Hallucigenia, the New Scientist reports. A group of modern animals, however, still echoes those outlandish traits. Researchers just discovered that Hallucigenia does indeed have still-living relatives. That ancient worm's descendants are the velvet worms, a group of animals that live in tropical forests. Researchers magnified fossilized Hallucigenia specimens 1,000 times to reveal that their claws contained multiple layers, "stacked one inside the other like a conical onion," the authors describe in The Conversation. Those claws are reminiscent of the jaws of modern velvet worms, they realized. When they compared the anatomy and the known family tree of velvet worms to Hallucigenia, they realized that they seemed to have a modern-day match. "Hallucigenia was not an evolutionary dead end," they conclude on The Conversation. "Rather, it represents an early pit stop on the way to the velvet worm body plan, which arose gradually over time."
Primary progressive aphasia (PPA) is a form of cognitive impairment that involves a progressive loss of language function. Language is a uniquely human faculty that allows us to communicate with each other through the use of words. Our language functions include speaking, understanding what others are saying, repeating things we have heard, naming common objects, reading and writing. "Aphasia" is a general term used to refer to deficits in language functions. PPA is caused by degeneration in the parts of the brain that are responsible for speech and language. PPA begins very gradually and initially is experienced as difficulty thinking of common words while speaking or writing. PPA progressively worsens to the point where verbal communication by any means is very difficult. The ability to understand what others are saying or what is being read also declines. In the early stages, memory, reasoning and visual perception are not affected by the disease and so individuals with PPA are able to function normally in many routine daily living activities despite the aphasia. However, as the illness progresses, other mental abilities also decline. Adults of any age can develop PPA, but it is more common in people under the age of 65. People with PPA can experience many different types of language symptoms, and no two cases are exactly the same. In many instances, the person with PPA may be the first to note that something is wrong and the complaints may initially be attributed to stress or anxiety. People with PPA initially experience one or more of the following symptoms:
- Slowed or halting speech
- Decreased use of language
- Word-finding hesitations
- Sentences with abnormal word order in speech or e-mails
- Substitution of words (e.g., "table" instead of "chair")
- Using words that are mispronounced or incomprehensible (e.g., "track" for "truck")
- Talking around a word (e.g., "We went to the place where you can get bread" for the words "grocery store")
- Difficulty understanding or following conversation despite normal hearing
- Sudden lapse in understanding simple words
- Forgetting the names of familiar objects
- Inability to think of names of people, even though the person is recognized
- Problems writing (e.g., difficulty writing checks or notes)
- Problems reading (e.g., difficulty following written directions or reading signs)
- New impairments in spelling
- Problems in arithmetic and calculations (e.g., making change, leaving a tip)
People with PPA tend to have similar clusters of symptoms. Researchers who specialize in PPA currently recognize three subtypes: agrammatic, logopenic and semantic.
PPA-G (Agrammatic/Nonfluent Subtype): A problem with word-order and word-production. Speech is effortful and reduced in quantity. Sentences become gradually shorter and word-finding hesitations become more frequent, occasionally giving the impression of stammering or stuttering. Pronouns, conjunctions and articles are lost first. Word order may be abnormal, especially in writing or e-mails. Words may be mispronounced or used in the reverse sense (e.g., "he" for "she" or "yes" for "no"). Word understanding is preserved but sentence comprehension may suffer if the sentences are long and grammatically complex.
PPA-L (Logopenic Subtype): A problem with word-finding. In contrast to PPA-G, speech is fluent during casual small talk but breaks into mispronunciations and word-finding pauses when a more difficult or precise word needs to be used.
Some people with PPA-L are very good at going around the word they cannot find. They learn to use a less apt or simpler word as well as to insert fillers such as “the thing that you use for it,” “you know what I mean,” or “whatchamacallit”. Spelling errors are common. The naming of objects becomes impaired. Understanding long and complex sentences can become challenging but the comprehension of single words is preserved. PPA-S (Semantic Subtype): A problem with word-understanding The principal feature is a loss of word meaning, even of common words. When asked to bring an orange, for example, the person may appear puzzled and may ask what an “orange” means. Speech has very few nouns and is therefore somewhat empty of meaning. However, it sounds perfectly fluent because of the liberal use of fillers. The person may seem to have forgotten the names of familiar objects. PPA arises when nerve cells in language-related parts of the brain malfunction. The underlying diseases are called “degenerative” because they cause gradually progressive nerve cell death that cannot be attributed to other causes such as head trauma, infection, stroke or cancer. There are several types of neurodegeneration that can cause PPA. The two most commonly encountered types are frontotemporal lobar degeneration (FTLD) and Alzheimer’s disease (AD). Both FTLD and AD can lead to many different patterns of clinical impairments, depending on the region of the brain that bears the brunt of the nerve cell loss. When AD or FTLD attacks the language areas (usually on the left side of the brain), PPA results. PPA is caused by AD in approximately 30-40% of cases and by FTLD in approximately 60-70% of cases. In contrast, PPA is a very rare manifestation of AD. In the vast majority of patients with AD, the most prominent clinical symptom is a memory loss for recent events (amnesia) rather than an impairment of language (aphasia). PPA is therefore said to be an “atypical” consequence of AD. The logopenic type of PPA has a particularly high probability of being caused by AD. Specialized positron emission tomography (PET) scans and examination of the spinal fluid may help to resolve the distinction between the two underlying diseases. Whether or not PPA is caused by AD or FTLD can be determined definitively only at autopsy through examination of brain tissue with a microscope. This can be confusing because for reasons outlined in the previous paragraph, the word “Alzheimer’s” can be used in two different ways. The term Alzheimer’s dementia (or Dementia of the Alzheimer-Type) is used to designate a progressive loss of memory leading to a more generalized loss of all cognitive functions. The term Alzheimer’s disease (as opposed to Alzheimer’s dementia) is used in a different way to designate a precise pattern of microscopic abnormalities in the brain. Sometimes these abnormalities become concentrated in language areas (instead of memory areas) of the brain and become the cause of PPA. So, while PPA patients don’t have Alzheimer’s dementia, 30-40% may have an atypical form of Alzheimer’s disease. This dual use of the word “Alzheimer’s” is confusing, even for the specialist, but is a feature of medical nomenclature that is here to stay. In the vast majority of individuals, PPA is not genetic. However, in a small number of families, PPA can be caused by hereditary forms of FTLD. The most common gene implicated in these families is the progranulin gene (GRN). 
Other, less common genes implicated in FTLD include the microtubule associated protein tau (MAPT) and a newly discovered gene, chromosome 9 open reading frame 72 (C9ORF72). Even in families with genetic mutations, one family member may have PPA while others may have behavioral variant frontotemporal degeneration (bvFTD) or movement disorders, including corticobasal degeneration (CBD) or progressive supranuclear palsy (PSP). In the presence of a genetic mutation, up to 50% of all family members will have FTLD. Therefore, genetic testing is not usually recommended unless several family members have clinical patterns characteristic of PPA, bvFTD, CBD or PSP. Before proceeding with genetic testing, it’s necessary to meet with a genetic counselor to review the implications of the results. The immediate purpose of genetic testing is to determine whether the person has a mutation that is responsible for the disease. However, the results have profound implications for family members who are healthy, especially those of child-bearing age. Do family members want to know the presence of a genetic disease for which there is no treatment? Do they realize that a negative result does not rule out the presence of a mutation in another gene not covered by the testing? Genetic testing for clinical purposes is a serious step that should not be initiated lightly. Because PPA is progressive, decline in language ability continues. Additionally, some non-language abilities (memory, attention, judgment or changes in behavior and personality) can be affected. Disinhibited, inappropriate behaviors (also seen in behavioral variant frontotemporal degeneration) are more common with PPA-S while impairments in problem solving, multi-tasking movement and mobility (of the type seen in CBD and PSP) are more common in PPA-G. The rate of decline is variable from person to person and unfolds over many years. It is unclear why some people progress more rapidly than others. A thorough evaluation of PPA includes the following: - History: First, a careful history is taken to establish that a condition of dementia exists. This often requires that family members or friends be questioned about the patient’s behavior because sometimes the patient is unaware of the symptoms (as in the case of memory loss or personality changes) or may be unable to describe them due to aphasia. - Neurological Examination: A neurological examination is done to determine if there are signs of dementia on a simple screening of mental functions (the mental status examination) and also if there are signs of motor or sensory symptoms that indicate other types of neurological disorders might be causing the dementia. The neurologist will also order tests (e.g., blood tests, spinal tap, brain imaging studies) to further investigate the cause of the symptoms. - Neuropsychological Examination: A neuropsychological examination provides a more detailed evaluation of mental functioning. This is especially important in the very early stages of illness when a routine screening evaluation may not detect the problems the patient is experiencing. This requires several hours and consists of paper-and-pencil or computer-administered tests of mental abilities, including attention and concentration, language, learning and memory, visual perception, reasoning and mood. The results can indicate if there are abnormalities of thinking and behavior and also their degree–mild, moderate or severe. 
It is often difficult to demonstrate that individuals with PPA have intact memory since we usually test memory by telling a person some information and then asking them to repeat it later on. In an individual with PPA, it may be impossible to repeat back the information because of the aphasia. Therefore, it is important that testing is done properly to make sure that there is not a true loss of memory. - Speech and Language Evaluation: Since a decline in language abilities is the primary symptom of PPA, it is important to determine which components of language use are most affected, how severely affected they are, and what can be done to improve communication. A Speech-Language Pathologist evaluates different aspects of language in detail and can make recommendations for strategies to improve communication. Family members should be included in the treatment sessions to educate them about how to facilitate communication. - Psychosocial Evaluation: PPA affects not only the individual who is suffering from this disorder, but also all people who are close to the patient. The disorder has an impact on relationships, the ability to continue working, the ability to perform many routine duties, and the ability to communicate even the simplest of needs. Although there are many resources available for individuals with memory loss, there are relatively fewer appropriate resources for individuals with PPA, their relatives and friends. Evaluation with a social worker who is familiar with PPA can address these issues and provide suggestions for dealing with day-to-day frustrations and problems. - Brain Imaging Studies: The evaluation for dementia also includes a brain imaging study. This is done in the form of a computed axial tomography scan (CAT scan) or a magnetic resonance imaging scan (MRI scan). Both of these methods provide a picture of the brain so that any structural abnormalities, such as a stroke, tumor or hydrocephalus–all of which can give rise to dementia-like symptoms, can be detected. In the case of degenerative brain disease, the CAT scan and MRI scan may show “atrophy,” which suggests a “shrinkage” of the brain tissue. However, especially in early stages, they may not show anything. In fact, the report often comes back “normal.” But this only means that there is no evidence for a tumor or stroke. It cannot tell us anything about the microscopic degenerative changes that have occurred. - Psychiatric Evaluation: Sometimes there will also be a need for a psychiatric evaluation. This may be the case when it is not clear if the changes in behavior are due to depression or another psychiatric disturbance. Also, some individuals, especially those with PPA, may become saddened by their condition and may require treatment for depression. There are many thousands of people with PPA. Nonetheless, compared to the millions of patients with Alzheimer-type amnestic dementias, PPA is rare. Furthermore, it can start in a person’s 40s and 50s, an age range that physicians do not usually associate with neurodegenerative diseases. Therefore, some people with PPA often see multiple doctors and receive many different diagnoses before receiving the diagnosis of PPA. There are no pills yet for PPA. Because of the 30%-40% probability of Alzheimer’s disease (AD), some physicians will prescribe AD drugs such as Exelon (rivastigmine), Razadyne (galantamine), Aricept (donepezil) or Namenda (memantine). None have been shown to improve PPA. 
Medicine is also sometimes prescribed to manage behavioral symptoms such as depression, anxiety, or agitation, which may occur later in the course of the illness. There are, however, life-enriching interventions and speech therapies that can help improve a diagnosed person’s quality of life. The primary goal of treatment for language impairments in individuals with PPA is to improve the ability to communicate. Because the type of language problems experienced by patients with PPA may vary, the focus of treatment for improving communication ability will also vary. A complete speech and language evaluation provides the information needed to determine the type of treatment that is most appropriate. There are two basic approaches to speech therapy for PPA. One approach is to focus treatment directly on the language skills that are impaired (for example, skills to enhance word-retrieval abilities), and the other is to provide augmentative/alternative communication strategies or devices. We recommend that both treatment approaches be used in people with PPA. Regardless of which strategies are provided to people with PPA, it is important that the family is involved in treatment and that the use of the strategy in the natural environment is encouraged. Resources for PPA - IMPPACT, the International PPA Connection: www.ppaconnection.org. This website has been launched to foster international collaboration in PPA and also to serve as a compendium of patient-care resources related to PPA throughout the world. - The Association for Frontotemporal Degeneration: www.theaftd.org - The National Aphasia Association: www.aphasia.org - University of California, San Francisco Memory and Aging Center: http://www.memory.ucsf.edu/education/diseases/ppa © 2014 The Regents of the University of California
The HTML frameset tag divides the webpage into multiple frames. The frameset tag can divide the webpage into rows and columns, which are attributes of the frameset tag. As discussed earlier, every attribute has a value; in the case of the rows and columns attributes the value can be set in pixels or as a percentage, which tells the browser how much of the window is assigned to each frame in the frameset. A frameset contains at least two frames, and each frame shows a different webpage. It is important to note that future versions of HTML may not support frames at all, because using frames in web pages is not considered good practice.
The <THEAD>, <TBODY> and <TFOOT> tags are always used in conjunction. These tags define the proper structure of a table: just as a Microsoft Word document can be divided into a header, body and footer, the same can be done for an HTML table using the <THEAD>, <TBODY> and <TFOOT> tags. The main objective of dividing the table into three sections is to handle the scrolling of large tables in browsers. After applying these tags to a table, the header and footer parts remain fixed while the body scrolls, which makes it easier for the user to read and understand the table.
Every book you read has a name that gives you an idea of what the book is about; in the same way, captions are used on websites and blogs to define and explain a table's contents. The <CAPTION> tag is placed right after the <TABLE> tag, and it should always be closed with the matching </CAPTION> tag. The table caption can be placed at the top or bottom of the table with the help of the align attribute.
The table height attribute is used to set the height of the table. The height attribute takes a value in percentage or pixels; it is recommended to use a percentage value because pixel settings vary from system to system. The HTML code to change the height of a table using a percentage value looks like this: <table height="50%"> The HTML code to change the height of a table using a pixel value looks like this: <table height="50"> or <table height="50px"> Let's see the usage of the height attribute in this HTML code example.
The <th> tag in HTML is similar to the <td> tag; the only difference is that <td> is used to create a table cell while <th> is used to define a table header. The data between <th></th> tags is bold and centered by default. Below is the HTML code example to show the usage of the <th> tag in a table.
When you create a table in HTML, you will have observed that a table cell resizes to fit the data it contains. The height attribute takes a value in percentage or pixels to adjust the height of the cell. The cell height cannot be less than the height of the data within the cell; on the other hand, the height can be increased by increasing the value: the higher the number in percentage or pixels, the greater the height of the table cell. It is important to note that changing the height of one cell will also change the height of the adjacent cells. Let's look at the HTML code in which the height attribute is used to change the formatting of the table.
Chapter Ten takes a look at the epidemic of childhood aggression and its etiology. The authors start this chapter by pointing out the rising number of incidents of violence, the fear adults now have in confronting gangs of children or teenagers (something unheard of in the past), and the violence of teenagers against each other. They also point out that aggression is not limited to attacking each other, but also includes attacking oneself through self-deprecating remarks, self-hostility, self-harm and suicidal thoughts and impulses. The key to unlocking the reason behind these behaviors, the authors contend, is to understand the frustration of unmet attachment needs. "There are many triggers for frustration, but because what matters most to children- as to many adults- is attachment, the greatest source of frustration is attachments that do not work: loss of contact, thwarted connection, too much separation, feeling spurned, losing a loved one, a lack of belonging or of being understood." When peers replace parents, frustrations mount even higher for a variety of reasons discussed within the chapter. On page 133 the authors write that despite frustration, "it is not a given that frustration must lead to aggression" (which, by the way, I am so glad the authors put that in there because that was exactly what I was thinking!) They go on to say, "The healthy response to frustration is to attempt to change things. If that proves impossible, we can accept how things are and adapt creatively to a situation that cannot be changed. If such adaptation doesn't occur, the impulses to attack can still be kept in check by tempering thoughts and feelings – in other words, by mature self-regulation." A part of this chapter is subtitled "How Peer Orientation Foments Aggression" and cites three ways peer orientation contributes to aggression. Overall, peer orientation seems to dilute a child's natural apprehensiveness and caution. Emotional self-numbing is a goal of many peer-oriented children and, combined with the intake of alcohol, can lead to aggression. Chapter Eleven is entitled "The Making of Bullies and Victims" and begins with the thought that whilst bullying has always been around, it has recently reached epic proportions, in that a quarter of all US middle-school children (grades 6, 7 and 8 for my foreign readers) were either perpetrators or victims of bullying. The authors cite the lack of adult attachment for these children and note that bullying can be reproduced in animal studies where the generational hierarchy is destroyed. One of the studies the authors cite involves a group of monkeys that are separated from adults and raised by each other, with the result being self-destructive and aggressive behavior. The authors note that some children are "psychologically set" to become bullies before peer orientation sets in. They look at situations that may foster a child's longing and drive to be dominant over peers in the absence of attachment, including:
- The child was hurt or abused whilst in a dependent role.
- The parent has failed to give the child a secure sense that there is a "competent, benign, powerful" adult in charge.
- The parent has failed to attach to the child.
- The parent puts the child in charge and in the lead, "looking to them for cues how to parent."
- The parent does everything possible to make everything work for the child in order to avoid upsetting the child.
- The parent gives many choices and explanations "when what the child really needs is to be allowed to express his frustration at having some of his desires disappointed by reality, to be given latitude to rail against something that won't give."
- Parents are not present for children due to being preoccupied with stress.
- Parents are too passive, too needy or too uncertain to "assert their dominance" and the children move into the position of being dominant.
The authors also have an intriguing section in this chapter on "The Unmaking Of A Bully" in which they assert that "the bully's only hope is to attach to some adult who in turn is willing to assume the responsibility for nurturing the bully's emotional needs." I will stop there but encourage those of you reading along with me to leave a comment as to what you thought about these two chapters…
The ancient Indians knew writing at least as early as 2500 BC, but no manuscripts older than the 4th century are available. The manuscripts were written on birch bark and palm leaves, but in Central Asia, where Prakrit had gone from India, manuscripts were also written on sheep leather and wooden tablets. The Vedas and related books were put into writing quite late. The Rig Veda describes the period 1500-1000 BC, and the later Vedic literature gives glimpses of the history of about 1000-600 BC. Buddhist literature, the two epics, the Ramayana and the Mahabharata, and other books help us to know about the subsequent periods. We mainly rely on literary sources for the history of India just before the Mauryas. Later, literary sources began to be supplemented by other sources. The Puranas are regarded by some as having been written historically, though this view is disputed by other scholars. However, generally the first 'historical' writing by an Indian is attributed to Kalhana, who wrote the Rajatarangini in the twelfth century, giving a dynastic chronicle of the kings of Kashmir. Some important ancient works that serve as source materials include Asvaghosa's Buddhacharita (AD 100) in Pali, the Gaudavaho in Prakrit by Bappaira, which talks of King Yasovarman (AD 750), and the Harshacharita by Bana, which is an account of the life of King Harsha (AD 606-47). The Sangam literature gives an insight into the social, economic and political life of the people of deltaic Tamil Nadu in the early Christian centuries. Its information regarding trade and commerce of the time is attested to by foreign accounts and archaeological finds.
Purdue University researchers devised an implantable device they hope will one day predict the onset of epileptic seizures and, ideally, stop them with proper neuron stimulation. They have developed a tiny transmitter three times the width of a human hair to be implanted below the scalp to detect the signs of an epileptic seizure before it occurs. The system will record neural signals relayed by electrodes at various points in the brain. "When epileptics have a seizure, a particular part of the brain starts firing in a way that is abnormal," said Pedro Irazoqui, an assistant professor of biomedical engineering. "Being able to record signals from several parts of the brain at the same time enables you to predict when a seizure is about to start, and then you can take steps to prevent it." Data from the implanted transmitter will be picked up by an external receiver, also being developed by the Purdue researchers. The transmitter consumes 8.8 milliwatts, about one-third as much power as other implantable transmitters while transmitting 10 times more data, and can collect data related to epileptic seizures from 1,000 channels, or locations in the brain. The electrodes that pick up data will be inserted directly into the brain through holes in the skull and then connected to the transmitter by wires. More from MTB Europe…
Water, Seawater and Ocean Circulation and Dynamics
Director, Odyssey Expeditions
The oceans, the big blue, source of life, the hallmark of Earth. We hold the oceans within us, both physically and mentally. Vast, blue, tranquil, and treacherous, the oceans are the signature of our planet. The only planet in the solar system blessed with a liquid medium for life to evolve in. The motions of the atmosphere, traced out by clouds, and the size of the oceans dominate the view of Earth from space. So vast are the oceans, in fact, that they take up almost 71% of the entire surface of the globe (139 million square miles). The oceans have an average depth of 12,230 feet (3,730 m) and reach their deepest point in the Mariana Trench of the northwestern Pacific Ocean, at 36,204 feet (11,038 m) below sea level. The ocean basins hold a vast quantity of water, over 285 million cubic miles (1,185 million cu. km). This vast quantity of water arose from the Earth's interior as it cooled. The oceans are the largest repository of organisms on the planet, with representatives from all phyla. Life is extremely abundant in the sea, from the obvious large whales, fish, corals, shrimp, krill and seaweed, to the microscopic bacteria floating freely in the seas. Bacteria are so abundant that just one spoonful of ocean water contains from 100 to 1,000,000 bacterial cells per cubic centimeter! All the organisms in the ocean are subject to the properties of the seawater surrounding them. Water surrounds all marine organisms, composes the greater bulk of their bodies, and is the medium by which various chemical reactions take place, both inside and outside of their bodies. In this chapter, we present the basic chemistry of water, a necessary step in understanding the interesting roles water plays as an extremely suitable medium for life.
Water itself is very simple. Each molecule of water is composed of two hydrogen atoms and one oxygen atom. The hydrogen atoms bond to the oxygen atom asymmetrically by sharing electrons (each hydrogen atom shares its only electron with the oxygen atom; the oxygen atom receives the two electrons needed to complete its outer shell, making it a stable molecule).
Important interactions occur because of the electron sharing. The oxygen atom tends to draw the electrons furnished by the hydrogen atoms closer to its nucleus, creating an electrical separation and a polar molecule. The polar nature results in the hydrogen end (which has a positive charge) attracting the oxygen end (which has a negative charge) of other adjacent water molecules. This forms hydrogen bonds between adjacent water molecules. These bonds are weak compared to the electron-sharing bonds (6% as strong) and are easily broken and reformed.
The hydrogen bonding and polarity of water molecules are responsible for many of the unique characteristics and physical properties of water. Seawater is pure water plus dissolved solids and gases. The dissolved solids come from 'weathering' processes of the continental land masses: rocks are dissolved by rain water and the products flow out to sea with the rivers. The gases come from the atmosphere. As water is a universal solvent, many different compounds are dissolved in it. A 1 kg sample of saltwater contains 35 g of dissolved compounds, including inorganic salts, organic compounds from living organisms, and dissolved gases.
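As a rough sketch of how the 35 g/kg figure above relates to the salinity values discussed next, the snippet below converts dissolved mass per kilogram into parts per thousand and also shows the commonly quoted chlorinity approximation S ≈ 1.80655 × Cl; the example numbers are illustrative and are not measurements from this article.

```python
# Rough illustration, not an oceanographic reference implementation:
# salinity expressed in parts per thousand (ppt), plus the commonly quoted
# chlorinity approximation S ≈ 1.80655 * Cl. Example values are made up.

def salinity_ppt(dissolved_grams: float, sample_kg: float = 1.0) -> float:
    """Dissolved solids per kilogram of seawater, expressed in parts per thousand."""
    return dissolved_grams / (sample_kg * 1000.0) * 1000.0

def salinity_from_chlorinity(chlorinity_ppt: float) -> float:
    """Estimate salinity from chlorinity using the S ≈ 1.80655 * Cl approximation."""
    return 1.80655 * chlorinity_ppt

print(salinity_ppt(35.0))              # 35.0 ppt, matching the 35 g per kg figure above
print(salinity_from_chlorinity(19.37)) # roughly 35 ppt for a typical open-ocean chlorinity
```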
The solid substances are known as 'salts' and their total amount in the water is referred to by a term known as salinity (expressed as parts per thousand). Oceanic salinities generally range from 34 to 37 parts per thousand. Variations from place to place are due to factors such as rainfall, evaporation, biological activity and radioactive decay. Salinities are higher in the tropics due to high evaporation rates. Fresh supplies of salts are now being added to the oceans from the rivers at roughly the same rate that they are being removed by various physical, chemical and biological processes.
Inorganic salts compose most of the solid matter of the 'salts' (99.28%). These percentages remain constant regardless of the water's salinity; therefore, salinity can be measured by measuring just the concentration of one of the salts, such as chlorine. The remaining 0.72% of the 'salts' are inorganic salts crucial to life. These include phosphates and nitrates (both nutrients required for photosynthesis) and silicon dioxide (required by diatoms to construct their glass skeletons). In contrast to the other salts, the nitrates and phosphates vary in concentration due to biological activity. In surface waters, where plants are actively in the process of photosynthesis, the nitrates and phosphates can be in short supply, limiting the amount of biological activity that can take place.
Temperature is a very important physical parameter in the marine environment. It limits the distribution and ranges of ocean life by affecting the density, salinity, and concentration of dissolved gases in the oceans, as well as influencing the metabolic rates and reproductive cycles of marine organisms. The seasonal range of temperature in the ocean is affected by latitude, depth, and proximity to the shore. Marine temperatures change gradually because of the heat capacity of water. In the abyssal zone, water temperatures are remarkably stable and remain virtually constant throughout the year. Similarly, in equatorial and polar marine regions, ocean temperatures change very little with season. Because the surface of the ocean is heated by sunlight, the depths are cooler. There is a minimum of vertical mixing, because the warm water cannot displace the dense, colder deep water.
December 1995 Temperature Anomalies
The waters of the ocean are in constant motion. Their movement ranges from strong currents such as the Gulf Stream down to small swirls or eddies. What causes all of this motion? The short answer is: energy from the Sun, and the rotation of the Earth. The Sun provides the energy that drives oceanic circulation, and the rotation of the Earth shapes the resulting circulation patterns.
Consider the following: a missile fired due northwards from a launch pad at the Equator has both its northerly firing velocity and an eastward velocity relative to the surface of the Earth at the Equator, so its actual travel follows a resultant vector that is a combination of the two. However, because the eastward velocity of the Earth's surface is greatest at the Equator and decreases towards the poles, as the missile travels north the eastward velocity of the Earth below it becomes less and less, and the missile is carried to the east of its target.
The path taken by a missile therefore shows a deflection attributed to the Coriolis force. The Coriolis force increases with increasing latitude. The blue paths shown indicate the courses taken by a missile or any other body moving over the surface of the Earth without being strongly bound by friction.
Because a missile is moving so fast, the amount that the Earth has 'turned beneath' it during its short flight is small. Winds and ocean currents, on the other hand, are slow moving, and so are significantly affected by the Coriolis force. The Coriolis force therefore has a significant effect on deflecting ocean currents.
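To make the latitude dependence of the Coriolis effect concrete, here is a small illustrative sketch using the standard textbook expression f = 2Ω·sin(latitude) for the Coriolis parameter (zero at the Equator, largest at the poles); the constants are generic textbook values, not figures taken from this article.

```python
import math

# Coriolis parameter f = 2 * Omega * sin(latitude): a standard textbook
# expression, included only to illustrate the latitude dependence described
# in the text. OMEGA is the Earth's rotation rate in radians per second.
OMEGA = 7.2921e-5

def coriolis_parameter(latitude_deg: float) -> float:
    """Return the Coriolis parameter f (per second) at the given latitude in degrees."""
    return 2.0 * OMEGA * math.sin(math.radians(latitude_deg))

for lat in (0, 15, 30, 45, 60, 90):
    print(f"latitude {lat:2d} deg: f = {coriolis_parameter(lat):.2e} per second")
```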
Asperger's syndrome is a developmental condition seen at Great Ormond Street Hospital (GOSH) that affects the way a person thinks, communicates and relates to other people. Asperger's syndrome is an autism spectrum disorder. The term is applied to people with autistic disorders who have good language skills and normal-range intelligence. Autistic spectrum disorders affect approximately one in 100 families in the UK. Doctors don't yet know exactly what causes Asperger's syndrome. There is no doubt that the risk of developing the condition is strongly influenced by genetic inheritance. Autistic disorders often run in families, but in some cases the genetic problem is a chance occurrence. Many young people with Asperger's syndrome may find it easiest to talk to others about subjects they themselves are interested in. Talking to people you don't know well and making social 'chat' is very much harder. Most people with Asperger's syndrome would like to be able to make new friends, but a few prefer to do things on their own. Young people with Asperger's syndrome may also have times when they get into difficulties in social situations, because they find it hard to figure out what to say, or they do things or say things that others find irritating. They might get told off at school, although they never intended or believed that they were doing something wrong.
Specific and strong interests
People with Asperger's syndrome might be very interested in special things such as technology or films, or certain sorts of music, or Manga. These interests and habits may take up a lot of time, and may even become so all-absorbing that they stop the young person from trying new or different things. Others feel more comfortable when things are done in a certain way. They may always go to school the same way or go to the same shop for particular items. Or they may have to keep a detailed diary of their activities, and become quite insistent that events happen at a specific time.
Areas of strength
Young people with Asperger's syndrome can also have a lot of strengths. They may be very precise and logical. They often pay attention to detail and remember details that most people would forget. Some have expert knowledge on their special interest. Many young people also have a very good memory for particular things, such as routes, song lyrics or quotes from movies. It is not uncommon to have a good sense of humour and this can be a good way of entertaining friends. Some young people with Asperger's syndrome can find it very hard to ask for help and they prefer to find their own way around their difficulties. It is also common for people with Asperger's syndrome to feel quickly overwhelmed by their emotions and feel unable to cope with their feelings in certain situations. This may mean they can experience more frustration, anxiety and sadness than other teenagers. Anxiety, being easily worried and lacking in confidence are not part of Asperger's syndrome, but it is very common for people on the autistic spectrum to experience these feelings. During adolescence a loss of self-esteem is sometimes the reason why other people recognise that you might have Asperger's syndrome and that you might benefit from help and support. There is no medical test for Asperger's syndrome. The condition can be diagnosed by doctors talking to your parents about your development and talking to you about the sorts of things that you find easiest, as well as those things you find most difficult.
The central problems in Asperger's syndrome include not really understanding other people's social behaviour, feeling 'different', as if you don't really 'get' why other people behave as they do, and behaving in ways that make others think you are weird without meaning to do so. Asperger's syndrome can't be 'treated' but there are things you and the people around you can do to help you make the most of your skills and reach your full potential. Books and websites can give lots of useful information about Asperger's syndrome and strategies that other people have found helpful. The National Autistic Society (NAS) is a good place to start. There are local support groups for people with autism spectrum disorder and their families in most areas, either run by the NAS or other organisations. Often, just knowing (through reading and speaking to others) that other people have experienced similar difficulties is a big relief. People with Asperger's syndrome often find it helpful to think carefully about their environment and their daily routine to make their lives easier. For example:
- Many people find having a visual daily timetable helps them to anticipate what will happen through the day.
- Some people like to carry some string or blu-tac in their pockets that they can fiddle with because it helps them to concentrate or relax.
- People may find it useful to have a communication card that they can give to a teacher or parent when they need a few minutes of time out if feeling overwhelmed by their feelings.
- A psychologist may be able to work with you to develop strategies on how to overcome some difficulties, for example how to manage anger, worries or upsetting thoughts.
- Parents, teachers and doctors can work together to ensure that you have the support that you need. For example, helping you with problems such as bullying or difficulties with making friends.
- Teachers can work with you to support you within the classroom with areas of work you might be finding difficult.
Seizures are associated with epilepsy, a chronic neurological disorder characterized by the occurrence of unprovoked seizures. More than 50 million people have epilepsy worldwide, and 85 percent of those cases occur in developing countries. It is estimated that, globally, there are 2.4 million new cases each year. Epilepsy can start at any age and be idiopathic — arising from an uncertain cause — or symptomatic — having a known or presumed cause. Most idiopathic epilepsies probably are due to the inheritance of one or more mutant genes, often a mutant ion channel gene. Symptomatic epilepsies result from a wide variety of brain diseases or injuries, including birth trauma, head injury, neurodegenerative disease, brain infection, brain tumor, or stroke. Epilepsies can be either generalized or partial. Generalized seizures typically result in loss of consciousness and can cause a range of behavioral changes, including convulsions or sudden changes in muscle tone. They occur when there is simultaneous excessive electrical activity over a wide area of the brain, often involving the thalamus and cerebral cortex. Partial epilepsies, however, are characterized by seizures in which the individual maintains consciousness or has altered awareness and behavioral changes. Partial seizures can produce localized visual, auditory, and skin sensory disturbances; repetitive uncontrolled movements; or confused, automatic behaviors. Such seizures arise from excessive electrical activity in one area of the brain, such as a restricted cortical or hippocampal area. Many antiepileptic drugs are available. Their principal targets are either ion channels or neurotransmitter receptors. Generalized epilepsies often are readily controlled by antiepileptic drugs, with up to 80 percent of patients seizure-free with treatment. Unfortunately, partial epilepsies are generally more difficult to treat. Often, they can be controlled with a single antiepileptic that prevents seizures or lessens their frequency, but sometimes a combination of these drugs is necessary. Identification of the mutated genes underlying epilepsy may provide new targets for the next generation of antiseizure drugs. Surgery is an excellent option for patients with specific types of partial seizures who do not respond to antiepileptic drugs. Electrical recordings of brain activity from patients allow for precise localization of the brain area from which the partial seizures originate. Once this area has been found, neurosurgeons can then remove it. After surgery, most properly selected patients experience improvement or complete remission of seizures for at least several years. A new form of epilepsy treatment, electrical stimulation therapy, was introduced as another option for hard-to-control partial seizures. An implanted device delivers small bursts of electrical energy to the brain via the vagus nerve on the side of the neck. While not curative, vagal nerve stimulation has been shown to reduce the frequency of partial seizures in many patients.
For the second time in just a few years, NASA's Hubble Space Telescope has detected what scientists believe to be a giant water plume erupting from the frozen surface of Jupiter's moon Europa. This second plume is smaller than the one detected by Hubble in March 2014. However, it does lead scientists to believe their speculation about the data collected during the first eruption was correct.
The Technique Used in Detecting This Second Probable Water Plume on Europa
Scientists used what is called the "transit technique" to detect a giant water plume. Several teams have used Hubble to do this. As it is, the two plumes detected on Europa were found by two different teams. Basically, Hubble is used to detect a decrease in radiation emitted from Jupiter as Europa moves between the gas giant and Earth. When a plume erupts, it further distorts that radiation signature. For years, astronomers have speculated about the potential for a liquid water ocean underneath Europa's icy surface. This liquid zone would be kept from freezing by the internal geothermal heat created by Europa's core, and the torsion caused by its parent, the gas giant Jupiter. These plumes are evidence that, from time to time, this liquid layer erupts through the frozen crust in giant plumes that shoot into space. Hubble has now twice detected evidence that scientists believe points to these plumes existing, and therefore to the existence of the liquid layer of water. "It's not completely unequivocal, but in my mind, the pendulum has swung from caution to optimism." This is according to the lead scientist William Sparks of the Space Telescope Science Institute in Baltimore. He also stated that, as the event repeated itself, it couldn't have happened just by chance, at least not in "a formal statistical sense". What really makes this discovery important is the fact that liquid water is considered one of the key ingredients for life in the universe, as is the very geothermal energy that drives the plume eruptions.
Audiobooks have been used in classrooms for decades because listening builds critical listening, comprehension and fluency skills. Oral language precedes written language developmentally, making listening an important component of language acquisition. In fact, there are reading methods, such as The Daily 5, that include listening as a component of reading instruction. As well, the new Common Core Standards spell out specific Listening and Speaking skills required of all primary grade students. Audiobooks are a proven resource for teaching reading at all reading levels. Among the many benefits, listening builds critical vocabulary, comprehension, fluency and listening skills; in fact, adding a listening component to reading instruction has been shown to improve student achievement. Instructional methods such as The Daily 5 do just that, and the Common Core Standards spell out specific listening requirements by grade level. And, oh yeah, kids love listening to books and stories. Among the many benefits, audiobooks:
- reinforce literacy skills.
- supplement reading instruction to develop a positive attitude towards literature.
- develop an understanding of content area material when decoding or other literacy skills are delayed.
- model the appropriate use of oral vocabulary, fluent reading, and use of phonics.
- bring literature from the classroom to home and back on a portable device.
Tales2Go is an award-winning kids' mobile audiobook service that streams thousands of name-brand titles from leading publishers and storytellers to mobile devices and desktops in the classroom and beyond.
Inside your body there is an amazing protection mechanism called the immune system. It is designed to defend you against millions of bacteria, microbes, viruses, toxins and parasites that would love to invade your body. The immune system is made up of special cells and chemicals that fight infection. The white blood cells that make up the immune system are made in the bone marrow. These cells move through blood and tissue. Every time a microbe (germ) is overcome, the immune system remembers that microbe. If the body comes in contact with that microbe again, it will be defeated quickly. The immune system is one of the most remarkable and complex systems within the human body. When you realise that the immune system has the ability to produce a million specific 'straitjackets' (called antibodies) within a minute and to recognise and disarm a billion different invaders (called antigens), the strategy of boosting immune power makes a lot of sense. The ability to react rapidly to a new invader is the difference between a minor 24-hour cold or stomach bug and a week in bed with flu or food poisoning. It may also be the difference between a non-malignant lump and breast cancer, or symptom-free HIV infection and full-blown AIDS. The main 'gates' for the body are the digestive tract, which lets in food, and the lungs, which let in air. Within the digestive tract is the 'gut-associated immune system', which is programmed to allow completely digested food particles, such as amino acids, fatty acids and simple sugars, to pass unhindered through the gut wall into the body. Incompletely digested food can result in immune reactions, especially when large food molecules pass into the bloodstream. This is often the basis of a food allergy. The nasal passages help to prevent unwanted agents from entering the lungs. Having healthy and strong 'inside skin' in the lungs and digestive tract is the first defence against invaders. At any time there are a small number of immune cells roaming the body. Many of these cells have a short life. T-cells, for example, live for about four days. When an invader is identified, new troops are produced in the bone marrow and thymus, and trained and posted in forts such as the lymph nodes, tonsils, appendix, spleen and Peyer's patches. Lymphatic vessels drain into these forts, bringing in invaders for their destruction. That is why lymph nodes, for example in the neck, armpits and groin, become inflamed during an infection. This means they're doing their job. Since the lymphatic system doesn't have a pump (lymphatic fluid is moved along by muscle movement), physical exercise is important for lymphatic drainage. Since no nutrients work in isolation, it's good to supplement with a good high-strength multivitamin and mineral. The combination of nutrients at even modest levels can have a strong effect on boosting immunity. How do you naturally boost your immune system? Our age management check-up and treatment will ideally help you to adapt your diet and lifestyle to your personal needs and thus improve and balance your immune system.
By 2030 this Goal aims to achieve sustainable management and efficient use of natural resources; halve per capita global food waste and reduce food losses along production and supply chains; substantially reduce overall waste generation; and minimize the impact of waste chemicals on the environment. There is also a range of targets to promote sustainable practices and reduce wasteful consumption. Read more on the UN SDGs website…
You could use this film clip to start off a lesson exploring issues around consumption. When people in the West throw their clothes away, their cast-offs might go on a journey east, across the oceans, to Panipat in northern India, where they are recycled back into yarn. In this film we meet some of the garment recyclers, who are curious about the people who threw away their clothes ‘practically unworn’. This clip is a one-minute trailer; you can view the full 14-minute documentary, ‘Unravel’, on the Aeon website.
Here are some teaching ideas for exploring this Goal through different subjects:
English / Mother tongue: Students could write an informative piece, perhaps for a school newspaper, explaining ways that people can lead more sustainable lifestyles.
Maths: Students could collect data on food waste in their home or school canteen and then learn to present and analyse the data effectively. They could then explore ways to reduce food waste. Alternatively, students could compare food waste statistics for their country with others around the world; why do they think some countries waste more food than others?
Geography: You could research the supply chain of a resource or product – where does it come from, who are all the people involved in bringing the finished product to shops, who earns what money? Classic examples include bananas, chocolate, a pair of jeans and now palm oil.
Science: Students could investigate the impact of pesticides and other chemicals on the environment. For example, neonicotinoids and bees. Why do we need them? How do they work? What impact do they have? What are the alternatives?
Understanding Sustainable Living – 60 mins, ages 11 to 14. Suitable for Social Studies, Geography, Science.
An Energy Project for the Global Goals – 60 mins, ages 8 to 11, kicking off a six-month project. Suitable for Geography, Maths.
Sort, order and classify objects by attribute and identify objects that do not belong in a particular group. 0006.3.4 Links verified on 12/16/2009
- Astronomy Shape Match - click and drag objects to match shape outlines
- Buzzing with Shapes (2 player game) - Be the first to fill a row (like tic-tac-toe). Players must select the number of sides in a shape.
- Kinderweb - Interactive educational games geared for the beginning of the school year or preschoolers. Students practice their colors and shapes. This site is completely audio so children can work at their own pace independently.
- I Spy Shapes - locate triangles, circles and squares in a set of pictures
- Measuring Up with Clifford - follow Clifford's instructions to click on the smallest or largest
- Oddball - find the shape that does not match (from FunBrain)
- Paint the Shapes - A listening and following directions game identifying shapes and colors.
- Put it on the Shelf - Replace the question mark with the shape that matches the outline.
- Shape Books - Great site that has shape book patterns for making little books and posters. You can find shapes for most of the major themes of K such as nature, animals, transportation, holidays, insects and many others.
- Shape Match - Drag and drop the shape on the correct match.
- Shapes - for pre-school, identify shapes and colors
- Shapeville - Click on the shape you find in the pictures
- Story of Shapes - from Pre-School Library
- Virtual Goose - You must match the egg the goose is sitting on to one of the other four eggs. Caution: the eggs will have been turned.
How to Raise and Attract Mason Bees The mason bee namesake comes from their habit of sealing their nests with mud. They typically nest in hollow reeds or holes in wood made by wood-boring insects. The blue orchard mason bee and the hornfaced mason bee are the most common here in Central Pennsylvania, although there are many. Mason bees are excellent pollinators in the early spring when many fruit trees are blooming. They are more efficient pollinators than honeybees and they only travel up to 300 feet from their nests, so if you grow mason bees close to your food forest, they will be there. Mason bees are a solitary bee. They do not produce wax or honey. Every female is fertile and makes her own nest. The bees emerge from their cocoons in the early spring. Males emerge first. They wait for the females. When the females emerge, they mate. The males die, and the females begin work on their nests. Females visit flowers to gather pollen and nectar. They create a mass of pollen and nectar. Once this is complete, the mason bee lays an egg on top of the pollen/ nectar mass. Then she creates a partition of mud, which also forms back of the next cell. This process continues until she’s filled the hole. Female eggs are laid in the back of the nest, and male eggs towards the front. Once a bee has finished with a nest, she plugs the entrance to the tube, and then may seek out another nest spot. By summer, the larva has consumed all of its stores and then spins a cocoon around itself, entering the pupal stage. The adult matures in winter, hibernating inside its cocoon. Benefits of Mason Bees 1. Excellent early spring pollinators 2. Will only sting if stepped on or squeezed. 3. Adds diversity of beneficial insects. 4. Inexpensive to attract or raise. How to Encourage Mason Bees You can purchase mason bee cocoons and bring them to your site, or you can simply encourage them to come by providing habitat. I think you must provide suitable habitat either way, because if you purchase mason bees, but do not provide suitable habitat, they will not survive. To provide good habitat, you need the following: 1. A good source of mud. My bees get mud from a clay pond forty feet from my nests. 2. Early spring sources of nectar. If you have fruit trees, stay away from chemicals, and have weeds such as dandelion, you should be fine. As an aside, I have a lot of Nanking Cherries. This tree is actually a miniature plum tree/ bush. It blooms first, so it is a good source of early season nectar. 3. Provide nesting sites. This may already be occurring, with wood boring insects creating sites for you or simply in hollow twigs. If not, you may want to drill 5/16” holes 4-8” deep in untreated wood blocks to provide nesting sites. The benefit of this is that it is easy and almost free if you have access to scrap wood. The negative is that, eventually they will stop using the block of wood as disease and pest pressures build, so you have to constantly put out new blocks each year or two. How to Raise Mason Bees 1. Purchase your initial stock of cocoons. 2. Place the cocoons next to your nesting site when you are ready for them to emerge. I waited until the first blooms on my Nanking Cherries opened. I kept them in a baby food jar with holes in the top for air in my root cellar before that, so they would not emerge too early. Be careful, if you get a few days in the 50’s, they will start to emerge. Some people keep them in the fridge, when they get close to the spring, but make sure they have enough humidity. 
They should not be kept in the fridge for more than 2 or 3 weeks. People do that by putting them in the vegetable drawer or with a damp sponge. 3. Your nesting site should be 5/16” tubes 4-8” deep. Some people use wooden blocks, but cardboard tubes with parchment paper liner is better for harvesting cocoons and avoiding disease and pest pressures. If you use the wooden blocks, you will not harvest the cocoons. If you use cardboard tubes with parchment paper, in the winter you will harvest the cocoons and line the cardboard tubes with fresh parchment paper. This will keep your bees pest free. 4. Once your tubes are filled and sealed by your mason bees, it is important to prevent parasitic wasps from entering. This is usually late-May here in Central PA. You can prevent the wasps from parasitizing your cocoons by sealing your nests with bug screening. 5. Winter is a good time to harvest your cocoons, clean them, and place them in a cool dark place, until early spring. It is important not to expose your cocoons to too much time in a warm environment, otherwise they can hatch. No more than an hour, so take a few tubes in at a time. I placed my cocoons in a baby jar with holes cut in the lid for air then I put the jar in my cold cellar. 6. Go back to #2. After doing the cardboard tubes with the parchment paper, I think it is too much trouble to harvest and clean cocoons. It’s not hard, but I feel like there has to be an easier way. I will still harvest my cocoons, and I think it is a good way to bring mason bees to your site, but I like the idea of providing the habitat for the bees to continue on their own. I’ve already got the mud with a pond, and plenty of early flowering plants. I will be on the lookout for blocks of natural wood that I can drill and place around my property. I know that these blocks of wood will only work for a couple of seasons, but if you have access to free natural wood, drilling out some blocks every other year is easier than harvesting, cleaning cocoons, and installing new parchment paper.
From Wikipedia, the free encyclopedia & *eHow.com Duckweeds, or water lentils, are aquatic plants which float on or just beneath the surface of still or slow-moving fresh water bodies. They arose from within the arum or aroid family, (Araceae), and therefore, often are classified as the subfamily Lemnoideae within the Araceae. Classifications created prior to the approximate end of the twentieth century tend to classify them as a separate family, Lemnaceae. These plants are very simple, lacking an obvious stem or leaves. They consist of a small ‘thalloid’ or plate-like structure that floats on or just under the water surface, with or without simple rootlets. The plants are highly reduced from their earlier relatives in Araceae. Reproduction is mostly by asexual budding, but occasionally three tiny ‘flowers’ consisting of two stamens and a pistil are produced and sexual reproduction occurs. Some view this ‘flower’ as a pseudanthium, or reduced inflorescence, with three flowers that are distinctly either female or male and which are derived from the spadix in Araceae. Anatomical research regarding the mechanics of this process has not been completed or remains ambiguous due to considerable evolutionary reduction of these plants from their earlier relatives. The flower of Wolffia is the smallest known flower in the world, measuring merely 0.3 mm long. The fruit produced through this occasional sexual reproduction is a utricle, and a seed is produced in a sac containing air that facilitates flotation. Duckweed in various environments Duckweed is an important high-protein food source for waterfowl and also is eaten by humans in some parts of Southeast Asia (as khai-nam). Sometimes it is cited as an overlooked source for application as a food for a hungry world that produces more protein than soybeans. Some duckweeds are introduced into freshwater aquariums and ponds where they may spread rapidly. This introduction may be deliberate or unintended and once established in a large pond, may be difficult to eradicate. Occurring naturally by being carried on the feathers, shells, and coats of native species, the plant is introduced readily by birds, turtles, reptiles, and aquatic mammals visiting multiple ponds, rivers, and lakes. In water bodies with constant currents or overflow, the plants are carried down the water channels and do not proliferate greatly. In some locations a cyclical pattern driven by weather patterns exists in which the plants proliferate greatly during low water flow periods, yet are carried away as rainy periods ensue. The tiny plants provide cover for fry of many aquatic species. The plants are used as shelter by pond water species such as bullfrogs and bluegills. They also provide shade and, although frequently confused with them, can reduce certain light-generated growths of photoautotrophic algae. The plants can provide nitrate removal, if cropped, and the duckweeds are important in the process of bioremediation because they grow rapidly, absorbing excess mineral nutrients, particularly nitrogen and phosphates. For these reasons they are touted as water purifiers of untapped value. 
The Swiss Department of Water and Sanitation in Developing Countries, SANDEC, associated with the Swiss Federal Institute for Environmental Science and Technology, asserts that as well as the food and agricultural values, duckweed also may be used for waste water treatment to capture toxins and for odor control, and, that if a mat of duckweed is maintained during harvesting for removal of the toxins captured thereby, it prevents the development of algae and controls the breeding of mosquitoes. The same publication provides an extensive list of references for many duckweed-related topics. These plants also may play a role in conservation of water because a cover of duckweed will reduce evaporation of water when compared to the rate of a similar size water body with a clear surface. The duckweeds long have been a taxonomic mystery, and usually have been considered to be their own family, Lemnaceae. They primarily reproduce asexually. Flowers, if present at all, are small. Roots are either very much reduced, or absent entirely. They were suspected of being related to the Araceae as long ago as 1876, but until the advent of molecular phylogeny it was difficult to test this hypothesis. Starting in 1995 studies began to confirm their placement in the Araceae and since then, most systematists consider them to be part of that family. Their position within their family has been slightly less clear, but several twenty-first century studies place them in the position shown below. They are not closely related to Pistia, however, which also is an aquatic plant in the family Araceae. The genera of duckweeds are: Spirodela, Landoltia, Lemna, Wolffiella, and Wolffia. In July 2008 the U.S. Department of Energy (DOE) Joint Genome Institute announced that the Community Sequencing Program would fund the sequencing of the genome of the giant duckweed, Spirodela polyrhiza. This was a priority project for DOE in 2009. The research is intended to facilitate new biomass and bio-energy programs. Duckweed is being studied by researchers around the world as a possible source of clean energy. In the United States, in addition to being the subject of study by the DOE, both Rutgers University and North Carolina State University have ongoing projects to determine if duckweed might be a source of cost-effective, clean, renewable energy. Duckweed is a good candidate as a biofuel because as a biomass it grows rapidly, has 5 to 6 times as much starch as corn, and does not contribute to global warming. Duckweed is considered a carbon neutral energy source, because unlike most fuels, it actually removes carbon dioxide from the atmosphere. Duckweed also functions as a bioremediator by effectively filtering contaminants such as bacteria, nitrogen, phosphates, and other nutrients from naturally occurring bodies of water, constructed wetlands and waste water. One study in Australia surrounding aquaculture suggests that although duckweed is initially effective as a nutrient filter, over time some nutrient build-up returns. Duckweed is the world’s smallest flowering plant. Roughly 40 species of duckweed exist and grow worldwide; these very small aquatic plants grow so rapidly that a single floating colony can double in size in less than 48 hours. Duckweed grows best in temperate and tropical climates, and prefers areas with little wave or wake, where it is sheltered from the wind; though it has also been found in areas with extreme temperatures and growing conditions.
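That doubling-time figure lends itself to a quick back-of-the-envelope calculation. The sketch below (Python) shows how fast exponential growth scales up a duckweed mat; the starting area of one square metre and the exact two-day doubling time are assumptions chosen only for illustration, not measurements.
# Rough exponential-growth sketch for a duckweed colony.
# Assumptions (illustrative only): coverage doubles every 2 days,
# starting mat is 1 square metre.
doubling_time_days = 2.0
start_area_m2 = 1.0

for day in (2, 7, 14, 30):
    area = start_area_m2 * 2 ** (day / doubling_time_days)
    print(f"day {day:>2}: ~{area:,.0f} m^2")
Under these assumptions a one-square-metre patch would, in principle, cover tens of thousands of square metres within a month, which is why unchecked colonies can blanket a pond so quickly.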
Duckweed plants remove a very high percentage of nutrients from the water during their growth cycle, and so are potentially valuable both as a food source and as a water purification system. Fresh duckweed is between 90 and 95 percent water; this is not surprising since it is an aquatic mass with a low enough density to float. To measure its other nutritional values, scientists look at its dry mass. Duckweed has gained the attention of agricultural specialists interested in its potential as a feed supplement for livestock because of its high protein value. Duckweed grown in cultivated conditions can contain up to 45 percent crude protein in its dry mass. The chemical makeup of these proteins, rich in the essential amino acids lysine and methionine, makes it compositionally more like an animal protein than a vegetable protein. The plant as it naturally occurs typically has a fiber content of between 15 and 30 percent. In ideal water conditions, however, duckweed can be cultivated with as little as 5 percent fiber in its nutritional composition. As growth conditions improve and the fiber mass is minimized, the total amount of protein in the plant is maximized.
Other Nutritional Elements
The dry mass of duckweed, when tested, contained between 1.8 and 9.2 percent lipid tissue, and between 14.1 and 43.6 percent carbohydrates. Cultured duckweed, specifically, has demonstrated larger concentrations of certain trace minerals and pigments, like beta carotene and xanthophyll, nitrogen and phosphorus. Because of its rapid growth and its nutritional composition, duckweed is being studied as a potential food source for poultry, hog, cattle, and human consumption. The fiber content of popular feed grains like soy and milo can be as high as 50 percent, which is not readily digestible. Duckweed, as a feed source, could be broken down and more completely consumed by the animal, increasing feed conversion rates. Also, the whole duckweed plant can be used as feed, saving the processing expenses and plant waste associated with feeding grain. In terms of dry mass grown per acre, duckweed could be grown on 10 percent of the space required to produce a similar amount of soybeans, and would only require 20 percent of the space required to grow the equivalent amount of corn.
Summary of Learning Approaches
In exploring adult learning there are several key factors to consider when thinking about how people learn and the ways in which they make meaning of information and experience. The first is the approach to learning. This can occur on different levels, the most significant distinction being between a deep learning approach and a surface learning approach.
- Learning to specifically meet course requirements
- Studying unrelated bits of knowledge
- Memorising facts and figures to repeat
- No linking or connection of learning
The surface approach to learning comes from “the intention to get the task out of the way with minimum trouble while appearing to meet course requirements” (Biggs, 2003, p14). This often includes rote learning content, filling an essay with detail rather than discussion, and listing points rather than providing background or context to the work.
- Learning that seeks to understand and connect the concepts
- Relates ideas to previous knowledge and experience
- Explores links between evidence and conclusions
- Critiques arguments and examines rationale
The deep approach comes “from a felt need to engage the task appropriately and meaningfully, so the student tries to use the most appropriate cognitive activities for handling it” (Biggs, 2003, p16). Using this approach, students make a real effort to connect with and understand what they are learning. This requires a strong base knowledge for students to then build on, seeking both detailed information and trying to understand the bigger picture.
- Learning to achieve highest possible grades in a course
- Focused on assessment requirements and criteria
- Effort to understand knowledge to demonstrate learning
- Focused on perceived preferences of lecturer
Strategic learning can be considered a balance between the other two approaches. Some may place a negative connotation on surface learning whilst viewing deep learning in a more positive light, but there is a place for surface learning to lay a base of knowledge or terminology for deep learning to build on.
How do you view the approaches to learning in your own context? Can you think of examples of where surface, deep and strategic learning occur in your own context?
Further Reading and Links
This is for those seeking more information; it is not core course material.
Approaches to Study “Deep” and “Surface” - an easy-to-read site described by its author, James Atherton, as a "quick and dirty" overview exploring deep and surface approaches to learning
Deep and Surface Approaches to Learning - a page from within The Higher Education Academy's UK website that provides another perspective and more information, although it does take the crude viewpoint that "deep is good, surface is bad, and we should teach in a way that encourages students to adopt a deep approach; although achieving this is not so easy".
Biggs, J. (2003). Teaching for quality learning at University (2nd ed.). London: The Society for Research into Higher Education & Open University Press.
Return to the GFS main course page
From Latin: tangere "to touch," A line that contacts an arc or circle at only one point. (See also Tangent (tan) function in a right triangle - trigonometry). The blue line in the figure above is called the "tangent to the circle c". Another way of saying it is that the blue line is "tangential" to the circle c. (Pronounced "tan-gen-shull"). The line barely touches the circle at a single point. If the line were closer to the center of the circle, it would cut the circle in two places and would then be called a secant. In fact, you can think of the tangent as the limit case of a secant. As the secant line moves away from the center of the circle, the two points where it cuts the circle eventually merge into one and the line is then the tangent to the circle. As can be seen in the figure above, the tangent line is always at right angles to the radius at the point of contact. Tangents to two circles Given two circles, there are lines that are tangents to both of them at the same time. If the circles are separate (do not intersect), there are four possible common tangents: If the two circles touch at just one point, there are three possible tangent lines that are common to both: If the two circles touch at just one point, with one inside the other, there is just one line that is a tangent to both: If the circles overlap - i.e. intersect at two points, there are two tangents that are common to both: If the circles lie one inside the other, there are no tangents that are common to both. A tangent to the inner circle would be a secant of the outer circle. In trigonometry, the tangent of an angle in a right triangle is the ratio of the opposite side to the adjacent side. See Tangent (tan) function in a right triangle - trigonometry. In calculus, a line is a tangent to a curve if, at the single point of contact, it has the same slope as the curve. (C) 2011 Copyright Math Open Reference. All rights reserved
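Because the key fact above is that a tangent meets the radius at a right angle, a short numeric check can make it concrete. The sketch below (Python) builds the tangent direction at a point on a circle and confirms it is perpendicular to the radius; the circle centre, radius and angle are made-up values chosen only for illustration.
import math

# Hypothetical circle: centre (cx, cy), radius r, point of tangency at angle theta.
cx, cy, r = 2.0, -1.0, 5.0
theta = math.radians(35)

# Point of tangency on the circle.
px = cx + r * math.cos(theta)
py = cy + r * math.sin(theta)

# Radius direction, and the tangent direction (the radius rotated by 90 degrees).
radius_dir = (px - cx, py - cy)
tangent_dir = (-radius_dir[1], radius_dir[0])

# The dot product of perpendicular vectors is zero.
dot = radius_dir[0] * tangent_dir[0] + radius_dir[1] * tangent_dir[1]
print("dot product of radius and tangent directions:", round(dot, 10))  # ~0.0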
Two months after the oil leak in the Gulf of Mexico began gushing, the scale of the disaster has only increased. Sometimes scale can be difficult to visualize from news stories, but these oil spill visualization tools can help! Parents: Talk with your kids about how the size of the spill compares to geographic areas they might be familiar with (e.g. your county, the size of the national park you visited on vacation last year, etc.) Teachers: Have students try to identify land areas that might be the same size as the oil spill (e.g. small U.S. states, islands, and European countries), and then crunch the numbers to see how their guesses measure up.
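One way to "crunch the numbers" is sketched below in Python. The spill-area figure used here is a placeholder rather than an official estimate (reported surface extents varied widely from day to day), and the state areas are approximate reference values in square miles; the point is the comparison method, not the exact numbers.
# Compare an assumed oil-spill surface area against familiar land areas.
# spill_area_sq_mi is a placeholder, NOT an official figure; the state
# areas are approximate totals in square miles.
spill_area_sq_mi = 10_000  # hypothetical surface extent

reference_areas = {
    "Delaware": 2_489,
    "Connecticut": 5_543,
    "New Jersey": 8_723,
    "Maryland": 12_406,
}

for name, area in reference_areas.items():
    ratio = spill_area_sq_mi / area
    print(f"The assumed spill area is about {ratio:.1f} times the size of {name}")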
What is an "event"? When a proton from the Tevatron hits an antiproton from the Tevatron, we call that an event. As the particles from the collision travel through the detector, they (or most of them) interact with the detection equipment. Physicists write computer programs that translate these "hits" into a schematic called an event display. The schematic on the left represents the detector as if you were looking at it from inside the beam pipe. Each of the lines you see is the path of a charged particle that was produced inside the detector.
What is a track? Some parts of the detector at CDF are able to "see" a charged particle at several points along its trajectory. Physicists write computer programs that will turn these disparate hits into a full trajectory, or track, for each particle.
Two ways to view an event display that illustrates the energies collected by the CDF calorimeters: in the top image, energies are recorded based on their angle to the beam pipe (which runs along the line marked "0"); the bottom diagram shows the energies relative to a plane perpendicular to the beam pipe, as if you were staring straight into the beam. Other parts of the detector determine the total energy of the electrons, photons, and hadrons that hit them. Instead of a track, the results from these detectors are displayed as a bar graph, where a larger bar corresponds to particles with higher energy. An event display for the calorimeters, for example, could look something like the images on the right. The pink bars in both diagrams correspond to energy collected by the electromagnetic calorimeters, the blue to energy collected by the hadronic calorimeters. Notice that particles that leave very straight tracks tend to correspond to areas of high energy. This is because charged particles with greater energies do not react as greatly to the magnetic field set up within the detector. (For more information, read about momentum.) With eight million events happening each second, you might wonder how physicists keep the particles produced by one collision separate from the particles produced by the next. Each part of the detector is designed to transmit a "hit" to computers as quickly as possible. Now you're ready to visit the first part of the detector. You may find it useful to return to this page when you have completed the tour.
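The idea of turning hits into a track can be illustrated with a toy reconstruction. The sketch below (Python with numpy) fits one straight line through a handful of invented hit coordinates; it is only a simplified picture of the step described above, since real CDF tracks curve in the magnetic field and are fitted to helices rather than straight lines.
import numpy as np

# Toy "hits" left by one charged particle crossing successive detector layers
# (x = layer position, y = measured coordinate; values invented, with a little
# noise to mimic measurement uncertainty).
hit_x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
hit_y = np.array([0.52, 0.98, 1.55, 2.03, 2.46])

# Fit one straight-line trajectory through all the hits (least squares).
slope, intercept = np.polyfit(hit_x, hit_y, 1)
print(f"reconstructed track: y = {slope:.2f} * x + {intercept:.2f}")

# The fitted line is the "track"; each hit should lie close to it.
residuals = hit_y - (slope * hit_x + intercept)
print("hit-to-track residuals:", np.round(residuals, 3))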
frequency selection to be accomplished in a different manner. It gives the circuit different characteristics. The first of these characteristics is the ability to store energy. The Characteristics of a Typical Parallel-Resonant Circuit Look at figure 1-11. In this circuit, as in other parallel circuits, the voltage is the same across the inductor and capacitor. The currents through the components vary inversely with their reactances in accordance with Ohm's law. The total current drawn by the circuit is the vector sum of the two individual component currents. Finally, these two currents, IL and IC, are 180 degrees out of phase because the effects of L and C are opposite. There is not a single fact new to you in the above. It is all based on what you have learned previously about parallel a.c. circuits that contain L and C. Figure 1-11.Curves of impedance and current in an RLC parallel-resonant circuit. Now, at resonance, XL is still equal to X C. Therefore, IL must equal IC. Remember, the voltage is the same; the reactances are equal; therefore, according to Ohm's law, the currents must be equal. But, don't forget, even though the currents are equal, they are still opposites. That is, if the current is flowing "up" in the capacitor, it is flowing "down" in the coil, and vice versa. In effect, while the one component draws current, the other returns it to the source. The net effect of this "give and take action" is that zero current is drawn from the source at resonance. The two currents yield a total current of zero amperes because they are exactly equal and opposite at resonance. A circuit that is completed and has a voltage applied, but has zero current, must have an INFINITE IMPEDANCE (apply Ohm's law any voltage divided by zero yields infinity).
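A worked example with numbers makes the "give and take" at resonance concrete. The minimal Python sketch below uses assumed component values and an assumed source voltage (they are not taken from the figure): XL equals XC at the resonant frequency, so IL equals IC, and the net current drawn from the source is essentially zero.
import math

# Assumed component values and source voltage (illustrative only).
L = 100e-6      # inductance in henrys
C = 253.3e-12   # capacitance in farads
V = 10.0        # applied voltage in volts

# Resonant frequency: f0 = 1 / (2*pi*sqrt(L*C))
f0 = 1 / (2 * math.pi * math.sqrt(L * C))

# Reactances at resonance.
XL = 2 * math.pi * f0 * L
XC = 1 / (2 * math.pi * f0 * C)

# Branch currents by Ohm's law; equal in magnitude but 180 degrees out of phase.
IL = V / XL
IC = V / XC
print(f"f0 = {f0/1e6:.3f} MHz, XL = {XL:.1f} ohms, XC = {XC:.1f} ohms")
print(f"IL = IC = {IL*1000:.2f} mA, net line current ~ {abs(IL - IC)*1000:.6f} mA")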
Mars has a thin atmosphere — too thin to easily support life as we know it. The extremely thin air on Mars can also become very dusty. What is Mars' atmosphere made of? The atmosphere of Mars is about 100 times thinner than Earth's, and it is 95 percent carbon dioxide. Here's a breakdown of its composition: - Carbon dioxide: 95.32 percent - Nitrogen: 2.7 percent - Argon: 1.6 percent - Oxygen: 0.13 percent - Carbon monoxide: 0.08 percent - Also, minor amounts of: water, nitrogen oxide, neon, hydrogen-deuterium-oxygen, krypton and xenon Climate and weather Mars is much colder than Earth, in large part due to its greater distance from the sun. The average temperature is about minus 80 degrees F (minus 60 degrees C), although it can vary from minus 195 degrees F (minus 125 degrees C) near the poles during the winter to as much as a comfortable 70 degrees F (20 degrees C) at midday near the equator. The atmosphere of Mars is also roughly 100 times thinner than Earth's, but it is still thick enough to support weather, clouds and winds. Giant dust devils routinely kick up the oxidized iron dust that covers Mars' surface. The dust storms of Mars are the largest in the solar system, capable of blanketing the entire planet and lasting for months. One theory as to why dust storms can grow so big on Mars starts with airborne dust particles absorbing sunlight, warming the Martian atmosphere in their vicinity. Warm pockets of air flow toward colder regions, generating winds. Strong winds lift more dust off the ground, which in turn heats the atmosphere, raising more wind and kicking up more dust. At times, it even snows on Mars. The Martian snowflakes, made of carbon dioxide rather than water, are thought to be about the size of red blood cells. The north and south polar regions of Mars are capped by ice, much of it made from carbon dioxide, not water. Possibility of Life Mars could have once harbored life. Some conjecture that life might still exist there today. A number of researchers have even speculated that life on Earth might have seeded Mars, or that life on Mars seeded Earth. Oceans may have covered the surface of Mars in the past, providing an environment for life to develop. Although the red planet is a cold desert today, researchers suggest that liquid water may be present underground, providing a potential refuge for any life that might still exist there. Several studies have shown that there is abundant water ice beneath the surface. — Tim Sharp, Reference Editor - How Big is Mars? - How Far Away is Mars? - What is Mars Made Of? - What is the Temperature of Mars? - How Was Mars Made? - Photos of Mars: The Amazing Red Planet - Mars the Red Planet: Latest News and Discoveries
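The Celsius and Fahrenheit figures quoted above are rounded, which is easy to verify with the standard conversion formula. The short Python check below uses the Celsius values from the text.
# Convert the quoted Celsius temperatures to Fahrenheit: F = C * 9/5 + 32.
for label, c in [("average", -60), ("polar winter", -125), ("equatorial midday", 20)]:
    f_deg = c * 9 / 5 + 32
    print(f"{label}: {c} C is {f_deg:.0f} F")
# -60 C -> -76 F, -125 C -> -193 F, 20 C -> 68 F, which are close to the
# rounded figures (-80 F, -195 F, 70 F) given above.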
It’s summer in the northern hemisphere, which means picnics, sunscreen, and lots of time outdoors. At least … until it gets too hot and everyone wants to go back inside. Depending on where you live, heat and humidity can become really unbearable – even dangerous. And things like electric fans, or a trip to the pool, can help you keep cool. But in many places, you can just duck into an air-conditioned building. We take air conditioners for granted, but they’re one of the most influential inventions of the 20th century. They’ve changed how we work, where we spend our free time, and even where people can live in the first place. As far back as the 19th century, inventors were experimenting with fans and ice to create some sort of machine that cooled air. Because, everyone wants to be comfy. But the reason for inventing air conditioning was, ultimately, an economic one: In 1902, a publishing company in New York was having problems with the hot, humid summer air wrinkling its magazine pages, blurring its ink and messing up its printing machinery. So it asked young engineer Willis Carrier to figure out a solution. Carrier came up with an “Apparatus for Treating Air,” which was designed to purify the air inside a building and adjust the amount of water vapor in it – the humidity. His apparatus worked by using a fan to push air through multiple chambers. In one chamber, air could be sprayed with water, before moving through plates that acted as filters to take out dust. Then, the air could blow over a set of coiled pipes with a chemical inside that could either heat or cool the air, to alter its humidity. If the chemical was a coolant, it made the air colder, and caused some of the water vapor in the air to condense into a liquid – lowering the air’s humidity. Over the decades, lots of engineers refined this technology, and eventually the first compact, modern air conditioner was made. Today, the air conditioner you might have in your house or office works basically like Carrier’s. And it’s all based on the thermodynamic fact that heat energy flows from hot areas to cold areas. The a/c you have sticking in your window right now basically uses a fluid – which can be either a liquid or a gas – to move the heat that’s inside your room and take it outside. That fluid is the key. It can be one of any number of chemicals that converts easily between a gas and a liquid, and the rest of your a/c unit just moves that chemical around, making it either expand or compress, at just the right times. The system starts with a fan that blows the warm air in your room over a coil that’s filled with the liquid. The liquid is cooler than the ambient air, so the heat from the warm air transfers into the liquid, and then a fan blows the cooled air back into your room. At the same time, warm water vapor also condenses on these cool coils and collects in a drain pan, which reduces the humidity, and causes the dripping you often see from some a/c units. But all the heat that’s been absorbed has to go somewhere. And when the liquid has received enough heat energy, it evaporates – turns into a gas. That gas in the coils is then funneled into another section of the a/c, where it’s compressed back into a liquid. And as the chemical condenses, it releases heat energy – that heat is then transferred to the outside air that’s blowing over the coils. Then, the coolant is ready to cycle back through and remove more heat. 
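A rough sense of the energy balance in that cycle can be put into numbers. The sketch below (Python) uses assumed values: a window unit drawing 500 watts of electrical power with a coefficient of performance (COP) of 3. Neither number comes from this explanation, and real units vary; the sketch only illustrates that the heat moved out of the room exceeds the electricity consumed.
# Rough energy-balance sketch for a small air conditioner.
# Assumed values, for illustration only.
electrical_power_w = 500.0   # power drawn from the wall, in watts
cop = 3.0                    # coefficient of performance (heat moved / work in)

heat_removed_w = cop * electrical_power_w              # heat pulled from the room
heat_rejected_w = heat_removed_w + electrical_power_w  # heat dumped outdoors

print(f"Heat removed from the room: ~{heat_removed_w:.0f} W")
print(f"Heat rejected outside:      ~{heat_rejected_w:.0f} W")
# The outdoor coil always rejects more heat than was removed from the room,
# because the compressor's work also ends up as heat.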
Initially, air conditioners like Carrier’s used chemicals like ammonia or propane to transfer heat, but these were really dangerous when they leaked. By the mid-1930s, most refrigerators and air conditioners used chlorofluorocarbons – sometimes known under the trademark Freon – which were non-flammable, and safer to use. But scientists later realized that these compounds can release chlorine that reacts with ozone molecules in the atmosphere, depleting the Earth’s ozone layer, which protects us from UV radiation from the sun. So, in the 1980s, manufacturers gradually stopped using CFCs and replaced them with hydroflurocarbons or HFCs, which are similar to CFCs, but don’t contain chlorine. So that’s how a relatively simple idea invented for a New York print shop transformed the world – from California to the Arabian Peninsula. Of course, today’s a/c units aren’t without their drawbacks – many refrigerants, if they leak into the atmosphere, act as potent greenhouse gases, and the amount of energy that air conditioners use is a real challenge for many communities. That’s why new standards are evolving, and more energy efficient technologies are being developed. These include air conditioners that can use heat pumps, and water, instead of chemicals like HFCs as a refrigerant. There are even designs in the works that take out the fluid-filled coils altogether, and use metal rods that are heated and cooled using magnetic fields. So, the future is still looking pretty cool. It just might be cool in different and better ways. Thanks for watching this episode of SciShow, and thanks to Emerson for sponsoring it. If you want to keep getting smarter with us, just go to youtube.com/scishow and subscribe..
Vocabulary Set 4 Test on October 31st 1. author’s purpose – the reason an author decides to write about a specific topic The three main purposes we will discuss in class are PIE (persuade, inform, and entertain). BE ABLE TO TELL THE PURPOSE OF A PARTICULAR PARAGRAPH. 2. purpose – a plan, cause, or reason for something 3. persuade – used to convince or influence **the author will give facts or examples to support his/her opinion ****Examples might include advertisements, commercials, newspaper editorials, etc. Example of an author trying to persuade you: You certainly shouldn’t watch too much television because it’s a waste of time, there’s too much violence shown, and you miss valuable time with friends and family. 4. inform – to give important information by supplying facts; the facts are used to teach, NOT to persuade. ****Examples might include textbooks, cookbooks, newspapers, encyclopedias, school newsletters, research papers, instructions, maps, graphs, etc. Example of an author informing the reader: While on vacation, our family experienced seeing a beluga whale. We knew it was a rare opportunity, so we felt blessed. Our tour guide told us that the beluga, or white whale, is one of the smallest species of whales. Their unique color and rounded foreheads make them easily identifiable. We also learned that they have no dorsal fin. 5. entertain – to tell a story or describe real or imaginary characters, places, and events (can be humorous to make the readers laugh and makes the reader feel emotions like happy, scared, sad, etc.) ****Examples might include fictional stories, poems, stories, plays, comic strips, jokes, riddles, etc. Example of an author entertaining the reader: One time my friends and I went to Six Flags. We rode almost every ride in the amusement park. Our favorite ride was Mr. Freeze. I thought I was going to faint when I realized it also went backwards. 6. justify – to provide evidence or give reasons for (PROOF) 7. main idea – A main idea is important information that tells more about the overall idea of a paragraph or section of a text. 8. details – specific details that support the main idea 9. cause – an event or action that causes something else to happen BECAUSE THE GIRL WAS COLD (CAUSE), she turned up the heat (effect). 10. effect – An event or action that happened as a result of another event or action. It answers the question “what happened?” I was hungry (cause), SO I ATE (EFFECT). bar graph – a graph that shows data using bars frequency table – a table that shows data using numbers sum – the answer to an addition problem pictograph – a graph that shows data using pictures or symbols life cycle – a series of changes that something goes through during its life
An international team of scientists led by the National Center for Atmospheric Research (NCAR) has created the first-ever comprehensive computer model of sunspots. The resulting images capture the necessary scientific detail but highlight a remarkable, usually unseen beauty. So far, scientists have determined that sunspots are linked with massive ejections of charged plasma that can cause extremely powerful geomagnetic storms that can disrupt communications and navigational systems. Sunspots, first studied by Galileo, can also affect weather and influence subtle changes in climate patterns on Earth. Variations in solar output are also attributed to sunspots. The high-resolution simulations of sunspot pairs will hopefully lead researchers to learn more about the vast mysterious dark patches on the sun's surface. "This is the first time we have a model of an entire sunspot," says lead author Matthias Rempel, a scientist at NCAR's High Altitude Observatory. "If you want to understand all the drivers of Earth's atmospheric system, you have to understand how sunspots emerge and evolve. Our simulations will advance research into the inner workings of the sun as well as connections between solar output and Earth's atmosphere." "Understanding complexities in the solar magnetic field is key to 'space weather' forecasting," says Richard Behnke of NSF's Division of Atmospheric Sciences. "If we can model sunspots, we may be able to predict them and be better prepared for the potential serious consequences here on Earth of these violent storms on the sun." The sunspot research was supported by the National Science Foundation (NSF), NCAR's sponsor. Outward flows from the center of sunspots were first discovered 100 years ago. Since then scientists have worked toward explaining the complex structure of sunspots, whose numbers peak and wane during the 11-year solar cycle. Before the latest generation of supercomputers, modeling in this detail had been impossible. Now scientists are able to capture the convective flow and movement of energy in the sunspots, which is not directly detectable by instruments. The research team improved a computer model, developed at MPS, that built upon numerical codes for magnetized fluids that had been created at the University of Chicago.
How it works
Scientists working on this project have developed new simulations that capture pairs of sunspots with opposite polarity. They reveal the dark central region, or umbra, with brighter umbral dots, as well as webs of elongated narrow filaments with flows of mass streaming away from the spots in the outer penumbral regions. The authors conclude that there is a unified physical explanation for the structure of sunspots in umbra and penumbra that is the consequence of convection in a magnetic field with varying properties. The research team designed a virtual, three-dimensional domain that simulates an area on the sun measuring about 31,000 miles by 62,000 miles and about 3,700 miles in depth (which equals an expanse as long as eight times Earth's diameter and as deep as Earth's radius).
The scientists then used a series of equations involving fundamental physical laws of energy transfer, fluid dynamics, magnetic induction and feedback, and other phenomena to simulate sunspot dynamics at 1.8 billion points within the virtual expanse, each spaced about 10 to 20 miles apart. For weeks, they solved the equations on NCAR's new Bluefire supercomputer, an IBM machine that can perform 76 trillion calculations per second. The accuracy of the modeling was verified by a large network of ground- and space-based instruments. The new model is far more detailed and realistic than previous simulations that failed to capture the complexities of the outer penumbral region. The researchers noted, however, that even their new model does not accurately capture the lengths of the filaments in parts of the penumbra. This can only be completed when even more computing power is available. "Advances in supercomputing power are enabling us to close in on some of the most fundamental processes of the sun," says Michael Knoelker, director of NCAR's High Altitude Observatory and a co-author of the paper. "With this breakthrough simulation, an overall comprehensive physical picture is emerging for everything that observers have associated with the appearance, formation, dynamics, and the decay of sunspots on the sun's surface."
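The quoted grid size can be sanity-checked with simple arithmetic. The Python sketch below divides the stated domain by spacings inside the 10-to-20-mile range given above; the particular spacings chosen are assumptions, and the point is only that the result lands near the stated 1.8 billion points.
# Sanity check of the grid size quoted in the article.
# Domain: about 31,000 x 62,000 miles horizontally, 3,700 miles deep.
# Spacings below are assumed values within the 10-20 mile range stated above.
width_mi, length_mi, depth_mi = 31_000, 62_000, 3_700
horizontal_spacing_mi = 20   # assumed
vertical_spacing_mi = 10     # assumed

nx = width_mi // horizontal_spacing_mi
ny = length_mi // horizontal_spacing_mi
nz = depth_mi // vertical_spacing_mi
total_points = nx * ny * nz
print(f"{nx} x {ny} x {nz} = {total_points:,} grid points (~1.8 billion quoted)")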
Discerning sources to be used in a formal research paper or essay is one of the most crucial aspects of writing. Not all sources are reputable nor do they carry equal weight. This is true for print sources (books and journals) as well as internet websites. While the task of determining the reliability of sources may seem monumental, certain precautions will ensure that referenced material carries authority.
Books and Encyclopedias
When cataloging books obtained for research, begin with the author and publisher. To what extent does the author have expertise in the subject? A biography on Thomas Jefferson written by a college professor may carry more weight than a similar book written by someone without academic credentials tied to American History. Barbara Tuchman, a notable exception, never took a degree in history, yet her books, written for the mass public, are meticulously researched and documented. Determining whether a book is an acceptable source can be based on several observable elements: Can you obtain a brief biography of the author, either from the dust jacket or the internet? What primary and secondary sources were used by the author? Is there a bibliography and an index? Who published the book? Mass media encyclopedias such as World Book and Britannica should not be used as paper sources. However, encyclopedias geared toward specific research areas are permitted: Encyclopedia of Slavic History, International Standard Bible Encyclopedia, and other similar works. Some mass media encyclopedic entries do not include the name of the author. A cardinal rule when researching a paper topic is to avoid any article that does not have an author.
Internet Web Sites and Journals
Students with access to academic databases should have no problem finding reputable articles. Sites such as JSTOR feature excellently written and researched articles. Students without these internet resources can still find acceptable on-line sources. Internet bibliographic sites, such as Besthistorysites, provide students with scholarly sites, many maintained by universities or organizations dedicated to the topic being researched. URLs ending with edu or org are usually considered safe. Sites such as Spark Notes or Spartacus UK should never be used. Additionally, many instructors frown on the use of Wikipedia. Writing in the Chronicle of Higher Education, Brock Read noted that many professors believe that Wikipedia “devalues the notion of expertise” and contains a “dearth of scholarly contributions.” He cites several examples of false entries.
Recognizing Primary and Secondary Sources
American Heritage, Smithsonian Magazine, and National Geographic are among many internet sites that allow users to read archived articles for free. Although written for a non-academic audience, such publications may contain well written articles by professionals that are deemed acceptable in a paper. The internet can be a useful source depending upon the research topic. The Colonial Williamsburg Journal has free archives full of articles pertaining to Colonial American History. Archaeology Today makes available hundreds of free articles on ancient world topics. Writing about Islam or the Middle East? Saudi Aramco World has an archive of superbly researched articles (the print editions are free). Most every text book includes primary source discs or on-line access to primary source documents. Used correctly, primary source documents enhance the research quality of the paper.
Publications such as The New York Times, Spectator, and The Nation are excellent sources of original articles whether the writer is looking for a first-hand account of the Battle of Gettysburg or the London Blitz. Some publications may charge a small fee to obtain article access. Finding appropriate paper sources is not difficult if students begin the research process early and avoid frivolous last-minute internet searches that may not provide quality and reliability.
Practice Problems Answers
(1) Identify the standard state (solid, liquid or gas) for the following elements:
(3) This question uses the NIST Chemistry WebBook. What is the value for the standard enthalpy of formation, ΔH°f, for the following substances:
(a) ΔH°f = −277.0 kJ/mol
(b) ΔH°f = −483.52 kJ/mol
(c) ΔH°f = −411.12 kJ/mol
The trick: in the WebBook, the first value given is for NaCl as a liquid. However, the standard state for NaCl is solid, so the standard enthalpy is actually NOT the first one listed.
(4) Write the full chemical equation of formation for the substances in question 3.
(a) 2C (s, graphite) + 3H2(g) + 1⁄2O2(g) ---> C2H5OH(ℓ)
(b) 2C (s, graphite) + 2H2(g) + O2(g) ---> CH3COOH(ℓ)
(c) Na(s) + 1⁄2Cl2(g) ---> NaCl(s)
Reminder that in the answers for number 4, the product is always written with a coefficient of one.
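Formation enthalpies like these are mainly useful for computing reaction enthalpies via Hess's law: ΔH°rxn = Σ ΔH°f(products) − Σ ΔH°f(reactants). The Python sketch below applies this to the combustion of the ethanol from question 3(a); the ΔH°f values for CO2(g) and H2O(l) are commonly tabulated textbook figures supplied here for the example, not values taken from this problem set.
# Hess's law: dH_rxn = sum(dHf of products) - sum(dHf of reactants)
# Combustion of ethanol: C2H5OH(l) + 3 O2(g) -> 2 CO2(g) + 3 H2O(l)
dHf = {                     # standard enthalpies of formation, kJ/mol
    "C2H5OH(l)": -277.0,    # from question 3(a)
    "O2(g)": 0.0,           # element in its standard state
    "CO2(g)": -393.5,       # common textbook value (assumed here)
    "H2O(l)": -285.8,       # common textbook value (assumed here)
}

products = {"CO2(g)": 2, "H2O(l)": 3}
reactants = {"C2H5OH(l)": 1, "O2(g)": 3}

dH_rxn = (sum(n * dHf[s] for s, n in products.items())
          - sum(n * dHf[s] for s, n in reactants.items()))
print(f"dH of combustion of ethanol ~ {dH_rxn:.1f} kJ/mol")  # about -1367 kJ/mol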
In conjunction with the seventh annual Digital Learning Day, Future Ready Schools (FRS), an initiative of the Alliance for Excellent Education (All4Ed), released a new guide for school districts interested in using “blended learning” to support their approach to instruction. Blended learning is a variety of practices and strategies that combine online learning with in-person instruction from classroom teachers. Blending Teaching and Technology: Simple Strategies for Improved Student Learning, showcases Lindsay Unified School District (LUSD). In this highly mobile rural district in California’s Central Valley, 86 percent of students come from low-income families and more than half are English language learners. The guide identifies potential challenges and opportunities districts may face and offers practical strategies for implementing blended learning aligned with seven key planning areas, known as the FRS “gears”: - Curriculum, instruction, and assessment - Personalized professional learning - Budget and resources - Community partnerships - Data and privacy - Robust infrastructure - Use of space and time Download Blending Teaching and Technology: Simple Strategies for Improved Student Learning at FutureReady.org/blendedlearning.
Earth Science Literacy Framework Big Ideas of Earth Science Earth is our home. We rely upon it for our existence in many different ways. Our planet's rocks, soils, and the chemical, physical, and biological processes that create and transform them, on the continents and beneath the oceans, produce resources and materials that sustain our way of life. Even modest changes to the Earth system, of which these are a part, have had profound influences on human societies and the course of civilization. It is important to understand the Earth sciences - to be Earth science literate - at this time in history. Many challenges facing humanity, from dwindling energy and mineral resources, to water shortages and changing global climate, directly relate to the Earth sciences. There are many difficult decisions that governments, local and national, will have to make concerning these issues. We need citizens and governments that are Earth science literate to create policies that appropriately weigh the importance of resource conservation, use, and sustainability. This Earth system science literacy guide identifies the Big Ideas and fundamental concepts that individuals and communities should understand to make informed decisions. Earth science literacy is very important if we are to understand how the entire Earth system and our climate function. For more information on this effort, please visit the Earth Science Literacy Initiative web site. In addition, the Earth Science Literacy Framework has been aligned with the National Science Education Standards. - Big Idea 1: Earth scientists use repeatable observations and testable ideas to understand and explain our planet. - Big Idea 2: Earth is 4.6 billion years old. - Big Idea 3: Earth is a complex system of interacting rock, water, air, and life. - Big Idea 4: Earth is continuously changing. - Big Idea 5: Earth is the water planet. - Big Idea 6: Life evolves on a dynamic Earth and continuously modifies Earth. - Big Idea 7: Humans depend on Earth for resources. - Big Idea 8: Natural hazards pose risks to humans. - Big Idea 9: Humans significantly alter the Earth.
What is a PL/SQL Record data type? A record data type represents a row in a database table: it lets you group related fields of different types into a single composite variable.
Define and declare Records
A record has its own name and type and stores a group of related data items. A record can be declared with %ROWTYPE (so that its structure matches a table row or a cursor row) or with a programmer-defined record type. Example (the field list shown here is illustrative):
TYPE employeeRec IS RECORD (employee_code NUMBER(5), employee_name VARCHAR2(40));
Different Types of Records - Table-based, Cursor-based, Programmer-defined
Table-based records :- Such records are based on tables. This means that the structure of the record is based on the structure of the table; each record field corresponds to a column of the table. Example table structure:
CREATE TABLE salary (employee_code NUMBER(5), ... );
Let's say a record for the above table were named salary_rec (declared as salary_rec salary%ROWTYPE); the fields would then each be referred to as salary_rec.employee_code, and so on.
Cursor-based records :- Such records are based on the select list of a cursor. Each field in the record is connected to a column in the cursor query. Example:
<record_name> <cursor_name>%ROWTYPE;
Programmer-defined records :- These are defined by the programmer and have nothing to do with cursors or tables. To define one, first create a record TYPE containing the structure, then declare actual records of that type. Example:
TYPE <type_name> IS RECORD (<field_name> <datatype>, ...);
Benefits of using Records
They help you treat data as logical units. This makes it easier to organize and represent information.
Guidelines for using Records
a) Nested record types are not supported
b) ROW cannot be used with a subquery.
c) Record variables are not allowed in select, where or group by clauses
d) The keyword ROW is allowed only on the left side of a SET clause
Rules you must follow for referencing a record in its entirety or a particular field in the record: fields in a record are accessed by name. You must always use the fully qualified name of a field (record_name.field_name) when referencing that field. There is no need to use dot notation when you reference the record as a whole; you simply provide the name of the record.
Emperor Heraclius defended the Byzantine Empire from the Persians, but lost the reconquered land to the Arabs shortly thereafter. Identify the reason for the reduction in size of the Byzantine Empire - After Justinian, the Byzantine Empire continued to lose land to the Persians. - Emperor Heraclius seized the throne in 610 CE, and beat back the Persians by 628 CE. - However, after Heraclius’ victory against the Persians, he had taken such losses that he was unable to defend the empire against the Arabs, and so they again lost the lands they had just reconquered by 641 CE. - Heraclius tried to unite all of the various religious factions within the empire with a new formula that was more inclusive and more elastic, called monothelitism, which was eventually deemed heretical by all factions. - Muhammad: The central figure of Islam, widely regarded as its founder. - Monothelitism: The view that Jesus Christ has two natures but only one will, a doctrine developed during Heraclius’ rule to bring unity to the Church. Conflict with the Persians and Chaos in the Empire Ever since the fall of the Western Roman Empire, the Eastern Roman Empire had continued to see western Europe as rightfully Imperial territory. However, only Justinian I attempted to enforce this claim with military might. Temporary success in the west was achieved at the cost of Persian dominance in the east, where the Byzantines were forced to pay tribute to avert war. However, after Justinian’s death, much of newly recovered Italy fell to the Lombards, and the Visigoths soon reduced the imperial holdings in Spain. At the same time, wars with the Persian Empire brought no conclusive victory. In 591 however, the long war was ended with a treaty favorable to Byzantium, which gained Armenia. Thus, after the death of Justinian’s successor, Tiberius II, Maurice sought to restore the prestige of the Empire. Even though the empire had gained smaller successes over the Slavs and Avars in pitched battles across the Danube, both enthusiasm for the army and faith in the government had lessened considerably. Unrest had reared its head in Byzantine cities as social and religious differences manifested themselves into Blue and Green factions that fought each other in the streets. The final blow to the government was a decision to cut the pay of its army in response to financial strains. The combined effect of an army revolt led by a junior officer named Phocas and major uprisings by the Greens and Blues forced Maurice to abdicate. The Senate approved Phocas as the new emperor, and Maurice, the last emperor of the Justinian Dynasty, was murdered along with his four sons. The Persian King Khosrau II responded by launching an assault on the empire, ostensibly to avenge Maurice, who had earlier helped him to regain his throne. Phocas was already alienating his supporters with his repressive rule (introducing torture on a large scale), and the Persians were able to capture Syria and Mesopotamia by 607. While the Persians were making headway in their conquest of the eastern provinces, Phocas chose to divide his subjects, rather than unite them against the threat of the Persians. Perhaps seeing his defeats as divine retribution, Phocas initiated a savage and bloody campaign to forcibly convert the Jews to Christianity. Persecutions and alienation of the Jews, a frontline people in the war against the Persians helped drive them into aiding the Persian conquerors. 
As Jews and Christians began tearing each other apart, some fled the butchery into Persian territory. Meanwhile, it appears that the disasters befalling the empire led the emperor into a state of paranoia. The Heraclian Dynasty Under Heraclius Due to the overwhelming crises that had pitched the empire into chaos, Heraclius the Younger now attempted to seize power from Phocas in an effort to better Byzantium’s fortunes. As the empire was led into anarchy, the Exarchate of Carthage remained relatively out of reach of Persian conquest. Far from the incompetent Imperial authority of the time, Heraclius, the Exarch of Carthage, with his brother Gregorius, began building up his forces to assault Constantinople. In 608, after cutting off the grain supply to the capital from his territory, Heraclius led a substantial army and a fleet to restore order in the Empire. The reign of Phocas officially ended in his execution, and the crowning of Heraclius by the Patriarch of Constantinople two days later on October 5, 610. After marrying his wife in an elaborate ceremony and being crowned by the Patriarch, the 36-year-old Heraclius set out to perform his work as emperor. The early portion of his reign yielded results reminiscent of Phocas’ reign, with respect to trouble in the Balkans. To recover from a seemingly endless string of defeats, Heraclius drew up a reconstruction plan of the military, financing it by fining those accused of corruption, increasing taxes, and debasing the currency to pay more soldiers and forced loans. Instead of facing the waves of invading Persians, he went around them, sailing over the Black Sea and regrouping in Armenia, where he found many Christian allies. From there, he invaded the Persian Empire. By fighting behind enemy lines, he caused the Persians to retreat from Byzantine lands. He defeated every Persian army sent against him and then threatened the Persian capital. In a panic, the Persians killed their king and replaced him with a new ruler who was willing to negotiate with the Byzantines. In 628 CE, the war ended with Heraclius’ defeat of the Persians. The Arab Invasion By this time, it was generally expected by the Byzantine populace that the emperor would lead Byzantium into a new age of glory. However, all of Heraclius’ achievements would come to naught, when, in 633, the Byzantine-Arab Wars began. On June 8, 632, the Islamic Prophet Muhammad died of a fever. However, the religion he left behind would transform the Middle East. In 633, the armies of Islam marched out of Arabia with a goal to spread the word of the prophet, with force if needed. In 634, the Arabs defeated a Byzantine force sent into Syria and captured Damascus. The arrival of another large Byzantine army outside Antioch (some 80,000 troops) forced the Arabs to retreat. The Byzantines advanced in May 636. However, a sandstorm blew in against the Byzantines on August 20, 636, and when the Arabs charged against them, they were utterly annihilated. Jerusalem surrendered to the Arabs in 637, following a stout resistance; in 638, the Caliph Omar rode into the city. Heraclius stopped by Jerusalem to recover the True Cross whilst it was under siege. The Arab invasions are seen by some historians as the start of the decline of the Byzantine Empire. Only parts of Syria and Cilicia would be recovered. 
The recovery of the eastern areas of the Roman Empire from the Persians during the early phase of Heraclius’ rule raised the problem of religious unity, centering on the understanding of the true nature of Christ. Most of the inhabitants of these provinces were Monophysites who rejected the Council of Chalcedon of 451. The Chalcedonian Definition of Christ as being of two natures, divine and temporal, maintains that these two states remain distinct within the person of Christ and yet come together within his one true substance. This position was opposed by the Monophysites, who held that Christ possessed one nature only; the human and divine natures of Christ were fused into one new single (mono) nature. This internal division was dangerous for the Byzantine Empire, which was under constant threat from external enemies, many of whom favored Monophysitism; these peoples on the periphery of the Empire also considered the religious hierarchy at Constantinople to be heretical and interested only in crushing their faith. Heraclius tried to unite all of the various factions within the empire with a new formula that was more inclusive and more elastic. With the successful conclusion of the Persian War, Heraclius could devote more time to promoting his compromise. The patriarch Sergius came up with a formula, which Heraclius released as the Ecthesis in 638. It forbade all mention of Christ possessing one or two energies, that is, one or two wills; instead, it proclaimed that Christ, while possessing two natures, had but a single will. This approach seemed to be an acceptable compromise, and it secured widespread support throughout the east. The two remaining patriarchs in the east also gave their approval to the doctrine, now referred to as Monothelitism, and so it looked as if Heraclius would finally heal the divisions in the imperial church. Unfortunately, he had not counted on the popes at Rome. During that same year of 638, Pope Honorius I died. His successor, Pope Severinus (640), condemned the Ecthesis outright and was consequently kept from his seat until 640. His successor, Pope John IV (640-42), also rejected the doctrine completely, leading to a major schism between the eastern and western halves of the Chalcedonian Church. When news of the pope’s condemnation reached Heraclius, he was already old and ill, and it only hastened his death; with his dying breath he declared that the controversy was all due to Sergius, and that the patriarch had pressured him into giving his unwilling approval to the Ecthesis.
The Theme System
The Byzantine-Arab wars wrought havoc on the Byzantine Empire, but they led to the creation of the highly efficient military theme system.
Diagram the Byzantine military and social structure under Heraclius
- In the Byzantine-Arab wars of the Heraclian Dynasty, the Arabs nearly destroyed the Byzantine Empire altogether.
- In order to fight back, the Byzantines created a new military system, known as the theme system, in which land was granted to farmers who, in return, would provide the empire with loyal soldiers. The efficiency of this system allowed the dynasty to keep hold of Asia Minor.
- The Arabs were finally repulsed through the use of Greek fire, but Constantinople had decreased massively in size, due to relocation.
- The empire was now poorer, and society was dominated by the military as a result of the many Arab invasions.
- Caliphate: Islamic state led by a supreme religious and political leader, known as a caliph (i.e., “successor”) to Muhammad and the other prophets of Islam. - cosmopolitan: A city/place or person that embraces multicultural demographics. - Greek fire: A military weapon invented during the Byzantine Heraclian Dynasty; flaming projectiles that could burn while floating on water, and thus could be used for naval warfare. - theme system: A new military system created during the Heraclian Dynasty of the Byzantine Empire, in which land was granted to farmers who, in return, would provide the empire with loyal soldiers. Similar to the feudal system of medieval western Europe. The themes (themata in Greek) were the main administrative divisions of the middle Byzantine Empire. They were established in the mid-7th century in the aftermath of the Slavic invasion of the Balkans, and Muslim conquests of parts of Byzantine territory. The themes replaced the earlier provincial system established by Diocletian and Constantine the Great. In their origin, the first themes were created from the areas of encampment of the field armies of the East Roman army, and their names corresponded to the military units that had existed in those areas. The theme system reached its apogee in the 9th and 10th centuries, as older themes were split up and the conquest of territory resulted in the creation of new ones. The original theme system underwent significant changes in the 11th and 12th centuries, but the term remained in use as a provincial and financial circumscription, until the very end of the empire. During the late 6th and early 7th centuries, the Eastern Roman Empire was under frequent attack from all sides. The successors of Heraclius had to fight a desperate war against the Arabs in order to keep them from conquering the entire Byzantine Empire; these conflicts were known as the Byzantine-Arab wars. The Arab invasions were unlike any other threat the Byzantines ever faced. Fighting a zealous holy war for Islam, the Arabs defeated army after army of the Byzantines, and nearly destroyed the empire. Egypt fell to the Arabs in 642 CE, and Carthage as well in 647 CE, and the Eastern Mediterranean slightly later. From 674-678 CE the Arabs laid siege to Constantinople itself. In order to survive and fight back, the Byzantines created a new military system, known as the theme system. Abandoning the professional army inherited from the Roman past, the Byzantines granted land to farmers who, in return, would provide the empire with loyal soldiers. This was similar to the feudal system in medieval western Europe, but it differed in one important way—in the Byzantine theme system, the state continued to own the land, and simply leased it in exchange for service, whereas in the feudal system ownership of the lands was given over entirely to vassals. This efficiency of the theme system allowed the dynasty to keep hold of the imperial heartland of Asia Minor. Thus, by the turning of the 8th century, the themes had become the dominant feature of imperial administration. Their large size and power, however, made their generals prone to revolt, as had been evidenced in the turbulent period 695-715, and would again during the great revolt of Artabasdos in 741-742. Despite the prominence of the themes, it was some time before they became the basic unit of the imperial administrative system. 
Although they had become associated with specific regions by the early 8th century, it took until the end of the 8th century for the civil fiscal administration to begin being organized around them, instead of following the old provincial system. This process, resulting in unified control over both military and civil affairs of each theme by its strategos, was complete by the mid-9th century, and is the “classical” thematic model.
Structure of the Themes
The term theme was ambiguous, referring both to a form of military tenure and to an administrative division. A theme was an arrangement of plots of land given to soldiers for farming. The soldiers were still technically a military unit, under the command of a strategos, and they did not own the land they worked, as it was still controlled by the state; in return for its use, their pay was reduced. By accepting this proposition, the participants agreed that their descendants would also serve in the military and work in a theme, thus simultaneously reducing the need for unpopular conscription and cheaply maintaining the military. It also allowed for the settling of conquered lands, as there was always a substantial addition made to public lands during a conquest. The commander of a theme, however, did not only command his soldiers. He united the civil and military jurisdictions of the territorial area in question. Thus the division set up by Diocletian between civil governors (praesides) and military commanders (duces) was abolished, and the empire returned to a system much more similar to that of the Republic or the Principate, where provincial governors had also commanded the armies in their area.
Consequences of the Theme System
Early on, Heraclius had proven himself to be an excellent emperor: his reorganization of the empire into themes allowed the Byzantines to extract as much as they possibly could to increase their military potential. This became essential after 650, when the Islamic Caliphate was far more resourceful and powerful than the Byzantines were. As a result, a high level of efficiency was needed to combat the Arabs, achieved in part through the theme system. The Arabs were finally repulsed through the use of Greek fire, flaming projectiles that could burn while floating on water and thus could be used for naval warfare. Greek fire was a closely guarded state secret, a secret that has since been lost. The composition of Greek fire remains a matter of speculation and debate, with proposals including combinations of pine resin, naphtha, quicklime, sulfur, or niter. Byzantine use of incendiary mixtures was especially effective thanks to the use of pressurized nozzles, or siphōn, to project the liquid onto the enemy. The Arab-Muslim navies eventually adapted to their use. Under constant threat of attack, Constantinople had dropped substantially in size, due to relocation, from about 500,000 inhabitants to 40,000-70,000. By the end of the Heraclian Dynasty in 711 CE, the empire had transformed from the Eastern Roman Empire, with its urbanized, cosmopolitan civilization, into the medieval Byzantine Empire, an agrarian, military-dominated society in a lengthy struggle with the Muslims. The loss of the empire’s richest provinces, coupled with successive invasions, had reduced the imperial economy to a relatively impoverished state compared to the resources available to the Caliphate. The monetary economy persisted, but the barter economy experienced a revival as well.
However, this state was also far more homogeneous than the Eastern Roman Empire; the borders had shrunk, such that many of the Latin-speaking territories were lost and the dynasty was reduced to its mostly Greek-speaking territories. This enabled it to weather these storms and enter a period of stability under the next dynasty, the Isaurian Dynasty.
The Isaurian Dynasty
The Isaurian Dynasty is characterized by relative political stability, after an important defeat of the Arabs by Leo III, and by Iconoclasm, which resulted in considerable internal turmoil.
Describe governmental and religious changes that occurred during the Isaurian Dynasty
- The Isaurian Dynasty, founded by Leo III, was a time of relative stability, compared to the constant warfare against the Arabs that characterized the preceding Heraclian Dynasty.
- However, the Bulgars, a nomadic tribe, rose up in Europe and took some Byzantine lands.
- The Isaurian Dynasty is chiefly associated with Byzantine Iconoclasm, an attempt to restore divine favor by purifying the Christian faith from excessive adoration of icons, which resulted in considerable internal turmoil.
- The Second Arab siege of Constantinople in 717-718 was an unsuccessful offensive by the Muslim Arabs of the Umayyad Caliphate against the capital city of the Byzantine Empire, Constantinople.
- The outcome of the siege was of considerable macrohistorical importance; the Byzantine capital’s survival preserved the empire as a bulwark against Islamic expansion into Europe until the 15th century, when it fell to the Ottoman Turks.
- By the end of the Isaurian Dynasty in 802 CE, the Byzantines were continuing to fight the Arabs and the Bulgars, and the empire had been reduced from a Mediterranean-wide empire to only Thrace and Asia Minor.
- Bulgars: A nomadic tribe related to the Huns; they presented a threat to the Byzantine Empire.
- iconoclasm: The deliberate destruction within a culture of the culture’s own religious icons and other symbols or monuments, usually for religious or political motives. It is a frequent component of major political or religious changes.
The Byzantine Empire was ruled by the Isaurian or Syrian Dynasty from 717-802. The Isaurian emperors were successful in defending and consolidating the empire against the Caliphate after the onslaught of the early Muslim conquests, but were less successful in Europe, where they suffered setbacks against the Bulgars, had to give up the Exarchate of Ravenna, and lost influence over Italy and the Papacy to the growing power of the Franks. The Isaurian Dynasty is chiefly associated with Byzantine Iconoclasm, an attempt to restore divine favor by purifying the Christian faith from excessive adoration of icons, which resulted in considerable internal turmoil. By the end of the Isaurian Dynasty in 802, the Byzantines were continuing to fight the Arabs and the Bulgars for their very existence, with matters made more complicated when Pope Leo III crowned Charlemagne Imperator Romanorum (“Emperor of the Romans”), which was seen as making the Carolingian Empire the successor to the Roman Empire, or at least the western half. Leo III, who would become the founder of the so-called Isaurian Dynasty, was actually born in Germanikeia in northern Syria c. 685; his alleged origin from Isauria derives from a reference in Theophanes the Confessor, which may be a later addition. After being raised to spatharios by Justinian II, he fought the Arabs in Abasgia, and was appointed as strategos of the Anatolics by Anastasios II.
Following the latter’s fall in 716, Leo allied himself with Artabasdos, the general of the Armeniacs, and was proclaimed emperor while two Arab armies campaigned in Asia Minor. Leo averted an attack by Maslamah through clever negotiations, in which he promised to recognize the Caliph ‘s suzerainty. However, on March 25, 717, he entered Constantinople and deposed Theodosios. Leo III’s Rule Having preserved the empire from extinction by the Arabs, Leo proceeded to consolidate its administration, which in the previous years of anarchy had become completely disorganized. In 718, he suppressed a rebellion in Sicily and in 719 did the same on behalf of the deposed Emperor Anastasios II. Leo secured the empire’s frontiers by inviting Slavic settlers into the depopulated districts, and by restoring the army to efficiency; when the Umayyad Caliphate renewed their invasions in 726 and 739, as part of the campaigns of Hisham ibn Abd al-Malik, the Arab forces were decisively beaten, particularly at Akroinon in 740. His military efforts were supplemented by his alliances with the Khazars and the Georgians. Leo undertook a set of civil reforms, including the abolition of the system of prepaying taxes, which had weighed heavily upon the wealthier proprietors; the elevation of the serfs into a class of free tenants; and the remodeling of family, maritime law, and criminal law, notably substituting mutilation for the death penalty in many cases. The new measures, which were embodied in a new code called the Ecloga (Selection), published in 726, met with some opposition on the part of the nobles and higher clergy. The emperor also undertook some reorganization of the theme structure by creating new themata in the Aegean region. The Siege of Constantinople The Second Arab siege of Constantinople in 717-718 was a combined land and sea offensive by the Muslim Arabs of the Umayyad Caliphate against the capital city of the Byzantine Empire, Constantinople. The campaign marked the culmination of twenty years of attacks and progressive Arab occupation of the Byzantine borderlands, while Byzantine strength was sapped by prolonged internal turmoil. In 716, after years of preparations, the Arabs, led by Maslama ibn Abd al-Malik, invaded Byzantine Asia Minor. The Arabs initially hoped to exploit Byzantine civil strife, and made common cause with the general Leo III the Isaurian, who had risen up against Emperor Theodosius III. Leo, however, tricked them and secured the Byzantine throne for himself. After wintering in the western coastlands of Asia Minor, the Arab army crossed into Thrace in early summer 717 and built siege lines to blockade the city, which was protected by the massive Theodosian Walls. The Arab fleet, which accompanied the land army and was meant to complete the city’s blockade by sea, was neutralized soon after its arrival by the Byzantine navy through the use of Greek fire. This allowed Constantinople to be resupplied by sea, while the Arab army was crippled by famine and disease during the unusually hard winter that followed. In spring 718, two Arab fleets sent as reinforcements were destroyed by the Byzantines after their Christian crews defected, and an additional army sent overland through Asia Minor was ambushed and defeated. Coupled with attacks by the Bulgars on their rear, the Arabs were forced to lift the siege on August 15, 718. On its return journey, the Arab fleet was almost completely destroyed by natural disasters and Byzantine attacks. 
The Arab failure was chiefly logistical, as they were operating too far from their Syrian bases, but the superiority of the Byzantine navy through the use of Greek fire, the strength of Constantinople’s fortifications, and the skill of Leo III in deception and negotiations, also played important roles. The siege’s failure had wide-ranging repercussions. The rescue of Constantinople ensured the continued survival of Byzantium, while the Caliphate’s strategic outlook was altered: although regular attacks on Byzantine territories continued, the goal of outright conquest was abandoned. Historians consider the siege to be one of history’s most important battles, as its failure postponed the Muslim advance into Southeastern Europe for centuries. The Byzantine capital’s survival preserved the empire as a bulwark against Islamic expansion into Europe until the 15th century, when it fell to the Ottoman Turks. Along with the Battle of Tours in 732, the successful defense of Constantinople has been seen as instrumental in stopping Muslim expansion into Europe. Iconoclasm in Byzantium The Byzantine Iconoclasm was the banning of the worship of religious images, a movement that sparked internal turmoil. Understand the reasoning and events that led to iconoclasm - Isaurian Emperor Leo III interpreted his many military failures as a judgment on the empire by God, and decided that it was being judged for the worship of religious images. He banned religious images in about 730 CE, the beginning of the Byzantine Iconoclasm. - At the Council of Hieria in 754 CE, the Church endorsed an iconoclast position and declared image worship to be blasphemy. - At the Second Council of Nicaea in 787 CE, the decrees of the previous iconoclast council were reversed and image worship was restored, marking the end of the First Iconoclasm. - Emperor Leo V instituted a second period of iconoclasm in 814 CE, again possibly motivated by military failures seen as indicators of divine displeasure, but only a few decades later, in 842 CE, icon worship was again reinstated. - iconoclasm: The deliberate destruction within a culture of the culture’s own religious icons and other symbols or monuments. - Council of Hieria: The first church council concerned with religious imagery. On behalf of the church, the council endorsed an iconoclast position and declared image worship to be blasphemy. - Second Council of Nicaea: This council reversed the decrees of the Council of Hieria and restored image worship, marking the end of the First Byzantine Iconoclasm. Iconoclasm, Greek for “image-breaking,” is the deliberate destruction within a culture of the culture’s own religious icons and other symbols or monuments. Iconoclasm is generally motivated by an interpretation of the Ten Commandments that declares the making and worshipping of images, or icons, of holy figures (such as Jesus Christ, the Virgin Mary, and saints) to be idolatry and therefore blasphemy. Most surviving sources concerning the Byzantine Iconoclasm were written by the victors, or the iconodules (people who worship religious images), so it is difficult to obtain an accurate account of events. However, the Byzantine Iconoclasm refers to two periods in the history of the Byzantine Empire when the use of religious images or icons was opposed by religious and imperial authorities. The “First Iconoclasm,” as it is sometimes called, lasted between about 730 CE and 787 CE, during the Isaurian Dynasty. The “Second Iconoclasm” was between 814 CE and 842 CE. 
The movement was triggered by changes in Orthodox worship that were themselves generated by the major social and political upheavals of the seventh century for the Byzantine Empire. Traditional explanations for Byzantine Iconoclasm have sometimes focused on the importance of Islamic prohibitions against images influencing Byzantine thought. According to Arnold J. Toynbee, for example, it was the prestige of Islamic military successes in the 7th and 8th centuries that motivated Byzantine Christians to adopt the Islamic position of rejecting and destroying idolatrous images. The role of women and monks in supporting the veneration of images has also been asserted. Social and class-based arguments have been put forward, such as the assertion that iconoclasm created political and economic divisions in Byzantine society, and that it was generally supported by the eastern, poorer, non-Greek peoples of the empire who had to constantly deal with Arab raids. On the other hand, the wealthier Greeks of Constantinople, and also the peoples of the Balkan and Italian provinces, strongly opposed iconoclasm. In recent decades in Greece, iconoclasm has become a favorite topic of progressive and Marxist historians and social scientists, who consider it a form of medieval class struggle and have drawn inspiration from it. Re-evaluation of the written and material evidence relating to the period of Byzantine Iconoclasm by scholars, including John Haldon and Leslie Brubaker, has challenged many of the basic assumptions and factual assertions of the traditional account.
The First Iconoclasm: Leo III
The seventh century had been a period of major crisis for the Byzantine Empire, and believers had begun to lean more heavily on divine support. The use of images of the holy increased in Orthodox worship, and these images increasingly came to be regarded as points of access to the divine. Leo III interpreted his many military failures as a judgment on the empire by God, and decided that the empire was being judged for its worship of religious images. Emperor Leo III, the founder of the Isaurian Dynasty, and the iconoclasts of the eastern church banned religious images in about 730 CE, claiming that worshiping them was heresy; this ban continued under his successors. He accompanied the ban with widespread destruction of religious images and persecution of the people who worshipped them. The western church remained firmly in support of the use of images throughout the period, and the whole episode widened the growing divergence between the eastern and western traditions in what was still a unified church, as well as facilitating the reduction or removal of Byzantine political control over parts of Italy. Leo died in 741 CE, and his son and heir, Constantine V, furthered his views until the end of his own rule in 775 CE. In 754 CE, Constantine summoned the first ecumenical council concerned with religious imagery, the Council of Hieria; 340 bishops attended. On behalf of the church, the council endorsed an iconoclast position and declared image worship to be blasphemy. John of Damascus, a Syrian monk living outside Byzantine territory, became a major opponent of iconoclasm through his theological writings.
The Brief Return of Icon Worship
After the death of Constantine’s son, Leo IV (who ruled from 775 CE to 780 CE), his wife, Irene, took power as regent for her son, Constantine VI (who ruled from 780 CE to 797 CE).
Once in power, Irene called another ecumenical council, the Second Council of Nicaea, in 787 CE, which reversed the decrees of the previous iconoclast council and restored image worship, marking the end of the First Iconoclasm. This may have been an attempt to soothe the strained relations between Constantinople and Rome.
The Second Iconoclasm (814 CE-842 CE)
Emperor Leo V the Armenian instituted a second period of Iconoclasm in 814 CE, again possibly motivated by military failures seen as indicators of divine displeasure; the Byzantines had suffered a series of humiliating defeats at the hands of the Bulgarian Khan Krum. It was made official in 815 CE at a meeting of the clergy in the Hagia Sophia. But only a few decades later, in 842 CE, the regent Theodora again reinstated icon worship.
The Emperor Irene
Irene of Athens, the first woman emperor of the Byzantine Empire, fought for recognition as imperial leader throughout her rule, and is best known for ending the First Iconoclasm in the Eastern Church.
Analyze the significance of Emperor Irene
- Irene of Athens was an orphan from a noble family, and was married to the son of the current emperor, Leo IV, in 768.
- When Leo died in 780, Irene became regent for their nine-year-old son, Constantine, who was too young to rule as emperor, thereby giving her administrative control over the empire.
- As imperial regent, Irene subdued rebellions and fought the Arabs with mixed success. She also ended the First Iconoclasm in the Eastern Church.
- When Constantine became old enough to become emperor proper, he eventually rebelled against Irene, although he let her keep the title of empress.
- Soon after, Irene organized her own rebellion and eventually killed her son, thereby claiming sole rulership over the empire as empress, the first woman to have that title in the empire.
- Although it is often asserted that, as monarch, Irene called herself “emperor” rather than “empress,” in fact she used “empress” in most of her documents, coins, and seals.
- The pope would not recognize a woman as ruler, and in 800, crowned Charlemagne as imperial ruler over the entire Roman territory, including Byzantium.
- Charlemagne did not attempt to rule Byzantium, but relations between the two empires remained difficult.
- Irene was eventually deposed by her finance minister.
- regent: A person appointed to administer a state because the monarch is a minor, is absent, or is incapacitated.
- strategos: A military governor in the Byzantine Empire.
- Iconoclasm: The destruction of religious icons, and other images or monuments, for religious or political motives.
Irene of Athens (c. 752-803 CE) was Byzantine empress from 797 to 802. Before that, Irene was empress consort from 775 to 780, and empress dowager and regent from 780 to 797. She is best known for ending iconoclasm. Irene was related to the noble Greek Sarantapechos family of Athens. Although she was an orphan, her uncle or cousin, Constantine Sarantapechos, was a patrician and was possibly the strategos of the theme of Hellas at the end of the 8th century. She was brought to Constantinople by Emperor Constantine V on November 1, 768, and was married to his son, Leo IV, on December 17. On January 14, 771, Irene gave birth to a son, the future Constantine VI. When Constantine V died in September 775, Leo succeeded to the throne at the age of twenty-five.
Leo, though an iconoclast, pursued a policy of moderation towards iconodules, but his policies became much harsher in August 780, when a number of courtiers were punished for venerating icons. According to tradition, he discovered icons concealed among Irene’s possessions and refused to share the marriage bed with her thereafter. Nevertheless, when Leo died on September 8, 780, Irene became regent for their nine-year-old son, Constantine, thereby giving her administrative control over the empire. Irene was almost immediately confronted with a conspiracy that tried to raise Caesar Nikephoros, a half-brother of Leo IV, to the throne. To overcome this challenge, she had Nikephoros and his co-conspirators ordained as priests, a status which disqualified them from ruling. As early as 781, Irene began to seek a closer relationship with the Carolingian Dynasty and the Papacy in Rome. She negotiated a marriage between her son, Constantine, and Rotrude, a daughter of Charlemagne by his third wife, Hildegard. During this time, Charlemagne was at war with the Saxons, and would later become the new king of the Franks. Irene went as far as to send an official to instruct the Frankish princess in Greek; however, Irene herself broke off the engagement in 787, against her son’s wishes. Irene next had to subdue a rebellion led by Elpidius, the strategos of Sicily. Irene sent a fleet, which succeeded in defeating the Sicilians. Elpidius fled to Africa, where he defected to the Abbasid Caliphate. After the success of Constantine V’s general, Michael Lachanodrakon, who foiled an Abbasid attack on the eastern frontiers, a huge Abbasid army under Harun al-Rashid invaded Anatolia in summer 782. The strategos of the Bucellarian Theme, Tatzates, defected to the Abbasids, and Irene, in exchange for a three-year truce, had to agree to pay an annual tribute of 70,000 or 90,000 dinars to the Abbasids, give them 10,000 silk garments, and provide them with guides, provisions, and access to markets during their withdrawal. Irene’s most notable act was the restoration of the veneration of icons, thereby ending the First Iconoclasm of the Eastern Church. Having chosen Tarasios, one of her partisans and her former secretary, as Patriarch of Constantinople in 784, she summoned two church councils. The first of these, held in 786 at Constantinople, was frustrated by the opposition of the iconoclast soldiers. The second, convened at Nicaea in 787, formally revived the veneration of icons and reunited the Eastern Church with that of Rome. While this greatly improved relations with the Papacy, it did not prevent the outbreak of a war with the Franks, who took over Istria and Benevento in 788. In spite of these reverses, Irene’s military efforts met with some success: in 782 her favored courtier, Staurakios, subdued the Slavs of the Balkans and laid the foundations of Byzantine expansion and re-Hellenization in the area. Nevertheless, Irene was constantly harried by the Abbasids, and in 782 and 798, had to accept the terms of the respective Caliphs Al-Mahdi and Harun al-Rashid. Rule as Empress As Constantine approached maturity, he began to grow restless under her autocratic sway. An attempt to free himself by force was met and crushed by the empress, who demanded that the oath of fidelity should thenceforward be taken in her name alone. The discontent that this occasioned swelled in 790 into open resistance, and the soldiers, headed by the army of the Armeniacs, formally proclaimed Constantine VI as the sole ruler. 
A hollow semblance of friendship was maintained between Constantine and Irene, whose title of empress was confirmed in 792; however, the rival factions remained, and in 797, Irene, by cunning intrigues with the bishops and courtiers, organized a conspiracy on her own behalf. Constantine could only flee for aid to the provinces, but even there participants in the plot surrounded him. Seized by his attendants on the Asiatic shore of the Bosphorus, Constantine was carried back to the palace at Constantinople. His eyes were gouged out, and according to most contemporary accounts, he died from his wounds a few days later, leaving Irene to be crowned as first empress regnant of Constantinople. As empress, Irene made determined efforts to stamp out iconoclasm everywhere in the empire, including within the ranks of the army. During Irene’s reign, the Arabs were continuing to raid into and despoil the small farms of the Anatolian section of the empire. These small farmers of Anatolia owed a military obligation to the Byzantine throne. Indeed, the Byzantine army and the defense of the empire was largely based on this obligation and the Anatolian farmers. The iconodule (icon worship) policy drove these farmers out of the army, and thus off their farms. Thus, the army was weakened and was unable to protect Anatolia from the Arab raids. Many of the remaining farmers of Anatolia were driven from the farm to settle in the city of Byzantium, further reducing the army’s ability to raise soldiers. Additionally, the abandoned farms fell from the tax rolls and reduced the amount of income that the government received. These farms were taken over by the largest land owner in the Byzantine Empire, the monasteries. To make the situation even worse, Irene had exempted all monasteries from all taxation. Given the financial ruin into which the empire was headed, it was no wonder, then, that Irene was, eventually, deposed by her own minister of finance. The leader of this successful revolt against Irene replaced her on the Byzantine throne under the name Nicephorus I. Although it is often asserted that, as monarch, Irene called herself “basileus” (emperor), rather than “basilissa” (empress), in fact there are only three instances where it is known that she used the title “basileus“: two legal documents in which she signed herself as “Emperor of the Romans,” and a gold coin of hers found in Sicily bearing the title of “basileus.” She used the title “basilissa” in all other documents, coins, and seals. Relationship with the Carolingian Empire Irene’s unprecedented position as an empress ruling in her own right was emphasized by the coincidental rise of the Carolingian Empire in western Europe, which rivaled Irene’s Byzantium in size and power. In 800, Charlemagne was crowned emperor by Pope Leo III, on Christmas Day. The clergy and nobles attending the ceremony proclaimed Charlemagne as “Emperor of the Roman Empire.” In support of Charlemagne’s coronation, some argued that the imperial position was actually vacant, deeming a woman unfit to be emperor. However, Charlemagne made no claim to the Byzantine Empire. Relations between the two empires remained difficult.
Video #1 Brain Breaks – Uppers and Downers
Brain breaks are short and simple physical and/or mental exercises designed to manage the physiology and attention of a group and to keep learners in the most receptive state possible for further learning. In this video we explore the benefits of brain breaks to foster calm and self-management and to boost active learning engagement. Brain breaks can either stimulate the nervous system and function as “uppers” or activate the relaxation response and serve as calming “downers”. Any brain break that uses repetitive patterned movement, focus and mindfulness, stretching, or deep breathing will be calming. While brain breaks require virtually no preparation or extra materials to perform, it is important to establish norms for a clear and smooth transition back into engaged learning.
Peer Collaboration Questions
Additional reading and resources
Loud noise is one of the most common causes of hearing loss. An estimated 26 million Americans between the ages of 20 and 69 already have irreversible hearing loss caused by loud sounds. And up to 16% of teens have hearing loss that may have been caused by loud noise. For adolescents, music players with headphones or earbuds are a common source of noise exposure.
How Noise Damages Hearing
“Noise damage can begin at any age, and it tends to accumulate over time. That’s why avoiding excess noise is so critical,” says Dr. Gordon Hughes, a clinical trials director and ear, nose, and throat specialist at NIH. “Hearing loss caused by noise is completely preventable.” Noise-related hearing loss can arise from extremely loud bursts of sound, such as gunshots or explosions, which can rupture the eardrum or damage the bones in the middle ear. This kind of hearing loss can be immediate and permanent. But most noise-related hearing problems develop slowly over time, with ongoing exposure to loud sounds. Loud noises can injure the delicate sensory cells, known as hair cells, in the inner ear. Hair cells help to convert sound vibrations into electrical signals that travel along nerves from the ear to the brain. These cells allow us to detect sounds. But when hair cells are damaged and then destroyed by too much noise, they don’t grow back, so hearing is permanently harmed.
How Loud Is Too Loud?
Sound is measured in units called decibels (dB). Sounds less than 75 dB are unlikely to harm hearing. Normal conversation, for instance, measures about 60 dB. A typical hair blow dryer has an intensity of about 85 dB, but if it is used for just brief periods, it is unlikely to damage hearing. However, long or repeated exposure to sounds at or above 85 dB can cause problems. The louder the sound, the quicker the damage. “At maximum volume, an audio player with earbuds might produce 105 dB. There’s potential for noise damage to occur at barely 30 minutes of exposure.”
Source: NIH News in Health
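A point the article leaves implicit is that the decibel scale is logarithmic: every 10 dB step corresponds to roughly a tenfold increase in sound intensity. As a rough illustration (the arithmetic is ours, not the article's), the 105 dB audio player mentioned above carries about

    10^((105 - 60)/10) = 10^4.5 ≈ 31,600

times the acoustic intensity of a 60 dB conversation, which helps explain why damage can begin after minutes rather than years of exposure.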
Developing Global Competencies in Mathematics Using Loose Parts
Using loose parts to teach mathematics to kindergarten students through inquiry to promote creativity, critical thinking, and global competencies.
Keyword(s): critical thinking, global competencies, inquiry, loose parts
Spatial Sense and Visualization in Mathematics in the Junior Grades
Capturing the Process of Healthy Active Living Education Learners Making Their Thinking Visible
To foster a culture of thinking in Healthy Active Living Education classes by implementing Visible Thinking approaches into our pedagogical practices, using technology to collect evidence of understanding through observation, conversation, and product.
Authentic Communication in French Immersion Kindergarten Through Math Games
A focus on mathematical understanding should not exclude French Immersion students simply due to a lack of expressive vocabulary. Students can learn, and do thrive, in authentic communication situations, such as playing math games in the early years.
Creating a STEAM Lab
Using inspiration from the Peel District School Board document, Empowering Modern Learners, to create a portable STEAM Lab at Meadowvale Village Public School.
Climate change and global warming are two of the most pressing issues of our time, and we all must do our part to reduce emissions. In this article, we take a look at the biggest industrial emitters of greenhouse gases and what steps can be taken to reduce their emissions. Find out how you can make a difference by reading on!
Introduction to Greenhouse Gas Emissions
Greenhouse gases are gases that trap heat in the atmosphere. They are emitted from a variety of sources, including power plants, automobiles, agriculture, and landfills, and they can come from both natural and human-made sources. The most common greenhouse gases are carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), and fluorinated gases. CO2 is released when fossil fuels such as coal and oil are burned. Methane is emitted by livestock and landfills, while nitrous oxide comes from agricultural activities. Fluorinated gases are used in a variety of industrial processes and can be found in refrigerants and aerosol cans. Humans have significantly increased the levels of greenhouse gases in the atmosphere since the Industrial Revolution. The burning of fossil fuels has been the main driver of this increase. As a result, global temperatures have risen by about 1 degree Celsius since pre-industrial times. This may not sound like much, but it has already had a profound impact on our climate, resulting in more extreme weather events and rising sea levels. There are several ways to reduce greenhouse gas emissions. Switching to renewable energy sources, such as solar and wind power, is one way to do this. Reducing our reliance on fossil fuels will help to reduce emissions over time. Improving energy efficiency in our homes and businesses is another way to reduce emissions.
Greenhouse Gas Emitters by Industry
Greenhouse gas emissions in industry come from a variety of sources.
The Top 22 Greenhouse Gas Emitters by Country
How to Reduce Greenhouse Gas Emissions in Industry
To reduce greenhouse gas emissions in industry, several steps can be taken. First, energy efficiency can be improved. This can be done by using better insulation, using more efficient equipment and lighting, and improving the overall design of facilities. Second, the use of renewable energy sources can be increased. This can be done by installing solar panels or wind turbines, or by using biomass to generate electricity or heat. Third, waste management practices can be improved. This can include recycling and composting, as well as reducing the amount of waste that is produced in the first place. Finally, companies can offset their emissions by investing in projects that reduce greenhouse gases elsewhere, such as planting trees or investing in clean energy technologies.
What Does the Future Hold for Industrial Emissions?
The future of industrial emissions is fraught with uncertainty. The Paris Agreement, which was ratified in 2016, set the goal of limiting global temperature rise to 2°C above pre-industrial levels. However, major economies have been slow to take action on reducing emissions, and current pledges are not enough to meet this target. As a result, there is a growing risk that the world will warm by more than 2°C by the end of the century. This would have catastrophic consequences for humanity and the natural world. There is still time to avoid this fate, but it will require a radical transformation of the global economy.
Major emitting industries will need to switch to low-carbon technologies and processes, and governments must put in place policies that incentivize this transition. The future of industrial emissions is therefore uncertain but critical. The decisions we make today will determine whether we can avert climate disaster or condemn future generations to an increasingly uninhabitable planet. The top 10 greenhouse gas emitters in industry account for a large portion of global emissions. Steps must be taken to reduce the amount of these gases released into our atmosphere, and we need to start by understanding where they come from and how their release can be prevented. By educating ourselves on the sources of industrial pollution and taking measures to reduce it, we can make a positive impact in reducing the effects of climate change.
Who gave the concept of the time element in price determination? Professor Marshall explained the important role of the time element in price determination. The supply of a commodity cannot always be adjusted to the quantity demanded because time is the most important constraint. The scale of production, the size of the firm, the supply of raw materials, and other factors of production can be changed only when there is sufficient time. Generally, the price of a commodity is determined by the economic forces of demand and supply. The equilibrium price will be at the point where the demand curve intersects the supply curve. Price changes due to changes in the quantity demanded and supplied during a given period of time.
Role of Time Element in Price Determination
Time can be classified into four categories as given below:
Very Short Period Market
A very short period market is a period in which the supply of a product or commodity is limited to the available stock. In other words, the supply of the commodity cannot be increased. The time is so short that the supply of the commodity is equal to the stock available. If the commodity is perishable, its supply will be perfectly inelastic because it cannot be stored and the total quantity cannot be changed; in the case of a durable commodity, a part of it can be stored for some time by the producer or seller. Demand plays a dominant role in price determination during this period because supply is a passive factor. The price is determined by the interaction of the demand curve and the supply curve, and the equilibrium price is called the market price. There is a direct relationship between demand and the price of a commodity during a very short period. Price and output are shown on the OY-axis and OX-axis, respectively. SS is the supply curve and DD is the demand curve; SS is a perfectly inelastic supply curve. The initial price is OP and the output is OS. When demand increases, the demand curve shifts upward and the new demand curve is D1D1. The price is determined at point E1, where the price is OP1 and the output remains constant. When demand decreases, the demand curve shifts downward and becomes D2D2, and the new equilibrium is at point E2, where the price is OP2 and output remains the same. Thus, in the case of perishable goods, there is a direct relationship between the change in quantity demanded and the change in price, as explained in the diagram.
Short Period Market
During a short period, the supply of a commodity can be adjusted to the quantity demanded to the extent that the installed capacity of a plant has not been fully utilized. A producer can increase supply through the maximum utilization of available capacity and resources. During this period the price is determined by the demand and supply of the commodity, but demand is more powerful than supply. The equilibrium price is called the short-run price. The following diagram shows price determination during a short period: the initial equilibrium is at E, where the price is OP and output is OQ. When demand increases, the new demand curve (D1D1) intersects SS at point E1, the price becomes OP1, and the quantity demanded becomes OQ1. When demand decreases, the demand curve shifts downward to D2D2 and the new equilibrium is at E2; the price is OP2 and the amount demanded is only OQ2. Thus, during a short period the supply of a commodity can be increased and it is elastic.
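To make the contrast between the two periods concrete, here is a small worked example with hypothetical numbers (illustrative only, not taken from the original article). Suppose the demand curve is Qd = 120 - 2P and the available stock in the very short period is fixed at 40 units. Equating demand to the fixed stock gives 120 - 2P = 40, so P = 40. If demand rises to Qd = 160 - 2P while the stock stays at 40 units, then 160 - 2P = 40 and P = 60: the entire adjustment falls on price, exactly as in the perfectly inelastic case above. In the short period, suppose instead that producers can expand output along the supply curve Qs = 2P - 40. The initial equilibrium is unchanged (120 - 2P = 2P - 40 gives P = 40 and Q = 40), but after the same increase in demand the new equilibrium is 160 - 2P = 2P - 40, so P = 50 and Q = 60. Because quantity can respond, the price rises only from 40 to 50 rather than to 60, which is the sense in which the short-period price increase is smaller than the very-short-period increase.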
Even so, demand affects the price, and there is a direct relationship between the demand for a commodity and its price during the short period. Thus, on the basis of short-run price determination, we can come to the following conclusions:
- The short-run price is affected by both demand and supply, but demand has more influence than supply.
- Price increases with an increase in demand during the short period, but the rise in price is smaller than in the very short period.
Long Period Market
During this period the producer of a commodity has sufficient time. All the factors of production are variable, and even the scale of production can be changed. The supply of a commodity can be adjusted to its demand during this period, and firms take the total cost of production into consideration. The equilibrium attained during this period yields what is called the long-run normal price. It can be shown in the following diagram: price and output are shown on the OY-axis and OX-axis, respectively, and the price is determined at point E, where the supply curve (SS) intersects the demand curve (DD). The price is OP and output is OQ; at this price the demand for and supply of the commodity are equal. During this period supply plays an important role in price determination because the producer is in a position to adjust supply to demand. Over the long period, an increase in the demand for a commodity will raise its price, but this increase will be smaller than in the very short period and the short period, as shown in the diagrams.
Very Long Period Market
This is the aggregate of various long periods, and the period is very lengthy. The determinants of the demand for and supply of a commodity undergo change over a very long period. The size of the population, the sources and supply of raw materials, techniques of production, the supply of capital, and the habits, fashions, and tastes of consumers all undergo rapid change over a very long period. It is very difficult to know which types of changes will take place during this period. Hence price determination is not studied and analyzed for this period; it has theoretical importance only.
FAQ Related to Role of Time Element in Price Determination
What is a very short period market? A very short period market is a period in which the supply of a product or commodity is limited to the available stock.
What are the 4 types of markets? Role of Time Element in Price Determination: Time can be classified into four categories as given below: Very Short Period Market, Short Period Market, Long Period Market, and Very Long Period Market.
Who gave the concept of the time element in price determination? Professor Marshall explained the important role of the time element in price determination. The supply of a commodity cannot always be adjusted to the quantity demanded because time is the most important constraint. The scale of production, the size of the firm, the supply of raw materials, and other factors of production can be changed only when there is sufficient time.
Often regarded as the cornerstone of American democracy, the right to due process is inextricably linked to the fundamental principles that uphold the rule of law and ensure equal protection before the law. By examining the historical and philosophical underpinnings of the right, one can appreciate the vital role it plays in safeguarding the rights and liberties of every individual. This article will delve into the various aspects of the right to due process, highlighting its historical relationship with the United States, the key distinctions between substantive and procedural due process, and the significance of the due process clause in protecting individuals from arbitrary government action. So, without further ado, let’s discuss this integral American principle and take a closer look at its significance and meaning.
The history and evolution of the right to due process in the United States
The history of due process stretches back centuries, with its roots lying in the English common law system. However, the evolution of due process in the United States is particularly fascinating, as it reflects the American principles of individual rights and liberties embedded in the country’s founding documents. As the United States legal system took shape from the late 18th century onward, the concept of due process was adopted from English common law principles and enshrined in both the Fifth and Fourteenth Amendments to the Constitution. These amendments, which expanded the constitutional rights of Americans by extending the right to due process to state and federal court proceedings, played a pivotal role in shaping the American justice system. Early interpretations of due process primarily focused on procedural aspects, such as ensuring fair trials and maintaining a strict separation of powers between the government branches. However, as notions of justice and American principles progressed over the centuries, the scope of due process protections expanded to cover substantive rights such as freedom of speech, freedom of religion, and the right to privacy. Ultimately, the history and evolution of due process in the United States reveal a tireless commitment to safeguarding individual liberties and promoting a just and equitable legal framework.
The role of the due process clause in protecting individuals from arbitrary government action
The due process clause in the United States Constitution, found in both the Fifth and Fourteenth Amendments, serves as a crucial instrument in protecting individuals from arbitrary government action by requiring the government to adhere to legal principles and established procedures when depriving someone of life, liberty, or property. As such, the clause is a vital constitutional safeguard against governmental abuse of power, upholding the rights of individuals and promoting transparency and accountability in the legal process. In essence, the due process clause ensures that individuals receive the requisite protections in both criminal and civil proceedings and are not subjected to arbitrary or unfair treatment by government actors. As a result, the due process clause is inextricably tied to the legal principles that uphold fundamental fairness in the United States legal system and serves as a bulwark against rights violations and arbitrary government action. Importantly, the due process clause addresses not only the procedural aspects of the legal process but also the substantive rights that underpin the American justice system.
This dual approach to rights protection ensures that the due process clause remains responsive and flexible, capable of adapting to evolving legal standards and preserving the delicate balance between safeguards for individual rights and the imperatives of the justice system. The difference between substantive and procedural due process Substantive due process and procedural due process are two legal distinctions that, together, provide a comprehensive framework for understanding the full scope of the right to due process in the United States legal system. While they share a common goal of protecting individual rights and promoting fairness, they serve distinct functions and operate within different contexts. Procedural due process refers to the rules and procedures that the government must adhere to when enforcing the law, particularly when acting to deprive an individual of life, liberty, or property. This aspect of due process ensures that individuals receive notice of legal proceedings, have the opportunity to be heard, and are treated impartially by government actors. Procedural due process is crucial in maintaining the legitimacy of the justice system and ensuring that individuals receive fair and objective treatment in accordance with established legal procedures. On the other hand, substantive due process involves the broader protections afforded by the Constitution to certain fundamental rights. This aspect of due process rights encompasses the legal norms and principles that govern the actual substance of the law, rather than the specific processes involved in its enforcement. Substantive due process, therefore, serves to protect individuals from laws that infringe upon their constitutionally protected liberties, such as the freedoms guaranteed in the Bill of Rights. In this regard, substantive due process encompasses principles such as the right to privacy, equal protection under the law, and the right to vote, which are essential to maintaining the core American values of individual liberty, democracy, and fundamental fairness. In conclusion, the right to due process is a crucial component of the United States legal system and a bedrock principle of American democracy. Through its various manifestations, such as the due process clause and the distinction between substantive and procedural due process rights, the right to due process acts as a powerful safeguard against arbitrary government action and as a guarantor of individual rights, liberty, and justice for all. The right to due process is a fundamental principle in any democratic society, ensuring that every individual’s basic rights are protected during legal proceedings. This bedrock of fairness guarantees that justice is served, and citizens can have faith in their judicial systems. Throughout this article, we will delve into the various aspects of the right to due process, including the right to notice and an opportunity to be heard in legal proceedings, the right to a fair and impartial trial, and the right to confront witnesses and present evidence in one’s defense. The right to notice and an opportunity to be heard in legal proceedings At the heart of due process rights are the notice requirement and the opportunity to be heard in legal proceedings. Essentially, this means that individuals must be notified of any actions taken against them and have the chance to present their side of the story. 
This procedural protection is integral to upholding fundamental fairness in the judicial system and ensuring that no one is punished without a fair hearing. Notice is a critical aspect of legal proceedings, as it provides individuals with enough information to prepare their defense and respond to the allegations against them. The notice requirement ensures that the accused is aware of the charges, the jurisdiction, and the time and place of the proceedings. Failure to provide adequate notice can result in a violation of one’s due process rights. Having an opportunity to be heard is another essential component of due process rights. This means that the accused has the right to present their defense, including testimony and evidence, during a trial or hearing. By ensuring that each party has a fair chance to make their case, the justice system promotes procedural protection and fundamental fairness in all legal proceedings. The right to a fair and impartial trial, including the right to an attorney and the right to a jury trial Another cornerstone of due process rights is the guarantee of a fair and impartial trial. This means that all parties involved in a legal proceeding must be treated fairly, and the judge or jury must remain neutral and unbiased throughout the process. Due process guarantees also extend to the right to an attorney, ensuring that individuals have access to legal representation during judicial proceedings. The right to a jury trial is a fundamental aspect of due process rights found in many democratic societies. By allowing a group of one’s peers to determine the outcome of a case, the justice system ensures that impartiality and fairness guide the judicial process. This element of due process has deep roots in history and continues to play a vital role in the administration of justice today. Legal representation is an indispensable part of due process rights. Individuals may choose to represent themselves, but the right to an attorney ensures that those who seek legal counsel have access to knowledgeable professionals who can navigate complex legal systems. In some cases, legal representation may even be provided at the state’s expense to ensure that every person has the opportunity for a fair trial. The right to confront witnesses and present evidence in one’s defense The ability to confront witnesses and present evidence in one’s defense is a critical aspect of due process rights. This ensures that individuals can challenge the allegations and evidence against them, as well as present their own testimony and exhibits in court. As part of the adversarial process, the defense strategy has a crucial role in ensuring a fair and transparent trial. Confronting witnesses is a vital right, as it allows the accused to cross-examine those who testify against them. This process helps to reveal any inconsistencies in the witness’s testimony and can lead to the discovery of crucial information about the case. By holding witnesses accountable for their statements, the justice system ensures that the adversarial process remains focused on uncovering the truth. The ability to present evidence is another essential component of due process rights. In addition to cross-examining witnesses, the defense strategy can involve the presentation of evidence such as documents, photographs, or other exhibits. This allows the accused to support their claims and challenge the evidence against them, ensuring that their side of the story is heard during the trial process. 
In conclusion, the right to due process is a fundamental aspect of any fair and democratic society. From the notice requirement and the opportunity to be heard, to the right to a fair trial and the right to confront witnesses, these basic protections ensure that our legal systems work to preserve the rights of individuals and uphold the principle of justice for all. The right to due process is a fundamental principle deeply embedded in the fabric of any democratic society. It refers to the idea that each individual is entitled to a fair and impartial hearing before any legal deprivation of life, liberty, or property. These rights are enshrined in the United States Constitution and form the cornerstone of the American justice system. In the following paragraphs, we will take a closer look at various elements of the right to due process, such as protection from unreasonable searches and seizures, self-incrimination, double jeopardy, and the right to appeal. The right to be free from unreasonable searches and seizures. The Fourth Amendment of the United States Constitution protects individuals against unreasonable searches and seizures. This amendment safeguards personal privacy by setting forth certain standards that need to be satisfied before government agents can conduct a search or make an arrest. In order to execute a search, law enforcement officers must have probable cause and typically obtain a court-issued warrant. The warrant requirement is a crucial aspect of the Fourth Amendment, as judges thoroughly examine the evidence before granting a warrant. Warrantless searches are generally not permissible, with a few exceptions such as exigent circumstances or an individual’s consent. By imposing these limitations on law enforcement, the Fourth Amendment protects our privacy rights and prevents the kind of arbitrary government intrusion that was prevalent before the founding of the United States. Throughout the years, the interpretation of the Fourth Amendment has evolved to adapt to new contexts, such as advanced technology and changing social norms. However, the core principle of seizures protections remains relevant and aims to strike a balance between individual liberties and the need for effective law enforcement. The right to be free from self incrimination and the privilege against self incrimination. The Fifth Amendment of the United States Constitution deals with the right to be free from self incrimination. Specifically, it establishes the privilege against forcing an individual to be a witness against themselves in a criminal case. This clause prevents law enforcement and prosecutors from compelling a person to provide evidence that could be used against them in court. As a result of the Fifth Amendment protection, the police must inform arrested individuals of their so-called Miranda rights. These rights include remaining silent, having an attorney present during questioning, and the right to consult with an attorney before answering any questions. By ensuring that suspects are aware of these rights, the courts help guarantee that any confession admissibility in court will not be the product of coercion. The silence protection provided by the Fifth Amendment also applies during a trial, implying that the accused person does not have to testify or offer any statement in their defense. This presumption of innocence creates a higher burden for the prosecution to prove guilt beyond reasonable doubt. 
The right to be free from double jeopardy and the protection against double jeopardy. Double jeopardy is another important aspect of the right to due process, enshrined in the Fifth Amendment. In essence, it protects individuals from being charged and tried multiple times for the same crime. The protection against double jeopardy means that once an individual has been acquitted or convicted, the case cannot be retried later, regardless of new evidence or legal interpretation. Several factors contribute to the concept of retrial restrictions. For example, cases may not be retried after a court has rendered a judgement of acquittal finality. Additionally, multiple prosecutions are allowed when they are based on different facts from the original trial or when the scope of a defendant’s crime is expanded. The double jeopardy rule emphasizes the finality of judicial decisions in criminal prosecution and reinforces the stability of our legal system. Consequently, it serves as a crucial aspect of due process that confers fairness and predictability to the accused. The right to appeal and seek post conviction relief. Lastly, the right to due process entails the right to appeal and seek post conviction relief. Individuals should have the opportunity to challenge a wrongful conviction or a severe sentence. This can be achieved through the legal mechanism of appealing a case to higher courts, where appellate judges can review the original trial’s proceedings for any legal errors or misconduct. In the United States, convicted individuals have several legal avenues for post-conviction relief, such as filing a motion for a new trial or applying for a writ of habeas corpus, which investigates if a person’s detention is lawful. Post-conviction relief aims to rectify potential injustices caused by human error or biased decision-making during the trial process. In some cases, the defendant may claim ineffective assistance of counsel, alleging that their lawyer failed to provide adequate representation during the trial. In these situations, post-conviction relief can offer a second chance for the defendant to receive a fair hearing and safeguard their right to due process. In conclusion, the right to due process affirms that individuals are entitled to fairness, impartiality, and respect when dealing with the legal system. By exploring the specific rights, such as protection from unreasonable searches and seizures, self-incrimination, double jeopardy, and the right to appeal, we can better appreciate the fundamental role due process plays in safeguarding our liberties and promoting an equitable rule of law. Frequently Asked Questions about Right to Due Process 3. What is the difference between substantive and procedural due process? Substantive due process refers to the specific rights protected by the Constitution that ensure a fair and just outcome of any legal proceeding. It guarantees that laws enacted by the government do not infringe upon any of these fundamental rights. For example, a person cannot be deprived of their life, liberty, or property without a sufficient legal basis. Procedural due process, on the other hand, ensures that the government follows fair and consistent procedures when enforcing laws. This guarantees that all parties involved in a legal proceeding have the right to be heard, receive notice of the proceeding, and have access to an impartial decision maker. It aims to provide a fair trial and hearing process, ensuring no one is disadvantaged due to arbitrary procedures. 4. 
How does due process relate to the Fourth, Fifth, and Sixth Amendments to the U.S. Constitution? Due process rights are upheld and protected by the Fourth, Fifth, and Sixth Amendments to the U.S. Constitution. The Fourth Amendment safeguards against unreasonable searches and seizures, as well as the requirement of probable cause to issue a search warrant. This ensures that an individual’s right to privacy is respected and that law enforcement cannot conduct searches without sufficient legal justification. The Fifth Amendment contains various provisions related to due process, such as the requirement that an individual cannot be tried twice for the same crime (double jeopardy) and the protection against self-incrimination. The most significant aspect, however, is the requirement that no one is deprived of life, liberty, or property without due process of law. The Sixth Amendment further enhances due process by guaranteeing the right to a speedy and public trial, an impartial jury, and the right to confront witnesses and have the assistance of counsel in criminal cases. Together, these amendments work to create a fair and consistent legal system. 5. Can due process be limited in times of national emergency? In times of national emergency or crisis, governments may restrict certain rights in a bid to preserve security and stability. However, any limitations on due process must be lawful, necessary, and proportional, and respect the fundamental rights of individuals. It is essential that these restrictions are temporary and rooted in a legitimate purpose. Various Supreme Court cases have upheld that the government may apply limitations on due process during a crisis, such as during wartime or significant threats to national security. Nonetheless, these cases have also emphasized that any restrictions must comply with the Constitution and existing laws to ensure accountability and prevent abuse of power. 6. How does the right to due process impact the criminal justice system? The right to due process has a profound impact on the criminal justice system by ensuring that individuals accused of a crime are treated fairly and consistently throughout the entire process, from investigation to conviction. It guarantees that defendants have access to an impartial trial, the right to be presumed innocent until proven guilty, competent legal representation, and the opportunity to present evidence and confront witnesses. Due process also promotes the transparency and integrity of the criminal justice system by requiring that law enforcement and the courts adhere to established procedures and respect fundamental rights. This not only helps protect innocent individuals but also strengthens public confidence in the legal system and its ability to deliver justice.
HGH, or human growth hormone, is still fairly new to many people, and most are not very familiar with it. Yet it is something that should not be neglected. The condition this guide is concerned with is human growth hormone deficiency, which is increasingly common in children. Many parents worry about it and look for help, but to get that help it is essential to first have a basic understanding of the hormone itself. So, in this guide, we will try to cover the key informational aspects of HGH in kids.

What is the assigned task for HGH?

As the name human growth hormone suggests, its task is to help the human body in its physical growth and development. Children keep growing because HGH is continuously secreted by the pituitary gland, which sits between the lobes of the brain.

HGH deficiency in kids

It is not uncommon for the pituitary gland, located between the brain lobes, to be unable to produce enough HGH for a child. This is where a deficiency of the hormone begins. Below we will look at the different types of HGH deficiency children can face, the ways of identifying the deficiency, and the steps one can take after identifying it.

Types of HGH deficiency

There are two types of deficiency commonly seen in children’s HGH levels:
- By-birth (congenital) growth hormone deficiency – the child is born with the condition. Children born with HGH deficiency are also at risk of deficiencies in other hormones. In babies the problem is difficult to spot; it may only become noticeable once they are 6 to 12 months old.
- Acquired growth hormone deficiency – the body stops producing HGH, or its production drops sharply. This condition can appear at any point in childhood.

How to identify HGH deficiency through symptoms
- Late onset of puberty
- Delayed growth of teeth
- Weak muscles
- Small-sized penis at birth
- Low blood sugar levels

What may lead to HGH deficiency in kids?

No single cause can be given for HGH deficiency in children. In the cases where the doctor is able to identify a reason, it most often turns out to be an issue with the pituitary gland, or a problem in the brain just around that gland. Some main causes of the deficiency include:
- Head injuries
- A brain tumor
- Radiation treatment

Some medically proven ways of diagnosing growth hormone deficiency in kids
- Blood tests – There is no single test that identifies exact growth hormone levels, because growth hormone is released in short bursts, often overnight. When blood testing a child for HGH, doctors therefore focus on the levels of two related proteins, which allow better identification and diagnosis.
- X-ray for bone age – To check a child’s HGH level, doctors conduct an X-ray of the hand or wrist. The X-ray report is then compared with the X-ray reports of other children. If the child’s bone age turns out to be less than his or her actual age, this can be a sign of growth hormone deficiency.
- Stimulation test – This test is considered when other test reports show signs of growth hormone deficiency. For this test, the child has to fast for a few hours, usually overnight, without eating or drinking anything. The child is then given a medicine by the doctor that stimulates the production of growth hormone in the body. After that, blood samples are taken at regular intervals to check the child’s HGH levels.
- Brain MRI – A clear, detailed picture of the brain is taken. From this picture, doctors can identify the exact issue in or around the pituitary gland.

What is the process to treat HGH deficiency?

If your child’s deficient HGH levels need treatment, the usual option is daily HGH injections. As a parent, you can learn the process of giving your child the shot; once you have learned it, you can give the injection at home and will not have to visit the doctor daily.

This, then, is a general guide to understanding everything related to HGH levels in kids. Once you have read it, you should be better placed to keep your child in good health and to manage your child’s HGH levels if he or she is suffering from a deficiency in the hormone. Yes, this is a serious issue for your child, but treating it in the right way is what matters most.
Promoting British Values

The issues of radicalisation and extremism have become significant concerns in our country in recent times and schools are a key focus in addressing this. The Government expects all schools and academies to actively promote the Prevent Duty (see below) and to ensure that all our children know and embrace ‘fundamental British values’. Ofsted say that these fundamental British values are:
- democracy.
- the rule of law.
- individual liberty.
- mutual respect for and tolerance of those with different faiths, beliefs and those of no faith.

As a church school we believe that the Christian understanding of compassion, acceptance of all, obedience and service are fully in line with these values. As such, we believe that they have been part of life in our school for many years – we wish to continue the good work that has been done in the past and build on it as we would in all areas of PSHE (Personal, Social and Health Education); SMSC (Social, Moral, Spiritual and Cultural) and safeguarding. A large amount of the discussion on British Values has centred around issues of nationality, religion and culture. At Geddington we recognise their importance but also wish our children to value diversity in all its forms including sexuality, disability, gender and age.

British Values in EYFS/Reception:

Right from the start of their time in school, children are taught about the importance of respecting each other, sharing, making choices, rules and accepting the consequences of their behaviour. Some of the ways we promote this include:
- Rules and routines – clear rules and expectations for the different parts of a day in school and discussion about what these rules are and why we have them.
- Snack time – where we develop social skills by serving each other, helping each other and allowing people to make choices.
- Show and Tell – where we learn to listen and appreciate the things that others enjoy and find important.
- Activity choices – part of the EYFS curriculum is to allow children some choice in the activities they do with the expectation that they cooperate with others and be considerate in the way they play.
- Group choices – sometimes we make decisions together and these opportunities are used to introduce the children to the concept of voting and accepting the choice of the majority.
- Range of resources – we are carefully building up our range of resources so that they increasingly reflect the differing cultures within the UK. This includes the range of stories that children hear and experience.
- Special events and celebrations that link to events in and out of school.

British Values in KS1 and KS2:

The principles introduced in Reception are continued and developed as children get older. In KS2 we expect children to gain a more detailed knowledge of some of the specific issues that lie behind these values.
- At least one topic in each year focuses on another part of the world so that children learn to appreciate and value the diversity that exists across the world and within our country. As an Eco-school we also consider how we have a mutual responsibility to each other in the ways we look after the environment.
- Debates and discussions are part of most topics and often feature within English as well, where children often have to write from different viewpoints.
- In PSHE children are regularly introduced to different aspects of the rule of law, particularly those to do with smoking, alcohol, drugs and the age of consent. Children are also introduced to some of the rules of the road.
- PSHE is also the place where children consider how we relate to one another, how our actions impact on others and how to resist peer pressure.
- E-safety is an integral part of our computing curriculum. Children are taught how to keep themselves safe online and to watch out for those things that might cause offence or draw them, or others, into inappropriate behaviour.
- The School Council and the Eco-committee are made up of elected representatives, giving children an introduction to the democratic process. Children learn of the origins of democracy in Ancient Greece.
- The school’s Golden Rules are discussed and agreed through the School Council.
- Children learn about other faiths through RE. The school follows the Northamptonshire Agreed Syllabus and children are encouraged to adopt a respectful, enquiry-based approach to their learning. A series of visits to places of worship is being planned, and visitors from different faith groups are invited into school when appropriate.
- Assemblies are planned into weekly themes which recognise major celebrations, national events and relationship/behavioural themes. Stories from different faiths are used alongside Christian stories and stories that have no faith background. Where appropriate, assemblies are increasingly used to reflect on major news events.
- The school takes an active role in different charitable events and has raised large amounts of money for different organisations.
- The school subscribes to First News, My Weekly and Espresso, all of which provide child-appropriate comment on current affairs. These are used across KS2 and KS1.
- We also look to use one-off events to promote British Values – local magistrates visit Year 6 to explain their role and to conduct a mock trial on cyber bullying.

The school also takes advantage of the ad hoc opportunities that come up to promote British values. This can be through specific events, times when we are addressing behaviour, the celebration of high quality work and many other ways.

Definitions
- Radicalisation – refers to the process by which a person comes to support terrorism and forms of extremism leading to terrorism. (The Prevent Duty: Departmental advice for schools and childcare providers – DfE, June 2015)
- Extremism – vocal or active opposition to fundamental British values, including democracy, the rule of law, individual liberty and mutual respect and tolerance of different faiths and beliefs. We also include in our definition of extremism calls for the death of members of our armed forces, whether in this country or overseas. Terrorist groups very often draw on extremist ideas developed by extremist organisations. (The Prevent Duty: Departmental advice for schools and childcare providers – DfE, June 2015)
- The Prevent Duty – the legal duty on all public bodies to protect children and vulnerable adults from becoming drawn into radicalisation or extremism. This ranges from far-right movements of a white supremacist nature through to Islamic State and the like.
Birds have always fascinated mankind. They are described as “glorified reptiles”, which denotes their line of evolution. These warm-blooded vertebrates have a beautiful plumage and very interesting habits like courtship, nest-building, parental care and migratory flights. Their wonderful song calls make our mornings and evenings very pleasant. The branch of Zoology which deals with the study of birds is called Ornithology. Dr. Salim Ali is the celebrated ornithologist of India.

Different birds and their habitats
- Crow (Corvus splendens) Body with black plumage, grey around the neck. This is a scavenger inhabiting human dwelling areas and is highly useful to man (commensal). Male and female are alike and show no sexual dimorphism. Omnivorous. (The jungle crow does not have a grey neck.) It builds nests and looks after its young ones.
- Cuckoo (Eudynamys sp.) Male with shining metallic black feathers, a striking yellowish-green bill and blood-red eyes. Females are grayish brown, spotted and barred with white. So there is a well-marked sexual dimorphism. The male has an attractive song call; the female has no song. It does not build nests; it lays its eggs in a crow’s nest and the young ones are hatched and looked after by foster parents. Its song is not heard in winter, but it becomes noisy in spring and summer.
- Pigeon (Columba livia) Commonly called the blue rock pigeon. Color is slaty grey with a glistening metallic green, purple and magenta sheen on the neck and upper breast. Two dark bars on the wings and a band across the end of the tail. No sexual dimorphism; semi-domesticated. Commensal of man.
- Mynah (Acridotheres tristis) Deep brown in color with bright yellow bill and legs. No sexual dimorphism. Always found in groups of 2 to 10. Inhabits human dwelling areas. Builds nests.
- Parrot (Psittacula krameri) Commonly called the rose-ringed parakeet. Body slender with a pointed tail. Feathers grass-green in color. The male has a black and red collar which is absent in the female. Bill red, curved, adapted for nut-cracking. Builds nests in hollow tree-trunks, crevices and holes of buildings. Often found in large flocks. It is a popular cage bird and can be taught to talk.
- Owl (Bubo bubo) Commonly called the great horned owl. Large, heavy and robust birds. Color is dark brown, streaked and mottled with tawny buff and black. The head is large and bears two conspicuous black ear-tufts or horns. Eyes large and round, forwardly directed. Legs fully feathered. Mainly nocturnal but frequently seen during the daytime. They feed on rodents and harmful insect pests and so are helpful to agriculturists, and hence need to be protected.
- Woodpecker (Dinopium benghalense) Commonly called the golden-backed woodpecker. Small bird with distinctive golden-yellow and black plumage above and buffy white with black streaks below. The entire crown and crest on the head is crimson in the male and only partly so in the female. Bill long, stout and pointed; tongue protrusible and barb-tipped. Tail stiff and wedge-shaped. Wood-boring habit.
- Sparrow (Passer domesticus) Small bird with the upper surface earthy brown, streaked with black, and the underparts whitish. The male has a black area on the throat and breast. Feeds on seeds and grains. Unfailing commensal of man. The nest is a collection of straw and rubbish stuffed into a hole in the wall. Useful to agriculture as it destroys several insect pests.
- Bulbul (Hypsipetes leucocephalus) Body grey-black or ash-brown. Beak and legs red in color. Head has a crest. Builds nests.
- Kite (Milvus migrans) Body brown in color. Tail forked.
No sexual dimorphism. Beak sharp, strong and curved. Carnivorous. Lives in the neighborhood of man. Nesting season from September to April.
Kyoto University and a Japanese company are working together to develop the world’s first wooden satellites, which could help reduce space junk. At present, the Japanese researchers are experimenting with different types of wood to make satellites resistant to temperature changes and sunlight. The researchers will then develop an engineering model of the satellite, after which manufacturing of the flight model will begin. They believe that, if everything goes to plan, the first wooden satellite could be ready for launch by 2023. Takao Doi, a professor at Kyoto University, told the BBC that satellites re-entering the Earth’s atmosphere burn up and produce small alumina particles, which float in the upper atmosphere for many years and affect the Earth’s environment. Doi added that wooden satellites would burn up safely on entering the Earth’s atmosphere without releasing harmful substances.

How many satellites are currently there?

Currently, about 6,000 satellites are circling the Earth, of which only about 40% are operational; the other 60% are space junk. According to NASA, over 500,000 pieces of space junk are tracked as they orbit the Earth. Scientists are worried about space junk as more and more satellites are launched into space. When satellites reach the end of their lives or are no longer usable, they are either left in orbit or deorbited and burned up in the Earth’s atmosphere, but neither method is an effective means of disposal. A satellite left in orbit adds to the thousands of pieces of space junk already present around the Earth. Space junk moves at speeds of over 22,300 mph and can cause serious damage to other satellites and rockets in space. Satellites are generally made of aluminium because it is durable, lightweight and strong enough to resist extreme temperatures and space radiation. When a satellite burns up in the Earth’s atmosphere, the aluminium it contains produces small particles of alumina which can stay in the atmosphere for years and damage the ozone layer. Several research groups are already exploring different options to cut down space junk. The hope is that the launch of environmentally friendly wooden satellites will bring a revolution in satellite design and reduce space junk.
Loose Parts Play

The Theory of Loose Parts

One of our favorite ways to encourage divergent thinking and play at camp is through the idea of loose parts play. The Theory of Loose Parts was first proposed by architect Simon Nicholson: “In any environment, both the degree of inventiveness and creativity, and the possibility of discovery, are directly proportional to the number and kind of variables in it.” (The Theory of Loose Parts: An important principle for design methodology, 1972) Basically, the more kids can move stuff around and experiment, the more chances they will have to be creative.

Loose Parts at Camp

What that means for us at camp is that giving kids loose parts, and permission to play with them, might be the best way to build skills like creativity, collaboration, and critical thinking. This can be accomplished with natural play areas, fort building, toys like Legos or K’Nex, arts and crafts projects, and other experiences where the kids have the autonomy to control aspects of the environment. Gary Forester calls this idea “Empty Box” thinking, because often the best gift kids can get is a big empty box to play in and create magical new experiences. One simple loose parts activity all camps can do is one Michael Brandwine developed called FOTAY (Figure Out The Activity Yourself). FOTAY is brilliantly simple. With a group of kids, gather up a random bunch of program supplies, like fun noodles, balls, rope, and anything you have just lying around. Then let the kids make up a game or activity with the loose parts. Jason Smith at YMCA Camp Kitaki encourages his staff to begin the activity by giving the kids a bunch of those loose parts and pretending to have forgotten an old game.
The foundations in the Mater Christi Preschool are Family, Faith and Community. Children are unique in how they perceive the world. Parents, grandparents, siblings, birth order, gender, learning style and culture are all important pieces to consider when encouraging a successful preschool education experience and educating the “whole child.” It is our hope that we can develop in each child a positive self-image and a love of learning and going to school. In the 3-year-old Pre-School class, we provide many activities to do just that. Each activity that is presented to the children is the result of much planning. Children like to be actively involved, and when more than one sense is used in teaching a task, learning is greatly improved.
- Art is used as a teaching device, as well as a fun way to play, in the preschool class. Freedom with art supplies encourages using the imagination and is also a good release for emotions. Eye-hand coordination is also developed.
- Math is taught through games adapted to follow the theme of the day. For example, counting how many carrot sticks were eaten at lunch time. Other math games include identifying shapes, measuring, sorting, comparisons and counting, counting, and more counting.
- Science ideas that are simple to adults often are magic to the young preschooler. The children are introduced to meteorology, biology, chemistry and physics in very simple forms.
- Language Development includes time set aside in group time for show and tell, discussions about different subjects, story time and, most importantly, books.
- Creative Dramatics and movement encourage free expression and also allow the imagination to surface. This is done through the use of puppets, dress-up clothes, blocks, and various props added to the classroom.
- Games and Social Activities as a group happen often in the 3-year-old classroom. Games that are played in a group teach the children how to cooperate with one another and to take turns.
- Music is used as an important teaching tool in the 3-year-old class. It is easy to remember the rules at school when a song is sung to remind the children what to do next. Music activities also include activity CDs to help children listen and follow directions. Children are introduced to a number of instruments and encouraged to play as they march around the classroom. Anything can be taught through music.
- Physical Development is included through large muscle play on our playground, P.E. classes, activity CDs or songs, and our daily morning workout. Pre-School students enjoy Yoga in class. Yoga is an opportunity to learn “self regulation” through movement and breathing.

Because young children learn best through play, the Mater Christi Pre-School program is play-based. This means that we facilitate play which encourages learning in the eight domains of the Vermont Early Learning Standards. These domains include:
- Approaches to Learning
- Social and Emotional Development
- Language, Literacy, and Communication
- Social Studies
- Creative Expression
- Physical Health and Development

Our program has a daily routine of circle time and morning meeting, centers, creative movement, book time, prayer time, free play, outdoor time, music, creative/artistic expression, and rest time.
We’ve all heard the phrase “every little bit adds up.” Such is the case with food waste. It might not seem as if we throw away all that much food (whether it’s leftover pizza one day or lettuce scraps from the back of the refrigerator the next). However, the truth is that food waste is a significant problem in America and around the world. The average American household throws away about 32% of the food that it buys, according to the American Journal of Agricultural Economics. According to estimates by the U.S. Department of Agriculture, that amounts to about $1,500 in wasted food each year for a family of four. Then, there are the larger environmental costs as food waste releases harmful methane emissions. So how can you realistically reduce your food waste? It starts with your grocery trips. You should think carefully about how much food you typically eat each week and buy only as much as you need. Beyond that, think about keeping a food waste log. Consider printing out this log each week and attaching it to your refrigerator or placing it on your kitchen counter. Every time you throw food away, be sure to record what you threw away, how much you threw away, why you threw it away and how much it likely cost. The idea is that, as you fill out this log each week, you will gradually become more aware of how much you throw away and hopefully begin to throw away less. In addition to maintaining a food waste log, you should also try to keep your food fresh for as long as possible. There are a number of unique tips and tricks that you can use. For example, you should store celery in foil, not plastic, to keep it crisp for longer periods of time. As another example, you should remove the green tops of carrots, which suck the moisture out of carrots. Print out this food saver cheat sheet with strategies on storing more than 20 common foods to maximize freshness. Finally, you should familiarize yourself with the rules surrounding “best by” dates on products. In most cases (with the exception of infant formula), these dates are simply guidelines. They are not federal requirements. Therefore, many common foods could be safe to eat after their listed expiration dates. Everyone has at least some work to do in cutting down the amount of food that they throw away! According to the American Journal of Agricultural Economics, even the least wasteful American households throw away 9% of the food that they buy. Download all of these food waste reduction resources and reduce your food waste today.
How to write a creative book report

Book reports are popular assignments in school, common for students from elementary school through high school. A book report should summarize the book that you read; a well-written report lets your teacher know that you read the book and understood it. To write one, start by introducing the author and the name of the book and then briefly summarize the story. Next, discuss the main themes and point out what you think the author is trying to suggest to the reader. Stories, though, are always more exciting than a chapter-by-chapter analysis: most likely, if a person is reading your review of a book, they already want to hear your stories. Tell them what your children said when you read it to them, what made you laugh, or what memories you have associated with reading it. Reading more book reviews done by others, mostly on blogs, also helps.

There are many ideas for more creative book reports (Education World, for example, offers 25 ideas), and if an idea doesn’t include enough writing, creative (sneaky!) teachers will usually find a way to work it in, or use the idea to supplement or replace parts of favorite book report formats. Some examples:
- Each student creates a “Ten Facts About [book title]” sheet that lists ten facts he or she learned from reading the book.
- Write ten questions that test other students’ understanding of the story, for example about when it took place.
- Pretend you are a talk show host and interview the main character.
- Write a resume for a character; the student should include a statement of the applicant’s goals and a detailed account of his or her experience and outside interests.
- Vocabulary: create a ten-word glossary of unfamiliar words from the book. Then the student creates a word search puzzle that includes the glossary words.
- Each student creates a Venn diagram to illustrate similarities and differences in the traits of two of the main characters in a book just completed.
- Choose two characters from the story and write a conversation they might have.
- Whether the story setting is real or imaginary, design a travel brochure to entice visitors.
- Write step-by-step directions and rules that are easy to follow.
- Create a class newspaper; the title of the newspaper should be something appropriate to the book.
- Other named ideas include “Prove It in Five Minutes,” cyber book reports, and descriptive writing exercises.
Ideas for parents

Here are five ideas for getting involved with your child’s computing at home:
- Let them show you how to use their favourite app or do something that they have learned in school.
- Ask children how they have been using technology this week, what their favourite app is, and so on.
- Keep in touch with family members by composing emails together or using services like Skype to make video calls.
- Discuss how useful these tools can be when used responsibly.
- Make sure they feel they can come to you, should an issue arise for them.

Computing curricula across the UK and beyond

A complete computing curriculum is available for primary schools to use for free. Resources are drawn from Google, ThinkUKnow, Code Club and others, and are of very high quality; some may require registration and download. The school purchases a site licence and renews this on an annual basis. Unplugged tasks, where concepts are taught away from the computer using techniques such as role play, can also work well. Computing doesn’t stretch to the early years (EYFS), but technology is mentioned in the EYFS framework. In Northern Ireland, technology is included within the World Around Us area of learning; for Scotland, see the Curriculum for Excellence experiences and outcomes for technologies, and details about information and communication technology in the national curriculum for Wales are published separately. In secondary schools, the time spent learning computer skills tends to increase, but it is significantly influenced by whether Computer Studies is offered as a school subject and by the number of computers that schools possess.

The SSRVM Curriculum, Computer Science, 2007 Edition: the teaching material for the SSRVM schools is developed by the SSRVM Academic Council. It is written in response to the pressing need to provide academic coherence to the rapid growth of computing and technology in the modern world, alongside the need for an educated public that can utilize that technology most effectively to the benefit of humankind. It supplements the curriculum prescribed by the Board (ICSE/CBSE/State) with which a school may be affiliated.

The primary curriculum is designed to nurture the child in all dimensions of his or her life—spiritual, moral, cognitive, emotional, imaginative, aesthetic, social and physical. It aims to provide a broad learning experience and encourages a rich variety of approaches to teaching and learning that cater for the different needs of individual children.

The national curriculum in England: computing programmes of study

Computing has deep links with mathematics, science and design and technology, and provides insights into both natural and artificial systems. The national curriculum for computing aims to ensure that all pupils:
- can understand and apply the fundamental principles and concepts of computer science, including abstraction, logic, algorithms and data representation
- can analyse problems in computational terms, and have repeated practical experience of writing computer programs in order to solve such problems
- can evaluate and apply information technology, including new or unfamiliar technologies, analytically to solve problems
- are responsible, competent, confident and creative users of information and communication technology

Key stage 1. Pupils should be taught to:
- understand what algorithms are, how they are implemented as programs on digital devices, and that programs execute by following precise and unambiguous instructions
- create and debug simple programs
- use logical reasoning to predict the behaviour of simple programs
- use technology purposefully to create, organise, store, manipulate and retrieve digital content
- recognise common uses of information technology beyond school
- use technology safely and respectfully, keeping personal information private; identify where to go for help and support when they have concerns about content or contact on the internet or other online technologies

Key stage 2. Pupils should be taught to:
- design, write and debug programs that accomplish specific goals, including controlling or simulating physical systems; solve problems by decomposing them into smaller parts
- use sequence, selection, and repetition in programs; work with variables and various forms of input and output
- use logical reasoning to explain how some simple algorithms work and to detect and correct errors in algorithms and programs
- understand computer networks, including the internet; how they can provide multiple services, such as the World Wide Web, and the opportunities they offer for communication and collaboration
- use search technologies effectively, appreciate how results are selected and ranked, and be discerning in evaluating digital content
- select, use and combine a variety of software (including internet services) on a range of digital devices to design and create a range of programs, systems and content that accomplish given goals, including collecting, analysing, evaluating and presenting data and information
- use technology safely, respectfully and responsibly; recognise acceptable/unacceptable behaviour; identify a range of ways to report concerns about content and contact

Key stage 3. Pupils should be taught to:
- design, use and evaluate computational abstractions that model the state and behaviour of real-world problems and physical systems
- understand several key algorithms that reflect computational thinking [for example, ones for sorting and searching]; use logical reasoning to compare the utility of alternative algorithms for the same problem
- use 2 or more programming languages, at least one of which is textual, to solve a variety of computational problems; make appropriate use of data structures [for example, lists, tables or arrays]; design and develop modular programs that use procedures or functions
- understand simple Boolean logic [for example, AND, OR and NOT] and some of its uses in circuits and programming; understand how numbers can be represented in binary, and be able to carry out simple operations on binary numbers [for example, binary addition, and conversion between binary and decimal]
- understand the hardware and software components that make up computer systems, and how they communicate with one another and with other systems
- understand how instructions are stored and executed within a computer system; understand how data of various types (including text, sounds and pictures) can be represented and manipulated digitally, in the form of binary digits
- undertake creative projects that involve selecting, using, and combining multiple applications, preferably across a range of devices, to achieve challenging goals, including collecting and analysing data and meeting the needs of known users
- create, reuse, revise and repurpose digital artefacts for a given audience, with attention to trustworthiness, design and usability
- understand a range of ways to use technology safely, respectfully, responsibly and securely, including protecting their online identity and privacy; recognise inappropriate content, contact and conduct, and know how to report concerns

Key stage 4. All pupils must have the opportunity to study aspects of information technology and computer science at sufficient depth to allow them to progress to higher levels of study or to a professional career. All pupils should be taught to:
- develop their capability, creativity and knowledge in computer science, digital media and information technology
- develop and apply their analytic, problem-solving, design, and computational thinking skills
- understand how changes in technology affect safety, including new ways to protect their online privacy and identity, and how to report a range of concerns

This publication is available at https://www.gov.uk/government/publications/national-curriculum-in-england-computing-programmes-of-study/national-curriculum-in-england-computing-programmes-of-study.
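The key stage 3 bullets on binary representation and on simple algorithms are easiest to see in a short program written in a textual language. The following Python sketch is purely illustrative (it is not taken from any official scheme of work): it shows decimal/binary conversion and a simple search of the kind pupils might write; the function names are invented for this example.

```python
# Illustrative classroom-style examples only (not from any official scheme of work):
# binary/decimal conversion and a simple search, as mentioned in the key stage 3 bullets.

def decimal_to_binary(n):
    """Repeatedly divide by 2, collecting remainders, to build the binary string."""
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits
        n //= 2
    return bits

def binary_to_decimal(bits):
    """Each digit doubles the running total (place value), then adds the new bit."""
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

def linear_search(items, target):
    """Check each item in turn; return its position or -1 if it is not present."""
    for index, item in enumerate(items):
        if item == target:
            return index
    return -1

print(decimal_to_binary(13))                                 # "1101"
print(binary_to_decimal("1101"))                             # 13
print(binary_to_decimal("101") + binary_to_decimal("11"))    # 5 + 3 = 8, i.e. 101 + 11 = 1000 in binary
print(linear_search([4, 8, 15, 16], 15))                     # 2
```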
Videos, music and oral history related to potatoes. The potato is a root vegetable native to the Americas, a starchy tuber of the plant Solanum tuberosum, and the plant itself is a perennial in the nightshade family, Solanaceae. Wild potato species, originating in modern-day Peru, can be found throughout the Americas, from the United States to southern Chile. The potato was originally believed to have been domesticated by indigenous peoples of the Americas independently in multiple locations, but later genetic testing of the wide variety of cultivars and wild species traced a single origin for potatoes. In the area of present-day southern Peru and extreme northwestern Bolivia, from a species in the Solanum brevicaule complex, potatoes were domesticated approximately 7,000–10,000 years ago. In the Andes region of South America, where the species is indigenous, some close relatives of the potato are cultivated. Potatoes were introduced to Europe from the Americas in the second half of the 16th century by the Spanish. Today they are a staple food in many parts of the world and an integral part of much of the world's food supply. As of 2014, potatoes were the world's fourth-largest food crop after maize (corn), wheat, and rice. Following millennia of selective breeding, there are now over 5,000 different types of potatoes. Over 99% of presently cultivated potatoes worldwide descended from varieties that originated in the lowlands of south-central Chile. The importance of the potato as a food source and culinary ingredient varies by region and is still changing. It remains an essential crop in Europe, especially Northern and Eastern Europe, where per capita production is still the highest in the world, while the most rapid expansion in production over the past few decades has occurred in southern and eastern Asia, with China and India leading the world in overall production as of 2018. Like the tomato, the potato is a nightshade in the genus Solanum, and the vegetative and fruiting parts of the potato contain the toxin solanine which is dangerous for human consumption. Normal potato tubers that have been grown and stored properly produce glycoalkaloids in amounts small enough to be negligible to human health, but if green sections of the plant (namely sprouts and skins) are exposed to light, the tuber can accumulate a high enough concentration of glycoalkaloids to affect human health.
Attitude and Heading Sensors from CH Robotics can provide orientation information using both Euler Angles and Quaternions. Compared to quaternions, Euler Angles are simple and intuitive and they lend themselves well to simple analysis and control. On the other hand, Euler Angles are limited by a phenomenon called "gimbal lock," which prevents them from measuring orientation when the pitch angle approaches +/- 90 degrees. Quaternions provide an alternative measurement technique that does not suffer from gimbal lock. Quaternions are less intuitive than Euler Angles and the math can be a little more complicated. This application note covers the basic mathematical concepts needed to understand and use the quaternion outputs of CH Robotics orientation sensors.

2. Quaternion Basics

A quaternion is a four-element vector that can be used to encode any rotation in a 3D coordinate system. Technically, a quaternion is composed of one real element and three complex elements, and it can be used for much more than rotations. In this application note we'll be ignoring the theoretical details about quaternions and providing only the information that is needed to use them for representing the attitude of an orientation sensor. The attitude quaternion estimated by CH Robotics orientation sensors encodes rotation from the "inertial frame" to the sensor "body frame." The inertial frame is an Earth-fixed coordinate frame defined so that the x-axis points north, the y-axis points east, and the z-axis points down as shown in Figure 1. The sensor body-frame is a coordinate frame that remains aligned with the sensor at all times. Unlike Euler Angle estimation, only the body frame and the inertial frame are needed when quaternions are used for estimation (Understanding Euler Angles provides more details about using Euler Angles for attitude estimation). Let the vector q = [a, b, c, d]^T be defined as the unit-vector quaternion encoding rotation from the inertial frame to the body frame of the sensor, where ^T is the vector transpose operator. The elements b, c, and d are the "vector part" of the quaternion, and can be thought of as a vector about which rotation should be performed. The element a is the "scalar part" that specifies the amount of rotation that should be performed about the vector part. Specifically, if θ is the angle of rotation and the unit vector [x, y, z]^T represents the axis of rotation, then the quaternion elements are defined as

a = cos(θ/2), b = x·sin(θ/2), c = y·sin(θ/2), d = z·sin(θ/2).

In practice, this definition needn't be used explicitly, but it is included here because it provides an intuitive description of what the quaternion represents. CH Robotics sensors output the quaternion q when quaternions are used for attitude estimation.

3. Rotating Vectors Using Quaternions

The attitude quaternion can be used to rotate an arbitrary 3-element vector from the inertial frame to the body frame. That is, a vector can be rotated by treating it like a quaternion with zero real part and pre- and post-multiplying it by the attitude quaternion and its inverse. The inverse of a unit quaternion is equivalent to its conjugate, which means that all the vector elements (the last three elements in the vector) are negated. The rotation also uses quaternion multiplication, which has its own definition. Define quaternions p = [p0, p1, p2, p3]^T and q = [q0, q1, q2, q3]^T. Then the quaternion (Hamilton) product p ⊗ q is given by

p ⊗ q = [ p0·q0 − p1·q1 − p2·q2 − p3·q3,
          p0·q1 + p1·q0 + p2·q3 − p3·q2,
          p0·q2 − p1·q3 + p2·q0 + p3·q1,
          p0·q3 + p1·q2 − p2·q1 + p3·q0 ]^T.

To rotate a vector from the body frame to the inertial frame instead, the same two quaternion multiplies are used, but with the inverse of the attitude quaternion.
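To make the two-multiply rotation concrete, here is a minimal Python sketch. It is illustrative only, not CH Robotics library code: the function names are invented for this example, the Hamilton product is assumed, and the inertial-to-body rotation is implemented as q⁻¹ ⊗ v ⊗ q, which is one common convention; a given sensor may use the conjugate ordering instead, so check the device documentation.

```python
# Minimal sketch of quaternion rotation (illustrative only, not CH Robotics library code).
# Quaternions are tuples (a, b, c, d) with a the scalar part, as in the text.
import math

def quat_multiply(p, q):
    """Quaternion (Hamilton) product of p and q."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def quat_conjugate(q):
    """Conjugate of q; equal to the inverse for a unit quaternion (vector part negated)."""
    a, b, c, d = q
    return (a, -b, -c, -d)

def inertial_to_body(q, v):
    """Rotate 3-vector v from the inertial frame to the body frame.

    Assumes the convention v_body = q^-1 (x) [0, v] (x) q; some devices use the
    conjugate ordering, so verify against the sensor documentation.
    """
    pure = (0.0, v[0], v[1], v[2])                 # treat v as a quaternion with zero real part
    r = quat_multiply(quat_multiply(quat_conjugate(q), pure), q)
    return r[1:]                                   # discard the (near-zero) real part

# Example: with the body frame yawed +90 degrees, the inertial x-axis (north)
# lies along the body -y axis, i.e. (1, 0, 0) maps to approximately (0, -1, 0).
theta = math.radians(90.0)
q = (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))   # a = cos(theta/2), d = z*sin(theta/2)
print(inertial_to_body(q, (1.0, 0.0, 0.0)))
```

Swapping q and its conjugate inside inertial_to_body performs the reverse (body-to-inertial) rotation, which matches the note below that inverting the attitude quaternion reverses the operation.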
Alternatively, the attitude quaternion can be used to construct a 3x3 rotation matrix, so that the rotation from the inertial frame to the body frame is performed in a single matrix multiply operation: the matrix is built from the quaternion elements, and the body-frame vector is obtained by multiplying it with the inertial-frame vector. Regardless of whether quaternion multiplication or matrix multiplication is used to perform the rotation, the rotation can be reversed by simply inverting the attitude quaternion before performing the rotation. Negating the vector part of the quaternion reverses the operation.

4. Converting Quaternions to Euler Angles

CH Robotics sensors automatically convert the quaternion attitude estimate to Euler Angles even when in quaternion estimation mode. This means that the convenience of Euler Angle estimation is available even when the more robust quaternion estimation is being used. If the user doesn't want the sensor to transmit both Euler Angle and quaternion data (for example, to reduce communication bandwidth requirements), then the quaternion data can be converted to Euler Angles on the receiving end. The exact equations for converting from quaternions to Euler Angles depend on the order of rotations. CH Robotics sensors move from the inertial frame to the body frame using first yaw, then pitch, and finally roll, and the conversion equations for roll, pitch, and yaw follow from that rotation order. See the chapter on Understanding Euler Angles for more details about the meaning and application of Euler Angles. When converting from quaternions to Euler Angles, the atan2 function should be used instead of atan so that the output range is correct. Note that when converting from quaternions to Euler Angles, the gimbal lock problem still manifests itself. The difference is that since the estimator is not using Euler Angles, it will continue running without problems even though the Euler Angle output is temporarily unavailable. When the estimator runs on Euler Angles instead of quaternions, gimbal lock can cause the filter to fail entirely if special precautions aren't taken.
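The conversion equations themselves appeared as figures in the original note and are not reproduced above. A common yaw-pitch-roll (Z-Y-X) formulation is sketched below in Python; this follows the standard textbook convention for a scalar-first quaternion, not necessarily the exact sign convention of CH Robotics firmware, so the signs should be checked against the sensor documentation before use.

```python
import math

def quat_to_euler(a, b, c, d):
    """Convert a unit quaternion (a = scalar part, b/c/d = vector part) to
    roll, pitch, yaw in radians for a Z-Y-X (first yaw, then pitch, then roll)
    rotation sequence. Sign conventions follow the common aerospace formulation
    and may need adjustment for a particular sensor's definition."""
    roll = math.atan2(2.0 * (a * b + c * d), 1.0 - 2.0 * (b * b + c * c))
    # Clamp before asin to guard against numerical overshoot near +/-90 deg pitch.
    s = max(-1.0, min(1.0, 2.0 * (a * c - d * b)))
    pitch = math.asin(s)
    yaw = math.atan2(2.0 * (a * d + b * c), 1.0 - 2.0 * (c * c + d * d))
    return roll, pitch, yaw

if __name__ == "__main__":
    # A pure 90-degree yaw with zero pitch and roll: q = (cos 45, 0, 0, sin 45).
    r, p, y = quat_to_euler(math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
    print(math.degrees(r), math.degrees(p), math.degrees(y))  # ~0, ~0, ~90
```

The atan2 calls keep roll and yaw in the full +/-180 degree range, which is exactly why the text above recommends atan2 over atan; the asin call for pitch is what makes the gimbal-lock singularity at +/-90 degrees pitch visible in the Euler output.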
Status & Threats

Temperate grasslands are now the most altered ecosystem on the planet, and the most endangered habitat in most countries where they occur.

COASTAL PRAIRIE: A DISAPPEARING RESOURCE

Noss and Peters (1995) provide a sobering summary of the state of California grasslands. Agriculture, invasion by exotic species, development, and other human-related activities have reduced California native grasslands by 99%. As a result, California grasslands are among the 21 most endangered ecosystems in the United States. More than 25 grassland species are threatened or endangered (Noss and Peters 1995). Coastal prairie has fared only slightly better than California grasslands as a whole: only 10% of the native coastal prairie communities remain (Noss and Peters 1995). Coastal grasslands continue to be threatened by various factors (Norton et al. 2007; Wade Belew, CNGA Presentation 2011), including:
- Altered watershed hydrology
- Limited knowledge and under-appreciation

Coastal prairie remnant, Santa Rosa, CA. Photo: K. Kraft 2009.

Coastal grasslands continue to be desirable areas for development and agriculture (CNPS 2001). Over 24% of the coastal prairie from San Francisco north along the western slopes of California's north coast has been urbanized, more than any other major plant community in the United States, and this is thought to be a gross underestimate (Ford and Hayes 2007; Loveland and Hutcheson 1995). Los Angeles is a case study in coastal prairie loss. Before the extensive urban development that now characterizes the Los Angeles area, 95 square kilometers (37 square miles) of coastal prairie was rich in wildflowers and dotted with vernal pools (Mattoni and Longcore 1997). The last significant remnant of Los Angeles coastal prairie, located near Los Angeles International Airport, was destroyed in the late 1960s.

Map of former coastal prairie habitat in Los Angeles. LAX = Los Angeles International Airport. Illustration from: http://www.urbanwildlands.org

Flat, treeless plains with deep, fertile soils mean that grasslands are ideal landscapes for agriculture. Cultivation and mismanaged grazing destroy indigenous species and the natural landscape features necessary for their support. "California's annual grasslands, occupying extensive areas in the Central Valley and along the Pacific Coast, form a vegetation type that is unique in North America: a distinct and extensive community type consisting largely of introduced species" (McNaughton 1968). Grasslands dominated by non-native annuals are found throughout California. However, most of these grasslands still harbor a variety of native species, depending on their use history. The proportion of non-native plants can range from 50 to 90% or more (Biswell 1956). Altered fire regimes, cultivation, and the long-term uncontrolled grazing that accompanied the introduction of invasive annual grasses are the major contributors to the conversion of native perennial grasslands to exotic annual-dominated grasslands and to their continued persistence (Norton et al. 2007). Some exotic annual species were purposely seeded as forage plants, while others were accidentally introduced. Because a wide range of wildlife species now depend on introduced annuals for food and shelter, the California Department of Fish & Game recommends that many of the annual plants introduced into grasslands be considered naturalized plant species, rather than undesirable invaders, and managed as such (Kie 2005).
Development of coastal prairie at Portuguese Beach on the Sonoma Coast. Google Earth (TM).

Grasslands are dynamic, disturbance-dependent ecosystems that are maintained by a variety of disturbance agents such as fire and grazing. At the most basic level, fire and animals maintain grasslands by destroying tree saplings and shrubs that would eventually colonize the area and crowd out grassland plants. Common woody invaders include coyote bush (Baccharis pilularis) and Douglas fir (Pseudotsuga menziesii). While invasion by native species is a natural process, the patchiness of this diminished habitat has made these processes a threat. In some cases, managers must decide between managing for either grassland or the shrubby coastal scrub habitat, because both communities, although once widespread, are now considered rare and endangered habitats (Ford and Hayes 2007).

Fire Suppression- Fire suppression, which began in earnest in the 1930s, has increased the length of the fire return interval in grasslands. Greenlee and Langenheim (1990) estimate that the mean fire return interval in prairies in recent times is 20-30 years, up from 1-15 years during the lightning, aboriginal, and Spanish periods.

Dramatic effects of both under- and overgrazing (from Edwards 1992).

Removal of Domestic Grazers- California grasslands evolved with and once supported a wide variety of native animal grazers. In the 1700s, Franciscan missionaries brought cattle and sheep with them, and for the next 200 years California's grasslands were increasingly degraded by poor grazing management practices. For many decades, grassland conservation efforts focused on retiring cattle and sheep operations from the land. Biswell (1956) discussed conservation areas under "grazing protection" as early as the 1930s. It was thought that grazing removal would lead to an increase in perennial grasses. Instead, non-native annuals such as ripgut brome (Bromus diandrus) took over. In coastal areas some of the ripgut brome was eventually replaced by creeping wild-rye (Elymus triticoides), a native rhizomatous grass, along with tall-growing non-native weeds such as black mustard (Brassica nigra) and poison hemlock (Conium maculatum) (Biswell 1956). The timing and intensity of domestic grazing can have varying effects on native plants and animals. Mismanaged grazing reduces grassland productivity and biodiversity while increasing erosion and the number of unwanted species. Well-managed grazing operations can maintain quality grassland habitat for native plants and animals while preventing erosion and the invasion of woody shrubs. From 57% to 80% of California's grasslands are privately owned and managed by ranchers (California Partners in Flight 2000). Ranchers have joined with environmentalists and resource professionals in groups like the California Rangeland Conservation Coalition (http://www.carangeland.org/) to work towards their shared goal of healthy, intact grassland ecosystems.

Litter Accumulation- In the absence of disturbances such as fire and grazing, a thick layer of plant litter accumulates in grasslands. The effect of litter has been shown to be the primary mechanism controlling species diversity in fescue grasslands in Canada (Lamb 2008).
Litter accumulation can reduce species diversity through shading, physically interfering with plant growth, altering germination cues, providing shelter for invertebrate herbivores and seed predators, and encouraging pathogens (Facelli and Pickett 1991; Lamb 2008; Xiong and Nilsson 1999). Litter accumulation favors introduced annual grasses, such as soft chess (Bromus hordeaceus), over annual wildflowers, which germinate on exposed soil and do not germinate when covered by layers of litter (Howard 1998). Biswell (1956) noted a common pattern when grazing is removed from California annual grasslands that were formerly heavily grazed: the grasslands changed from being dominated by forbs, to soft chess (Bromus hordeaceus), to slender oat (Avena barbata), and finally to ripgut brome (Bromus diandrus), a change largely due to litter accumulation (Biswell 1956). Accumulated forage and litter favor many less desirable species, such as ripgut brome (Bromus diandrus), which tolerates shade and has seeds with long barbs (awns) that are well adapted to working their way down through the litter barrier to the mineral soil (Biswell 1956).

Nitrogen inputs from human activities into ecosystems in the United States doubled between 1961 and 1997, primarily from inorganic nitrogen fertilizers and nitrogen oxide emissions from fossil fuels (Fenn et al. 2003). Grasslands accumulate and store nitrogen applied in inorganic fertilizers and deposited from air pollution (primarily the nitrogen oxides resulting from the burning of fossil fuels). The influx of large amounts of nitrogen into grasslands can have deleterious consequences for biodiversity. Nitrogen deposition from auto exhaust has been linked to the increase of Italian ryegrass (Festuca perennis, formerly Lolium perenne) in serpentine grasslands (grasslands with harsh, nutrient-poor soils that often act as refuges for native plants) in the San Francisco Bay area (Harrison and Viers 2007:154). Nitrogen pollution can reduce the diversity of herbaceous species and can negatively affect microbial activity, decreasing the rate of decomposition of organic matter (http://www.nerc.ac.uk/publications/other/documents/gane_pollutiongrasslands.pdf). Automobile pollution from Highway 101 has been cited as a threat to the endangered Checkerspot butterfly (Eilerin 2006).
This realm of science was brand new in the Victorian era! We learn about dinosaurs in school, and then years later we learn that a species we read about and saw in museums was portrayed with claws and fangs it didn't have at all. Such is the way of dinosaurs: we have so little to go on, yet we think we know so much about them. Up until the Victorian era, the term dinosaur didn't even exist, and the general public lived in ignorant bliss of the huge reptiles that once roamed the planet. In the early 1840s the word had yet to be coined, but that would all change with the work of Sir Richard Owen in London. Sir Richard Owen at one point taught natural history to Queen Victoria's children, and he was respected in his field, though he did argue with a number of his fellow scientists, including Charles Darwin. Owen formed his theory of what dinosaurs were after reading the work of other esteemed scientists like Gideon Mantell. Owen named this new group of animals Dinosauria, meaning "terrible lizards" in Greek. He first published his findings in a paper in 1842 that set the scientific community on its head. The sensationalist tone of his classifications would soon be followed by even more sensational public displays. Owen sought to give life to the fossils that had been found and enlisted the help of scientific artists to help the world visualize what the animals might have looked like. Overnight the Victorians went from a young world to one in which monsters had roamed the earth long before humanity had recorded history. Owen did not believe in evolution, though he was incredibly dedicated to natural history. When Owen commissioned the first dinosaur model from artist Benjamin Waterhouse Hawkins, the top was kept open and a dining table set up inside, and in true Victorian style a grand New Year's Eve feast was served to Owen and his colleagues inside this new dinosaur creature. Just imagine feasting inside the giant model of an animal that was previously unknown to humanity! The models, of course, were inaccurate because there were still few extensive studies of dinosaurs at the time. But this was one of the first times that models of extinct animals had, in a sense, brought them to life for the world to see. The dinosaur models commissioned for the Crystal Palace in the 1850s, while not totally scientifically correct, were the first examples regular people could see of what these mysterious dinosaurs might have looked like. Considered to be of historical importance, the dinosaur models are still on display and have recently received a restoration. Owen's plans to build a museum in Central Park were literally smashed along with his models: the infamous corruption and scandal of William "Boss" Tweed put a halt to the museum, and the models were destroyed. Owen went back to England to campaign for a separate museum of natural history in London, which was successful and opened in 1881, with Owen serving as its first director and making it possible for future generations to learn about all manner of creatures, including his famous dinosaurs. His name lives on as one of the pioneers of modern science and public learning.
In chemistry, bases are substances that, in aqueous solution, release hydroxide (OH−) ions, are slippery to the touch, can taste bitter if an alkali, change the color of indicators (e.g., turn red litmus paper blue), react with acids to form salts, promote certain chemical reactions (base catalysis), accept protons from any proton donor, or contain completely or partially displaceable OH− ions. Examples of bases are the hydroxides of the alkali metals and the alkaline earth metals (NaOH, Ca(OH)2, etc.; see alkali hydroxide and alkaline earth hydroxide). In water, by altering the autoionization equilibrium, bases yield solutions in which the hydrogen ion activity is lower than it is in pure water, i.e., the water has a pH higher than 7.0 at standard conditions. A soluble base is called an alkali if it contains and releases OH− ions quantitatively. However, it is important to realize that basicity is not the same as alkalinity. Metal oxides, hydroxides, and especially alkoxides are basic, and conjugate bases of weak acids are weak bases.

Bases can be thought of as the chemical opposite of acids, although some strong acids are able to act as bases. Bases and acids are seen as opposites because the effect of an acid is to increase the hydronium (H3O+) concentration in water, whereas bases reduce this concentration. A reaction between an acid and a base is called neutralization. In a neutralization reaction, an aqueous solution of a base reacts with an aqueous solution of an acid to produce a solution of water and salt in which the salt separates into its component ions. If the aqueous solution is saturated with a given salt solute, any additional such salt precipitates out of the solution.

For a substance to be classified as an Arrhenius base, it must produce hydroxide ions in an aqueous solution. Arrhenius believed that in order to do so, the base must contain hydroxide in its formula. This makes the Arrhenius model limited, as it cannot explain the basic properties of aqueous solutions of ammonia (NH3) or its organic derivatives (amines). There are also bases that do not contain a hydroxide ion but nevertheless react with water, resulting in an increase in the concentration of the hydroxide ion. An example of this is the reaction between ammonia and water to produce ammonium and hydroxide. In this reaction ammonia is the base because it accepts a proton from the water molecule. Ammonia and other bases similar to it usually have the ability to form a bond with a proton due to the unshared pair of electrons that they possess. In the more general Brønsted–Lowry acid–base theory, a base is a substance that can accept hydrogen cations (H+), otherwise known as protons. In the Lewis model, a base is an electron pair donor.

General properties of bases include:
- Concentrated or strong bases are caustic on organic matter and react violently with acidic substances.
- Aqueous solutions or molten bases dissociate into ions and conduct electricity.
- Reactions with indicators: bases turn red litmus paper blue, turn phenolphthalein pink, keep bromothymol blue in its natural colour of blue, and turn methyl orange yellow.
- The pH of a basic solution at standard conditions is greater than seven.
- Bases are bitter.
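To make the pH statement above concrete, the short Python sketch below (an illustration, not part of the original article) computes the pH of a dilute strong-base solution from its hydroxide concentration, using the water autoionization constant Kw = 1.0e-14 at 25 °C.

```python
import math

KW = 1.0e-14  # water autoionization constant at 25 degrees Celsius

def ph_of_strong_base(concentration_mol_per_l):
    # A strong base such as NaOH dissociates essentially completely, so [OH-]
    # equals the nominal concentration (ignoring water's own tiny contribution).
    oh = concentration_mol_per_l
    h = KW / oh                 # autoionization fixes [H+][OH-] = Kw
    return -math.log10(h)

if __name__ == "__main__":
    print(ph_of_strong_base(0.01))   # 0.01 M NaOH -> pH = 12.0
    print(ph_of_strong_base(1e-4))   # 1e-4 M NaOH -> pH = 10.0
```

Both results come out above 7, consistent with the statement that bases lower the hydrogen ion activity relative to pure water.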
Reactions between bases and water

The following reaction represents the general reaction between a base (B) and water to produce a conjugate acid (BH+) and a conjugate base (OH−):

B(aq) + H2O(l) ⇌ BH+(aq) + OH−(aq)

The equilibrium constant, Kb, for this reaction can be found using the following general equation:

Kb = [BH+][OH−] / [B]

In this equation, the base (B) and the extremely strong base (the conjugate base OH−) compete for the proton. As a result, bases that react with water have relatively small equilibrium constant values. The base is weaker when it has a lower equilibrium constant value.

Neutralization of acids

In water, sodium hydroxide dissociates into sodium and hydroxide ions:

NaOH → Na+ + OH−

and similarly, in water the acid hydrogen chloride forms hydronium and chloride ions:

HCl + H2O → H3O+ + Cl−

When the two solutions are mixed, the H3O+ and OH− ions combine to form water molecules:

H3O+ + OH− → 2 H2O

If equal quantities of NaOH and HCl are dissolved, the base and the acid neutralize exactly, leaving only NaCl, effectively table salt, in solution.

Weak bases, such as baking soda or egg white, should be used to neutralize any acid spills. Neutralizing acid spills with strong bases, such as sodium hydroxide or potassium hydroxide, can cause a violent exothermic reaction, and the base itself can cause just as much damage as the original acid spill.

Alkalinity of non-hydroxides

Bases are generally compounds that can neutralize an amount of acid. Both sodium carbonate and ammonia are bases, although neither of these substances contains OH− groups. Both compounds accept H+ when dissolved in protic solvents such as water:

Na2CO3 + H2O → 2 Na+ + HCO3− + OH−
NH3 + H2O → NH4+ + OH−

From this, a pH, or acidity, can be calculated for aqueous solutions of bases. Bases also act directly as electron-pair donors themselves:

CO32− + H+ → HCO3−
NH3 + H+ → NH4+

A base is also defined as a molecule that has the ability to accept an electron-pair bond by entering another atom's valence shell through its possession of one electron pair. There are a limited number of elements that have atoms with the ability to provide a molecule with basic properties. Carbon can act as a base, as can nitrogen and oxygen. Fluorine and sometimes the rare gases possess this ability as well. This occurs typically in compounds such as butyl lithium, alkoxides, and metal amides such as sodium amide. Bases of carbon, nitrogen and oxygen without resonance stabilization are usually very strong, or superbases, which cannot exist in a water solution due to the acidity of water. Resonance stabilization, however, enables weaker bases such as carboxylates; for example, sodium acetate is a weak base.

A strong base is a basic chemical compound that can remove a proton (H+) from (or deprotonate) a molecule of even a very weak acid (such as water) in an acid-base reaction. Common examples of strong bases include the hydroxides of alkali metals and alkaline earth metals, like NaOH and Ca(OH)2, respectively. Due to their low solubility, some bases, such as alkaline earth hydroxides, can be used when the solubility factor is not taken into account. One advantage of this low solubility is that "many antacids were suspensions of metal hydroxides such as aluminium hydroxide and magnesium hydroxide." These compounds have low solubility and the ability to stop an increase in the concentration of the hydroxide ion, preventing harm to the tissues in the mouth, oesophagus, and stomach. As the reaction continues and the salts dissolve, the stomach acid reacts with the hydroxide produced by the suspensions.
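The Kb expression above can be turned into a small worked example. The Python sketch below is illustrative only; the Kb value for ammonia of about 1.8e-5 at 25 °C is the commonly quoted textbook figure. It solves the equilibrium B + H2O ⇌ BH+ + OH− for the hydroxide concentration and reports the resulting pH.

```python
import math

KW = 1.0e-14  # water autoionization constant at 25 degrees Celsius

def weak_base_ph(c0, kb):
    """pH of a weak base with initial concentration c0 (mol/L) and constant Kb.
    Solves Kb = x**2 / (c0 - x) for x = [OH-], neglecting water's own OH-."""
    # Rearranged quadratic: x**2 + Kb*x - Kb*c0 = 0; take the positive root.
    x = (-kb + math.sqrt(kb * kb + 4.0 * kb * c0)) / 2.0
    h = KW / x
    return -math.log10(h)

if __name__ == "__main__":
    # 0.10 M ammonia with the textbook Kb of ~1.8e-5 gives a pH of about 11.1,
    # well below the ~13 that an equally concentrated strong base would give.
    print(round(weak_base_ph(0.10, 1.8e-5), 2))
```

The small Kb value is exactly what the text means by bases that react with water having relatively small equilibrium constants: only a small fraction of the base is converted to its conjugate acid.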
Strong bases hydrolyze in water almost completely, resulting in the leveling effect. In this process, the water molecule combines with a strong base, due to water's amphoteric ability, and a hydroxide ion is released. Very strong bases can even deprotonate very weakly acidic C–H groups in the absence of water. Common strong bases include the hydroxides of the alkali metals and of the heavier alkaline earth metals, such as NaOH, KOH, and Ca(OH)2; the cations of these strong bases appear in the first and second groups of the periodic table (the alkali and alkaline earth metals). Tetraalkylated ammonium hydroxides are also strong bases, since they dissociate completely in water. Guanidine is a special case of a species that is exceptionally stable when protonated, analogously to the reason that makes perchloric acid and sulfuric acid very strong acids.

Acids with a pKa of more than about 13 are considered very weak, and their conjugate bases are strong bases. Group 1 salts of carbanions, amides, and hydrides tend to be even stronger bases due to the extreme weakness of their conjugate acids, which are stable hydrocarbons, amines, and dihydrogen. Usually, these bases are created by adding pure alkali metals such as sodium into the conjugate acid. They are called superbases, and it is impossible to keep them in water solution because they are stronger bases than the hydroxide ion; as such, they deprotonate water, their conjugate acid. For example, the ethoxide ion (the conjugate base of ethanol) in the presence of water undergoes this reaction:

CH3CH2O− + H2O → CH3CH2OH + OH−

Examples of common superbases are:
- Butyl lithium (n-C4H9Li)
- Lithium diisopropylamide (LDA), [(CH3)2CH]2NLi
- Lithium diethylamide (LDEA), (C2H5)2NLi
- Sodium amide (NaNH2)
- Sodium hydride (NaH)
- Lithium bis(trimethylsilyl)amide, [(CH3)3Si]2NLi

The strongest superbases have only been synthesized in the gas phase:
- Ortho-diethynylbenzene dianion (C6H4(C2)2)2− (the strongest superbase ever synthesized)
- Meta-diethynylbenzene dianion (the second strongest)
- Para-diethynylbenzene dianion (the third strongest)
- The lithium monoxide anion (LiO−) was considered the strongest superbase before the diethynylbenzene dianions were prepared.

When a neutral base forms a bond with a neutral acid, a condition of electric stress occurs. The acid and the base share the electron pair that formerly belonged only to the base. As a result, a high dipole moment is created, which can only be destroyed by rearranging the molecules.

Examples of solid bases include:
- Oxide mixtures: SiO2, Al2O3; MgO, SiO2; CaO, SiO2
- Mounted bases: LiCO3 on silica; NR3, NH3, KNH2 on alumina; NaOH, KOH mounted on silica or alumina
- Inorganic chemicals: BaO, KNaCO3, BeO, MgO, CaO, KCN
- Anion exchange resins
- Charcoal that has been treated at 900 degrees Celsius or activated with N2O, NH3, or ZnCl2-NH4Cl-CO2

The basic strength of a solid surface is determined by its ability to form a conjugate base by adsorbing an electrically neutral acid. "The number of basic sites per unit surface area of the solid" is used to express how much base is found on a solid base catalyst. Scientists have developed two methods to measure the number of basic sites: titration with benzoic acid using indicators, and gaseous acid adsorption. A solid with enough basic strength will absorb an electrically neutral acid indicator and cause the acid indicator's color to change to the color of its conjugate base. When performing the gaseous acid adsorption method, nitric oxide is used. The basic sites are then determined from the amount of carbon dioxide that is adsorbed.
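The link between a weak acid's pKa and the strength of its conjugate base can be made explicit with the relation Kb = Kw/Ka. The short sketch below is an illustration of that arithmetic and is not from the original article; the pKa cutoff of about 13 mentioned above is used as one of the example inputs.

```python
KW = 1.0e-14  # water autoionization constant at 25 degrees Celsius

def conjugate_base_kb(pka):
    # For an acid HA with dissociation constant Ka = 10**(-pKa),
    # the conjugate base A- has Kb = Kw / Ka.
    ka = 10.0 ** (-pka)
    return KW / ka

if __name__ == "__main__":
    # An acid with pKa = 13 has a conjugate base with Kb = 0.1, already a fairly
    # strong base; still larger pKa values give conjugate bases with Kb > 1,
    # which cannot persist in water (they simply deprotonate it).
    for pka in (5, 13, 16):
        print(pka, conjugate_base_kb(pka))
```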
Bases as catalysts

Basic substances can be used as insoluble heterogeneous catalysts for chemical reactions. Some examples are metal oxides such as magnesium oxide, calcium oxide, and barium oxide, as well as potassium fluoride on alumina and some zeolites. Many transition metals, many of which form basic substances, make good catalysts. Basic catalysts have been used for hydrogenations, the migration of double bonds, the Meerwein-Ponndorf-Verley reduction, the Michael reaction, and many other reactions. Both CaO and BaO can be highly active catalysts if they are treated with high-temperature heat.

Uses of bases

- Sodium hydroxide is used in the manufacture of soap, paper, and the synthetic fiber rayon.
- Calcium hydroxide (slaked lime) is used in the manufacture of bleaching powder.
- Calcium hydroxide is also used to scrub sulfur dioxide from the exhaust gases of power plants and factories.
- Magnesium hydroxide is used as an antacid to neutralize excess acid in the stomach and treat indigestion.
- Sodium carbonate is used as washing soda and for softening hard water.
- Sodium bicarbonate (sodium hydrogen carbonate) is used as baking soda in cooking, in baking powders, as an antacid to treat indigestion, and in soda-acid fire extinguishers.
- Ammonium hydroxide is used to remove grease stains from clothes.

Acidity of bases

The number of ionizable hydroxide (OH−) ions present in one molecule of a base is called the acidity of the base. On the basis of acidity, bases can be classified into three types: monoacidic, diacidic, and triacidic.

Etymology of the term

The concept of base stems from an older alchemical notion of "the matrix":

The term "base" appears to have been first used in 1717 by the French chemist Louis Lémery, as a synonym for the older Paracelsian term "matrix." In keeping with 16th-century animism, Paracelsus had postulated that naturally occurring salts grew within the earth as a result of a universal acid or seminal principle having impregnated an earthy matrix or womb. ... Its modern meaning and general introduction into the chemical vocabulary, however, is usually attributed to the French chemist Guillaume-François Rouelle. ... In 1754 Rouelle explicitly defined a neutral salt as the product formed by the union of an acid with any substance, be it a water-soluble alkali, a volatile alkali, an absorbent earth, a metal, or an oil, capable of serving as "a base" for the salt "by giving it a concrete or solid form." Most acids known in the 18th century were volatile liquids or "spirits" capable of distillation, whereas salts, by their very nature, were crystalline solids. Hence it was the substance that neutralized the acid which supposedly destroyed the volatility or spirit of the acid and which imparted the property of solidity (i.e., gave a concrete base) to the resulting salt. (William Jensen, The origin of the term "base")

References
- Johll, Matthew E. (2009). Investigating Chemistry: A Forensic Science Perspective (2nd ed.). New York: W. H. Freeman and Co. ISBN 1429209895. OCLC 392223218.
- Lewis, Gilbert N. (1938). "Acids and Bases" (PDF). Journal of the Franklin Institute. pp. 293–313. Retrieved 19 February 2015.
- Whitten et al. (2009), p. 363.
- Zumdahl & DeCoste (2013), p. 257.
- Whitten et al. (2009), p. 349.
- "Definition of BASE". www.merriam-webster.com. Archived from the original on 21 March 2018. Retrieved 3 May 2018.
- Zumdahl & DeCoste (2013), p. 258.
- Zumdahl & DeCoste (2013), p. 255. - Zumdahl & DeCoste (2013), p. 256. - Tanabe, Kozo (1970). Solid Acids and Bases: their catalytic properties. Academic Press. p. 2. Retrieved 19 February 2015. - Tanabe, K.; Misono, M.; Ono, Y.; Hattori, H. (1990). New Solid Acids and Bases: their catalytic properties. Elsevier. p. 14. Retrieved 19 February 2015. - "Electrophile - Nucleophile - Basicity - Acidity - pH Scale". City Collegiate. Archived from the original on 30 June 2010. Retrieved 20 June 2016. - "What is TRIACIDIC? definition of TRIACIDIC (Science Dictionary)". Science Dictionary. 14 September 2013. Retrieved 14 March 2019. - "Introduction to Bases: Classification, Examples with Questions & Videos". Toppr-guides. 2 February 2018. Retrieved 14 March 2019. - Jensen, William B. (2006). "The origin of the term 'base'" (PDF). The Journal of Chemical Education. 83 (8): 1130. Bibcode:2006JChEd..83.1130J. doi:10.1021/ed083p1130. Archived from the original (PDF) on 4 March 2016.
Music From Italy

Music is the creative art of arranging different sounds in time into a meaningful composition through the components of rhythm, melody, balance, and timbre, in accordance with what the composer wants to achieve. It is probably one of the most universal artistic aspects of all mankind's cultures. Music has evolved over the course of history from simple drum beats and animal grunts to sophisticated symphonies in Western and Eastern musical styles. Because music involves the human mind and can express thoughts and emotions, it is an important component of culture that has played a crucial role in the shaping of society. Among the most well-known forms of music are classical music, which predates the birth of most other forms of Western music by thousands of years; Romanticism, which depicts the romantic side of European culture in the works of key composers; and Post-Impressionism, the aesthetic school of art associated with changes in painting style brought about by the new French movement. Baroque is a term referring to the oldest known musical instrument, the lute, which was used in early Italian music. It had very simple mechanisms and required a wooden body and a string. The earliest known version of the baroque lute was documented in 1507 in Venice. The term "Baroque" is sometimes used to describe any type of early Italian musical instrument, though, and is sometimes used today to refer loosely to any type of Western music featuring complex rhythmic patterns. Some types of early Italian music, like the soprano voice or the tenor flute, are the ancestors of modern-day blues music. Other early Italian composers of similar genres include Horacek, Britten, Bussorelle, and Mantovino. Early Jewish composers include Munkatha, Vilbo, ben Hurrica, and Zemli. All of these artists made contributions to the world of music, yet their locations, genres, and musical influences remain hazy.
How did the civil rights movement affect the world?

The civil rights movement was an empowering yet precarious time for Black Americans. The efforts of civil rights activists and countless protesters of all races brought about legislation to end segregation, Black voter suppression, and discriminatory employment and housing practices.

What is Title I of the Civil Rights Act?

Title I calls for any qualifications for voter registration to be applied equally to all, prohibits a voter from being rejected for non-material errors on an application, and outlines specific requirements for literacy tests. This newspaper article from 1901 summarizes the history of voting rights laws up to that time.

What is the meaning of the Civil Rights Act?

The Civil Rights Act of 1964 is landmark federal legislation that prohibits discrimination on the basis of race, color, religion, sex, and national origin. Signed into law by President Lyndon B. Johnson, the Civil Rights Act of 1964 granted equal access to employment, schools, and public spaces.

How did the civil rights movement affect democracy?

To conclude, the civil rights movement reinforced American democracy because people gained freedom and equality, the power to make change happen, and the ability to participate politically in elections. The actions of some Americans who tried to preserve white supremacy and keep African Americans racially inferior undermined democracy.

How many titles are there in the Civil Rights Act?

How did the Civil Rights Act of 1964 affect schools?

Title IV of the Civil Rights Act of 1964 prohibits discrimination in public schools because of race, color, religion, sex, or national origin. Public schools include elementary schools, secondary schools, and public colleges and universities.

Who did Martin Luther King inspire?

What did the government do during the civil rights movement?

The movement helped spawn a national crisis that forced intervention by the federal government to overturn segregation laws in southern states, restore voting rights for African-Americans, and end legal discrimination in housing, education, and employment.

Who opposed the 1964 Civil Rights Act?

Democrats and Republicans from the Southern states opposed the bill and led an unsuccessful 83-day filibuster, including Senators Albert Gore, Sr. (D-TN) and J. William Fulbright (D-AR), as well as Senator Robert Byrd (D-WV), who personally filibustered for 14 hours straight.

What does Title IX of the Civil Rights Act do?

Title IX is a federal civil rights law in the United States of America that was passed as part of the Education Amendments of 1972. It prohibits sex-based discrimination in any school or other education program that receives federal money.

What caused the 1964 Civil Rights Act?

Forty-five years ago today, President Lyndon Johnson signed the Civil Rights Act of 1964 into law. The Supreme Court's decision in Brown v. Board of Education, which held that racially segregated public schools were unconstitutional, sparked the civil rights movement's push toward desegregation and equal rights.

What organizations did MLK found?

Martin Luther King's civil rights organization, the Southern Christian Leadership Conference, or SCLC, led efforts to help the poor and young African Americans in Memphis.

What civil rights organization did Martin Luther King Jr. help found?

The Southern Christian Leadership Conference (SCLC).

What was the Equal Opportunity Act of 1964?
This act, signed into law by President Lyndon Johnson on July 2, 1964, prohibited discrimination in public places, provided for the integration of schools and other public facilities, made employment discrimination illegal, and enforced the constitutional right to vote.

How did Martin Luther King help the world?

Martin Luther King Jr. led a civil rights movement that focused on nonviolent protest. Martin Luther King's vision of equality and civil disobedience changed the world for his children and the children of all oppressed people. He changed the lives of African Americans in his time and in subsequent decades.

Does segregation in schools still exist?

The earlier "separate but equal" doctrine was overturned in 1954, when the Supreme Court ruling in Brown v. Board of Education ended de jure segregation in the United States. In response to pressures to desegregate in the public school system, some white communities started private segregated schools, but rulings in Green v.