Microsoft Word 2019/2016 Part 1: Foundations
This course is intended for students who want to learn basic Word skills, such as creating, editing, and formatting documents; inserting simple tables and creating lists; and employing a variety of techniques for improving the appearance and accuracy of document content.
At Course Completion
In this course, you will learn fundamental Microsoft® Word skills.
- Navigate and perform common tasks in Word, such as opening, viewing, editing, saving, and printing documents, and configuring the application.
- Format text and paragraphs.
- Perform repetitive operations efficiently using tools such as Find and Replace, Format Painter, and Styles.
- Enhance lists by sorting, renumbering, and customizing list styles.
- Create and format tables.
- Insert graphic objects into a document, including symbols, special characters, illustrations, pictures, and clip art.
- Format the overall appearance of a page through page borders and colors, watermarks, headers and footers, and page layout.
- Use Word features to help identify and correct problems with spelling, grammar, readability, and accessibility.
To ensure your success in this course, you should have end-user skills with any current version of Windows®, including being able to start programs, switch between programs, locate saved files, close programs, and access websites using a web browser.
Lesson 1: Getting Started with Word
- Navigate in Microsoft Word
- Create and Save Word Documents
- Manage Your Workspace
- Edit Documents
- Preview and Print Documents
- Customize the Word Environment
Lesson 2: Formatting Text and Paragraphs
- Apply Character Formatting
- Control Paragraph Layout
- Align Text Using Tabs
- Display Text in Bulleted or Numbered Lists
- Apply Borders and Shading
Lesson 3: Working More Efficiently
- Make Repetitive Edits
- Apply Repetitive Formatting
- Use Styles to Streamline Repetitive Formatting Tasks
Lesson 4: Managing Lists
- Sort a List
- Format a List
Lesson 5: Adding Tables
- Insert a Table
- Modify a Table
- Format a Table
- Convert Text to a Table
Lesson 6: Inserting Graphic Objects
- Insert Symbols and Special Characters
- Add Images to a Document
Lesson 7: Controlling Page Appearance
- Apply a Page Border and Color
- Add Headers and Footers
- Control Page Layout
- Add a Watermark
Lesson 8: Preparing to Publish a Document
- Check Spelling, Grammar, and Readability
- Use Research Tools
- Check Accessibility
- Save a Document to Other Formats
| Course Dates | Course Times (EST) | Delivery Mode | GTR |
|---|---|---|---|
| 1/29/2021 - 1/29/2021 | 8:45 AM - 4:00 PM | Virtual | Enroll |
Clostridium difficile is a spore-forming, Gram-positive anaerobic bacillus that produces two toxins that damage intestinal cells and cause inflammation in the gut, resulting in mild to severe diarrhea that can lead to severe dehydration and hospitalization. More than 500,000 people are infected each year, resulting in approximately 15,000 deaths.
While C. diff is very contagious, the bacteria do not spread through the air; instead, infection occurs when a person touches a contaminated surface or ingests spores. C. difficile spores are difficult to eliminate and are not destroyed by alcohol-based hand sanitizers.
Preventing C. diff Infections
The following precautions can help keep you -- and others -- safe:
- Wash your hands with soap and water frequently. Do not rely on alcohol-based hand sanitizers, as they do not destroy C. diff spores.
- Regularly clean and disinfect surfaces in bathrooms and kitchens with chlorine bleach-based products (make sure the product has a C. diff claim).
- Wash soiled clothing with detergent and chlorine bleach.
- Disinfect and clean any areas affected by C. diff.
A. P. Tureaud
A.P. Tureaud was a key legal activist in an era of vigorous challenges to Jim Crow in twentieth-century Louisiana.
Alexander Pierre (“A.P.”) Tureaud was a key legal activist in an era of vigorous challenges to Jim Crow in twentieth-century Louisiana. From the beginning of his legal career in New Orleans in the 1920s until his death in 1972, Tureaud directed the most substantive assaults on racial segregation in Louisiana’s history. Though largely unsung, Tureaud was a student of Charles Hamilton Houston at Howard University and an associate of Thurgood Marshall. His legal victories played an important role in the modern civil rights movement nationwide.
A descendant of free people of color, Tureaud was born February 26, 1899, in New Orleans. He attended elementary and high school in New Orleans, then moved to Chicago, Illinois, and Annapolis, Maryland, in the late 1910s. In 1921, he entered Howard University Law School in Washington, D.C. In 1922, Tureaud attended a lecture by James Weldon Johnson, then the executive secretary of the National Association for the Advancement of Colored People (NAACP). Tureaud immediately joined the organization. He graduated from Howard Law School in 1925, but due to the whites-only admissions policies in Louisiana’s law schools, Tureaud was one of fewer than twenty practicing African American attorneys in Louisiana until the 1950s.
Racial discrimination in Louisiana was most acute in its educational system. Pronounced racial bias in teacher pay and woefully inadequate schools for black children reinforced the deeply rooted black poverty in the state. For instance, historian Adam Fairclough finds that between 1935 and 1945 the median expense of white children’s education was $56 per year per child, compared to $14 per year per black child. Similarly, a study in the 1940s funded by the NAACP found that in Louisiana African American teachers were paid 64 percent less than white teachers. The academic year in African American schools also averaged thirty-seven days shorter than that of white schools. Until the 1950s, instruction in African American schools was confined to the elementary level, and most black children had few opportunities to attend school after age fourteen. Tureaud exposed these most glaring inequities in Louisiana’s segregated school system and targeted them for an eventual resolution by the United States Supreme Court.
In the 1930s, Tureaud joined the NAACP Legal Defense Fund, Inc., and filed many lawsuits to force Louisiana to enforce the separate-but-equal doctrine established in Plessy v. Ferguson (1896). The strategy was to compel the state first to fund African American schools equally and—when this became too expensive—then persuade the state to do away with segregation altogether. With the assistance of Thurgood Marshall, the lead attorney in the NAACP Legal Defense Fund, Tureaud earned an early victory in Joseph P. McKelpin v. Orleans Parish School Board (1940). Serving as special counsel, Tureaud argued for salary equity for African American and white teachers. He launched a series of lawsuits in the 1940s and 1950s, notably Willie Robinson v. LSU Board of Supervisors and Bush v. Orleans Parish School Board, that resulted in the desegregation of Louisiana State University and the Orleans Parish School District.
He continued his work toward the desegregation of public education facilities and the equality of political rights for African Americans. In the 1950s, Tureaud was a founding member of the Louis Martinet Legal Society, a still-extant legal organization that addresses racial discrimination and civil rights violations in the state. In the 1960s, Tureaud mentored several young minds and encouraged them to continue the struggle to end segregation. In 1977, one of Tureaud’s law partners, Ernest “Dutch” Morial, became the first black mayor of New Orleans. Tureaud died of cancer on January 22, 1972. A major thoroughfare, a memorial park, and an elementary school have been named in his honor in New Orleans.
Featured image: The south pole of Mars as seen by the HRSC Camera onboard the European Space Agency’s Mars Express mission. Image credit: ESA/DLR/FU Berlin.
Authors: Sebastian Emanuel Lauro, Elena Pettinelli, Graziella Caprarelli, Luca Guallini, Angelo Pio Rossi, Elisabetta Mattei, Barbara Cosciotti, Andrea Cicchetti, Francesco Soldovieri, Marco Cartacci, Federico Di Paolo, Raffaella Noschese and Roberto Orosei.
“Water, water everywhere, but not a drop to drink”—or at least that might be the case beneath the south pole of Mars. In 2018, a team of scientists reported a potential subsurface lake of liquid water 1.5 km beneath the Martian south polar cap. Now, using more observations as well as new analysis methods previously applied to ice sheets on Earth, the same team presents new evidence for a large subsurface lake as well as three other lakes in the same area. This raises further questions about how such lakes could be kept liquid in the cold environment of Mars, and whether they could provide a habitable environment for microbial life.
The new paper, led by Sebastian Emanuel Lauro and recently published in Nature Astronomy, explored a 250 × 300 km area known as Ultimi Scopuli—the same area where a potential lake was reported in 2018. The data come from the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS) instrument onboard the European Space Agency’s Mars Express orbiter, which uses bursts of radio waves to probe the structure of rocks and ice beneath the surface of Mars. The dataset in the new paper consists of 134 observations taken from 2010 to 2019, compared to the 29 observations used in the 2018 paper. The new data both strengthen the claim of a large subglacial lake roughly 20 × 30 km in size and suggest the presence of three additional smaller underground lakes nearby, each separated by dry ice or rock.
In planetary science, it’s common for researchers to compare what we find elsewhere in the solar system to what we see on Earth, as this helps ground the analysis in something we can measure more directly than extraterrestrial locations. So not only does this new research use more observations, but the team also makes use of a new analysis technique previously used to detect subglacial lakes on Earth. This radar detection method, developed to look for lakes beneath the Greenland ice sheet, uses the ‘acuity’ of the radar signal, a specific measure of surface roughness. When a high acuity value (indicating a smooth surface) is seen alongside high reflectivity (the strength of the returned radar pulses), it provides a more robust indication of ponded water. This is what Lauro et al. have seen in their investigations of the Mars south polar cap.
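The combined criterion can be illustrated with a minimal sketch. To be clear, this is not the actual MARSIS processing pipeline: the function name, threshold values, and normalized inputs below are all hypothetical, chosen only to show how requiring both high acuity and high reflectivity narrows the candidates.

```python
# Illustrative sketch only -- not the MARSIS pipeline. Footprints are flagged
# as candidate ponded water when BOTH signal acuity (a smoothness proxy) and
# basal reflectivity are high. Thresholds and inputs are hypothetical,
# normalized to the 0-1 range for simplicity.

ACUITY_THRESHOLD = 0.8        # hypothetical: high acuity = smooth interface
REFLECTIVITY_THRESHOLD = 0.5  # hypothetical: strong basal echo

def classify_footprint(acuity: float, reflectivity: float) -> str:
    """Return a coarse label for one radar footprint."""
    if acuity >= ACUITY_THRESHOLD and reflectivity >= REFLECTIVITY_THRESHOLD:
        return "candidate ponded water"   # smooth AND bright
    if reflectivity >= REFLECTIVITY_THRESHOLD:
        return "bright but rough"         # e.g. possibly wet sediments
    return "dry"

# Three made-up (acuity, reflectivity) pairs:
footprints = [(0.9, 0.7), (0.3, 0.6), (0.9, 0.2)]
labels = [classify_footprint(a, r) for a, r in footprints]
print(labels)
```

The point of the two-part test is that reflectivity alone is ambiguous (wet sediments can also be bright), while a smooth, bright interface is more consistent with standing water.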
It’s still possible that what we’re seeing evidence of is something more like wet sediments, essentially subglacial slush, rather than true lakes of water. However, the Lauro et al. team looked carefully at the interfaces between wet and dry materials, and argue that the reflectivity values are similar to those used to detect subglacial lakes in Greenland, and rise above the threshold for indicating lakes. They also suggest that subglacial lakes could actually be very widespread beneath the south polar cap, although MARSIS is unable to detect them.
Other researchers outside the Lauro et al. research group have suggested that some kind of geothermal activity is needed to heat the water and therefore keep it liquid. Lauro et al., however, argue that geothermal activity isn’t necessary, because recent experiments have shown that certain briny (salty) solutions can stay liquid at temperatures as low as 150 kelvin (about -123 degrees Celsius). At the surface of Ultimi Scopuli, the temperature is thought to be roughly 160 kelvin (about -113 degrees Celsius), and it is expected to increase with depth. The team therefore argues that super salty water could stay liquid without the need for geothermal activity to keep it warm. We already know such salts exist in large quantities on Mars because they have been found by previous missions such as the Phoenix lander and the Curiosity rover, so it’s plausible that they could also be present in any water beneath the south polar cap.
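The temperature conversions quoted above follow from the exact kelvin-to-Celsius offset of 273.15, which a couple of lines of code can verify:

```python
def kelvin_to_celsius(t_k: float) -> float:
    """Convert a temperature in kelvin to degrees Celsius (offset 273.15)."""
    return t_k - 273.15

# Values quoted in the article:
print(round(kelvin_to_celsius(150)))  # brine freezing point: -123
print(round(kelvin_to_celsius(160)))  # Ultimi Scopuli surface: -113
```

Both stated Celsius values check out against the kelvin figures.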
Why is liquid water beneath the south polar cap on Mars so interesting, if we can’t even drink it? On Earth, subglacial lakes in Antarctica are home to interesting forms of microbial life. Organisms known as extremophiles, able to survive extreme conditions, are known to exist in extremely salty lakes. So, perhaps similar life forms could exist in these lakes on Mars.
Water, but not a drop to drink: multiple salty lakes beneath the south pole of Mars? by Eleni Ravanis is licensed under a Creative Commons Attribution 4.0 International License.
Foot Soldiers of the Civil Rights Movement
Growing up as a black woman in America, you learn very early on that we face a triple barrier: race, gender, and class. We also carry the burden of slavery, rape, lynching, and other atrocities, while trying to maintain family ties in an America that has historically depicted us as childlike, aggressive, hypersexual, and violent. The result of that construct, and the accompanying racist fears and forced subjugation it justifies, has been counterintuitive: black women in America are caring, loving, and talented human beings who live their lives constantly at risk from the greater world. As they worked hard to escape stereotypes in the fight for justice and equality, these women activists rose above adversity and opened doors for people of all races.
Although black women are the core of organized African American life, their influence within American culture has often gone unrecognized as they challenged systems of racism and discrimination. In that regard, you could say that all black women are activists. Ordinary women who got up every day, went to jobs they were overqualified for and underpaid to do, worked to keep their families intact, and invested in their children’s futures, often under the most trying circumstances, represent the many faceless and nameless women who did all they could to create a path for the next generation. If this is the case, then all the black women in this book can be defined as activists. Our mothers, grandmothers, aunts, sisters — all of our womanly ancestors — helped make America what it is today. They were the foot soldiers of the civil rights movement. Willingly, they answered the call.
Building on existing networks of kin and friendship embedded in local institutions such as clubs and churches, black women knew who to turn to and how to get things done. This kind of local knowledge from “everyday kind of people” is how social movements organize at a grassroots level. Recruiting friends and relatives through existing networks, meeting in safe spaces such as beauty parlors and at sorority meetings, and building support using door-to-door political canvassing, women quickly found themselves on the front lines of boycotts, voter registration drives, demonstrations, and even acts of civil disobedience that landed them in jail. Still, they persisted.
In time, historians began reshaping the story of civil rights by focusing not only on its national leaders but also on grassroots activists. Not surprisingly, there is a gender dimension to this reframing: most of the national leaders of the movement were men, but the energy and support at the grassroots level were often supplied by women leaders who laid the groundwork for the civil rights revolution but were hushed in historical texts.
When we think of the early known black women activists, we often turn to Sojourner Truth and Harriet Tubman. Truth and Tubman were distinctly different activists, but they shared common ground: both were born into slavery, and neither could read or write, yet each managed to turn this nation upside down with her individual story of trial and triumph. Truth worked to abolish slavery, promote equal rights for women, and eradicate the use of alcohol among men and women, while Tubman, often referred to as the “Black Moses” of the Underground Railroad, dedicated her life to creating safe passage for slaves escaping to freedom. But they did have major philosophical differences between them regarding slavery.
Truth was a patient and obedient slave. Once freed, she looked at the big picture and believed that slavery could end peacefully through moral persuasion, but that it would take time. She became a willing activist, giving antislavery speeches at abolition meetings and women’s rights conventions to build sentiment against the institution. Truth fought within the system, using hymns and songs to relax a hostile audience before she gave a speech, and when she spoke of the emotional suffering she endured being auctioned off from her family to another slaveowner, she moved audiences to tears. During the Civil War, when she met President Lincoln in 1864, Truth told him he was doing a good job.
In contrast, Tubman was disobedient and fought back when she was beaten by her slaveowner. She had no patience with slavery. As a freed woman, she undertook a personal crusade against slavery by helping family members escape and later, anyone willing to travel the Underground Railroad to freedom. Unlike Truth, Tubman fought outside the system using hymns and songs as a code to alert the slaves it was time to leave for the north. She also carried a revolver, which she pointed at the head of any tired runaway slave who wanted to turn back. During the Civil War, Tubman believed President Lincoln was dragging his feet about freeing the slaves, and she didn’t want to meet him.
According to Carlton Mabee’s book Sojourner Truth: Slave, Prophet, Legend, Truth and Tubman finally met in Boston in August of 1864. Mabee writes, “Truth tried to persuade Tubman that (Abraham) Lincoln was a real friend to blacks, but Tubman insisted he was not because he allowed black soldiers to be paid less than white soldiers.” [CITE]
Truth and Tubman’s different backgrounds and ideologies reflect the wide range of approaches that affected the civil rights movement in myriad ways. Their stories, and others like them, not only deepen our understanding of the movement as a whole but also deconstruct the false narratives that suggest that black women either played subservient roles in movements for liberation and resistance or were absent altogether.
The early years of the women’s rights movement date back to 1848, when for the first time small groups of women who had been working individually joined together at the Seneca Falls Convention in Seneca Falls, New York, where nearly 300 people were in attendance. There they laid out a list of rights that women did not enjoy at the time, such as the right to attend college, own property, or enter male-dominated professions such as medicine and law. But their worst offense was that they rendered nearly invisible the black women who labored in the suffragist vineyard and looked away from the racism that tightened its grip on the fight for the women’s vote in the years after the Civil War. Frederick Douglass was invited to speak. No black women were in attendance.
During the antebellum era, black women abolitionists like Truth, Mary Ann Shadd Cary, Sarah Redmond, Harriet Forten Purvis, and Margaretta Forten supported universal suffrage at a time when the power of the vote was exclusively for white men. Universal suffrage continued to be the goal after the Civil War. During the 1860s, divisions about strategy erupted over the Fourteenth and Fifteenth Amendments, which extended political power to black men but not to women. The suffrage struggle itself took on a similar flavor, acquiescing to white supremacy with the belief that white women would be degraded if black men preceded them into the franchise. There was also a deep, patriarchal strain that emerged during the postbellum period, when black men argued that since they had to protect their women and children, they needed to lead. While many black women found this offensive and saw it as nothing more than sexism and misogyny, some found it heartening to be protected from outside forces that were trying to exploit them. This is why women like the poet Frances Ellen Watkins Harper supported the “Negro Suffrage” side of the argument, agreeing with Frederick Douglass that it was more important to support the enfranchisement of black men over black women. This position was also supported by the American Woman Suffrage Association (AWSA), a white organization that attracted some black women to its ranks. This group focused on state legislatures, and some black women suffragists like Watkins Harper served as state delegates from Pennsylvania at national conventions. But staunch suffragists like Anna Julia Cooper were particularly effective in emphasizing to black women that they required the ballot to counter the belief that black men’s experiences and needs were the same as theirs. [CITE]
Toward the end of the nineteenth century, many black women joined local women’s clubs and further organized at the national level in order to accomplish their aims for change and reform. In 1896, the National Federation of Afro-American Women merged with the National League of Colored Women to form the National Association of Colored Women (NACW), with activist Mary Church Terrell as its first president. Unwelcome in the mainstream suffrage movement, black women’s clubs and organizations became central to their support of women’s suffrage because they believed in the power of the vote. Club members organized voter education campaigns in their communities, circulated petitions calling for women’s suffrage, and worked in political campaigns to obtain the ballot. By 1916, the NACW passed a resolution in support of the woman suffrage amendment, and its vehicle for action was the Equal Suffrage League, which mobilized clubs nationwide to support the movement.
By the early 1900s, luminaries like Terrell and the noted anti-lynching crusader Ida B. Wells-Barnett became more deeply and publicly engaged. Their participation, along with that of others in the civil rights movement, was encouraged and facilitated by a number of factors. The Fourteenth Amendment, one of three amendments passed during Reconstruction to abolish slavery and establish civil and legal rights for African Americans, was ratified in 1868 and granted citizenship to all persons born or naturalized in the United States—including former slaves—and guaranteed all citizens “equal protection of the laws.” This clearly repudiated the Supreme Court’s notorious 1857 Dred Scott v. Sandford decision, in which Chief Justice Roger Taney wrote that a black man, even if born free, could not claim rights of citizenship under the federal constitution. Unfortunately, in Plessy v. Ferguson (1896), the Court ruled that racially segregated public facilities did not violate the equal protection clause of the Fourteenth Amendment, a decision that helped establish Jim Crow laws throughout the South for decades to come. The amendment’s promise of equal protection, together with the Plessy decision that undermined it, became the impetus for civil rights activism. Many women became active in local chapters of the National Association for the Advancement of Colored People (NAACP), an organization founded in 1909 that took the lead in raising public awareness of lynchings. Besides creating organizations and clubs, they filed lawsuits, participated in boycotts, wrote articles, published books, and in some cases became persuasive speakers who traveled nationally and internationally on lecture tours. Because of their outspoken and divergent viewpoints from both the civil rights and suffrage movements, black women activists faced regular public disapproval, even from black men. Yet under severe restrictions, these new leading voices would pave the way for social and economic justice.
The United States gave women equal voting rights in all states when the Nineteenth Amendment was ratified in 1920. Black men, who had been given the vote with the ratification of the Fifteenth Amendment in 1870, had already been disenfranchised by paramilitary violence and by federal- and state-sanctioned voter discrimination laws that included poll taxes and literacy tests. These actions eventually precluded black women from voting as well. In the meantime, while former white suffragists from the North celebrated the vote, they were uninterested in fighting the racial discrimination other women were suffering, as white supremacy reigned victorious throughout the South. It would take another half-century — and a civil rights movement with black women in supporting roles — before the black community would become fully enfranchised, through the Voting Rights Act of 1965.
Another organization, the National Council of Negro Women (NCNW), was founded in 1935 by civil rights activist Mary McLeod Bethune, with the aim of improving the quality of life for African-American women and families. NCNW still exists today as a non-profit organization that reaches out through research, advocacy, and social services in the United States and Africa. In 1946 Mary Fair Burks founded the Women’s Political Council (WPC) as a response to discrimination in the Montgomery League of Women Voters, which refused to allow African-American women to join. The WPC not only sought to improve social services for the African-American community, they are famously known for instigating and supporting the Montgomery Bus Boycott. Black women were relentless in their attempts to make meaningful engagement with the suffrage movement, not only because they believed in the cause but because they knew it was important that they were present and fighting for their rights both as women and African Americans.
Activists like Ella Baker represented another path to activism. A strong and principled woman, Baker worked as a field organizer for the NAACP in the 1940s; by the end of World War II, her efforts had helped the NAACP grow its membership from 50,000 to over 450,000 members, with its largest expansion in the South. When Martin Luther King Jr. founded the Southern Christian Leadership Conference (SCLC), Baker, who joined in 1957 and became its first “temporary” executive director, often bumped heads with the men in the organization, especially male ministers like King. When sit-ins mainly led by college students swept the South, she signed on with the newly organized Student Nonviolent Coordinating Committee (SNCC), where she served as a mentor to the rising generation of student activists. An impassioned believer in participatory democracy, Baker took as her motto “strong people don’t need strong leaders.” [CITE]
Throughout the South, black women were crucial to the civil rights movement, serving as organizational leaders. They protested, participated, sat in, mobilized, created, energized, led particular efforts, and served as bridge builders to the rest of the community. Ignored at the time by white politicians and the media alike, with few exceptions they worked behind the scenes to effect all of the changes the movement sought. Daisy Bates created an NAACP youth council in her hometown of Little Rock, Arkansas, and later provided emotional and physical support to the “Little Rock Nine” who desegregated the high school in 1957. Septima Poinsette Clark served as the director of workshops at the Highlander Folk School, and later spread the concept of citizenship education throughout the South. Highlander created the Citizenship Education Schools, a program Clark led which trained over 25,000 people, and played a major role in registering black voters across the South. Later, the program was transferred to the SCLC because the state of Tennessee was threatening to close the school. And then there was Diane Nash, who was instrumental in organizing and leading the Nashville sit-ins while a student at Fisk, and then went on to become active in SNCC. After being denied the right to register to vote in Ruleville, Mississippi, Fannie Lou Hamer became a field secretary for SNCC and the driving force behind the Mississippi Freedom Democratic Party’s challenge to the all-white state delegation chosen for the 1964 Democratic National Convention in Atlantic City.
This is why it is important to note that Rosa Parks, despite her popular image as a tired seamstress who refused to give up her bus seat to a white person, was far from the neophyte many believed her to be. For years, she had been active in her local NAACP, and she attended workshops at the Highlander Folk School in 1955. It was Parks’ act of refusal that struck a chord with millions of black women who were tired of being verbally abused and physically assaulted. If it had been merely a protest about riding the bus, the movement might have faltered, but Parks’ action went to the very heart of black womanhood, and as a result, black women played a pivotal role in sustaining that movement into a yearlong boycott.
In fact, this iconography extends to all the women of the civil rights movement: leaders like Baker were overlooked until scholarship was written and introduced by African American women scholars. Myrlie Evers-Williams, Betty Shabazz, and Coretta Scott King are often seen as the dutiful wives and respectful widows who solely dedicated their lives to preserving their husbands’ legacies, when they achieved far more during and after their husbands’ lives. Angela Davis remains somewhat trapped in a time-machine bubble for her afro and clenched fists instead of the extensive scholarship she has produced over the past decades on economic, social, racial, and gender justice. It is also not surprising that Pauli Murray, the gender non-conforming activist and legal scholar who coined the term “Jane Crow” for the sex discrimination black women faced, and who was highly instrumental during the civil rights movement, is rarely mentioned. Murray worked with King but was critical of the lack of female leadership in his movement. Her book, States’ Laws on Race and Color (1951), has been referred to as the “bible” of Brown v. Board of Education, the 1954 Supreme Court ruling that declared separate public schools for black and white students unconstitutional.
But the civil rights movement was more than just a middle-of-the-road kind of movement; activists like Baker and Murray also had links to the nonaligned black Left, which expanded during the 1930s and 1940s in response to the ravages of the Depression as well as to heightened racial violence. Black women artists, writers, journalists, and activists including Grace Campbell, Esther Cooper Jackson, Louise Thompson Patterson, Marvel Cooke, Claudia Jones, Shirley Graham Du Bois, Alice Childress, Lorraine Hansberry, and Charlotta Bass were among the group of radicals that historian Erik McDuffie calls “Black left feminists.” [CITE] Although not all of them would embrace the term “feminist,” they understood that black women faced the “triple oppression” of race, gender, and class. Most adopted a global perspective that connected the fight for racial justice in the United States to liberation struggles in Africa, the Caribbean, and other Third World nations. They also joined a host of Left and labor organizations, including the Communist Party (USA), the Southern Negro Youth Congress, the National Negro Labor Council, the National Negro Congress, and the Civil Rights Congress, among others, to push through an equal rights agenda. These groups remained active largely in the North, and to a lesser extent in the South, until anticommunism and Cold War repression decimated their numbers.
While black women have been essential leaders across social justice movements, the labor movement is no exception. Despite historical segregation that kept women and black workers out of some of the most powerful labor unions in the United States, black women like Maida Springer Kemp, Clara Day, Addie Wyatt, Hattie Canty, and Johnnie Johnson, who dedicated their lives to organizing, were often ignored by the history books. They fought long and hard so that all working people could realize not just basic workplace rights but a life of dignity and respect. Their efforts attracted thousands of women and people of color to unions and laid the groundwork for the peak of the civil rights movement. They also worked closely with organizations such as the Urban League, the SCLC, CORE, and the NAACP to support orchestrated legal challenges that would eventually produce victories like the Brown decision, the Civil Rights Act of 1964, and the Voting Rights Act of 1965. Black women worked to translate these and other public victories into concrete local results and initiatives.
For all the work they did and for all they achieved, black women made little progress in convincing their male counterparts of their right to exercise full leadership in the civil rights movement. When the March on Washington in August 1963 was being planned, women were relied on as organizers, recruited as marchers, and featured as singers, but they were not granted a speaking voice at the March. Instead, their participation was to be relegated to A. Philip Randolph saying a few words about their contributions to the struggle and then inviting a group of women to take a bow. When the women fought to be recognized, the male leaders allowed Daisy Bates to read a speech written by the NAACP’s John Morsell. Indeed, this 142-word speech, which read more like a pledge to support male leadership, contained the only words spoken at any length by a woman at the March. Rosa Parks got to say eight words. Josephine Baker, who was also on the dais, was not allowed to speak. Parks and the actress Lena Horne were sent back to their hotel because Horne was trying to get press coverage for Rosa Parks, rather than Martin Luther King Jr., as the person who started the civil rights movement. Clearly, the civil rights leaders, who relied on and coveted these women’s hard work, had great difficulty moving beyond their belief that women were second-class citizens. Not only did they bar the women from speaking, they directed them to march separately from, and behind, the men. After the March, when the male leaders made their way to the White House to meet with President Kennedy, the women were left behind. This exclusion created a moment of clarity.
Quiet as it has been kept, Bethune’s NCNW scheduled a debriefing the day after the March to discuss the treatment the women had received, both during the March and in the movement more broadly. [CITE] NCNW held a second meeting in November 1963, where Murray spoke of the key roles women had played in civil rights work, only to be rebuffed at the March. The bigger issue was that black women not only had to carry on the civil rights struggle but also had to address, more aggressively, sex discrimination from their own men. Soon after, the women began to tie their civil rights work to a feminist perspective.
By the late 1960s, the civil rights movement shifted gears, and a new generation of black women came to the fore to play an important and influential role in the growing black power movement. After the assassinations of Medgar Evers, Malcolm X, and Martin Luther King Jr., the younger activists no longer saw nonviolent protest as a viable means of combating racism. They believed that desegregation was insufficient and that only through the deconstruction of white power structures could a space be made for black voices to give rise to a collective black power. While the civil rights and black power movements differed in strategies and tactics, their fundamental goals often converged. For example, while the civil rights movement owes its standing to Brown and the Civil Rights and Voting Rights Acts, it was black power activism that built the black political machines that actually got black people elected into office.
This younger generation of women held leadership roles in various black nationalist organizations, including the Black Panther Party for Self-Defense, while at the same time fighting against the sexist ideologies of their male members. They worked to bring attention to issues of gender identity, classism, racism, and sexism. Notable leaders include Elaine Brown (the first Chairwoman of the Black Panther Party), Angela Davis (leader of the Communist Party USA), Denise Oliver (Young Lords), Fran Beal (SNCC, Third World Women’s Alliance), Kathleen Cleaver (National Communications Secretary of the Black Panther Party), and Assata Shakur (member of the Black Liberation Army). All of these women were targeted by the United States government for their activism.
If the women’s suffrage movement emerged from the abolition movement, the women’s liberation movement grew out of the struggle for civil rights. When this second-wave feminist movement, led by Betty Friedan and later Gloria Steinem, gathered steam in the late 1960s, many black women felt alienated by the main planks of the movement, which largely advocated for women’s right to work outside the home and for the expansion of reproductive rights. Earning the power to work outside the home was not seen as an accomplishment by black women, since most of them had to work to support their families. But what frustrated them most was that the women’s liberation movement continued to do what the suffrage movement had done before it: hinder the involvement of minority women through racist sentiments, coupled with a narrowly oriented set of goals that favored white upper- and middle-class women.
As the black power movement went into decline in the late 1970s, many black women continued their fight within a growing black feminist movement. Newly formed organizations such as the Third World Women’s Alliance and the National Black Feminist Organization (NBFO) sought to address issues unique to African-American women, such as racism, sexism, and classism. Though the NBFO had disintegrated by 1977, the Combahee River Collective, formed in 1974 just a year after the NBFO and counting among its members scholars and writers such as Cheryl Clarke, Gloria Akasha Hull, Audre Lorde, and Barbara Smith, turned out to be one of the most important black feminist organizations of our time, arguing that the liberation of black women would lead to freedom for all people. Perhaps the most notable piece to come out of the group was the Combahee River Collective Statement, which helped feminists expand on ideas about identity politics.
Though invited to participate within the women’s liberation movement, many women of color cautioned against its single focus on sexism, finding it an incomplete analysis without the consideration of race or class, which shaped access to education, health care, housing, jobs, and legal justice, while poverty and violence permeated their lives. Mobilization efforts on the part of Latinas, Native Americans, and Asian American women also challenged traditional feminist and civil rights organizations to broaden their representation to include an even wider diversity of women’s voices. Likewise, while many lesbians saw commonalities with women’s liberation through the shared goal of liberation from sex-based oppression, which included fighting against homophobia, others believed that the focus was too narrow to confront the issues they faced. In 1989, legal scholar Kimberlé Crenshaw coined the term “intersectionality,” arguing that the experience of being a black woman cannot be understood in terms of being black or of being a woman considered independently; instead, the analysis must account for the ways interacting identities frequently compound upon and reinforce one another.
Between 1970 and 1990, the civil rights movement declined and the push for the greater integration of blacks into mainstream American society weakened. Several factors contributed to this development. First, many of the key civil rights leaders of the 1950s and 1960s passed from the scene, chiefly through death. Those who replaced them generally lacked the leadership skills, talent, and charisma to capture and sustain a national movement. More importantly, the movement’s success in eliminating de jure discrimination in crucial areas, and in getting many white Americans to see how racial discrimination violated the nation’s basic creed of equality of opportunity, led many to believe that civil rights laws were well settled. As a result, the interest of many African American civil rights groups and activists began to diminish as they gave greater attention to taking advantage of the opportunities that success had wrought.
Women’s leadership also waned through death, retirement, or changing paths and directions. By the early 1970s, Diane Nash had removed herself from the national spotlight. Fannie Lou Hamer died of complications of hypertension and breast cancer at age fifty-nine in 1977. Pauli Murray died of cancer in 1985. Ella Baker worked until her death at age eighty-three in 1986. Elaine Brown returned to school, lived in France for most of the 1990s, and later returned to the U.S. to focus on prison reform. Once Angela Davis was acquitted, she completed her degrees, focused on scholarship by writing books, worked on social justice initiatives, and continued teaching. And after Assata Shakur was convicted of the murder of New Jersey State Trooper Werner Foerster, she escaped from prison in 1979 while serving a life sentence and has lived in exile in Cuba since 1984. These and other circumstances left a void of black women activists that could not easily be filled.
Organizations that led the fight for civil rights and desegregation were floundering as well. By early 1967, SNCC was approaching bankruptcy as liberal funders refused to support its overt militancy. After leadership troubles under Stokely Carmichael and H. Rap Brown (Jamil Abdullah Al-Amin), SNCC was no longer an effective organization and largely disappeared in the early 1970s. After the assassination of Martin Luther King Jr. in 1968, SCLC leadership passed to Ralph Abernathy, who presided until 1977. After Abernathy stepped down, the organization suffered from leadership woes before reinventing itself as a national and international human rights organization. In the 1990s, the NAACP ran into debt and scandal and was saved when Myrlie Evers-Williams stepped in as its president in 1995 to clean up the mess. Roy Innis, who served as National Chairman of CORE until his death in 2017, redirected the organization’s mission during the 1970s to support conservative political positions. Battered by government repression, the killings and arrests of its members, and internal conflict, the Black Panther Party dissolved around 1982. And finally, the National Urban League, whose current president (as of this writing) is Marc Morial, former mayor of New Orleans, appears to be one of the few organizations able to transition smoothly into a new era as it continues its efforts to promote economic and political empowerment and to reduce violence and poverty in urban black America.
During the 1980s and 1990s, blacks faced affirmative-action backlash, a crack cocaine epidemic that had a devastating effect on black communities, and modern forms of social and judicial discrimination that resulted in blacks having the highest rates of incarceration of any minority group, especially in the South. On the other hand, more blacks moved into the American mainstream financially, with the proportion of black families with median incomes of $50,000 or more expanding from 5 percent in 1969 to 14 percent by 1990. [CITE] But while blacks were finally making substantial financial and political strides in the post-civil rights era, they found themselves facing resistance from white people who saw the advancement of African Americans as robbing them of their entitlement to middle-class privileges, a backlash atmosphere that advanced the conservative movement.
While national organizations lost their power and movements faded away, local grassroots initiatives, aided by black local politicians, sprang up around the country in black communities to rail against the crack cocaine epidemic and other social ills, like police brutality and housing discrimination. By the turn of the twenty-first century, the field of grassroots advocacy had exploded over the preceding decade. Previously, grassroots initiatives were defined by a “boots on the ground” strategy, which had worked very well for black women activists in the past. Now, with the expansion of technology and social media, grassroots advocacy has in many ways superseded traditional organizations, expanding as professionals at the top of their game make use of emails, tweets, and social media shares.
Foot Soldiers of the Civil Rights Movement. (2019, Dec 12). Retrieved from https://papersowl.com/examples/foot-soldiers-of-the-civil-rights-movement/
This week we’re running a series in collaboration with the Australian Red Cross Blood Service looking at blood: what it actually does, why we need it, and what happens when something goes wrong with the fluid that gives us life. Read other articles in the series here.
Blood is vitally important for our body. As it’s pumped around our body through veins and arteries, it transports oxygen from our lungs to all of the other organs, tissues and cells that need it. Blood also removes waste products from our organs and tissues, taking them to the liver and kidneys, where they’re removed from the body.
About 45% of our blood consists of different types of cells and the other 55% is plasma, a pale yellow fluid. Blood transports nutrients, hormones, proteins, vitamins and minerals around our body, suspended in the plasma. These substances provide energy to our cells and also signal for growth and tissue repair. The average adult has about five litres of blood.
The different types of blood cells include red blood cells, platelets, and white blood cells, and these are produced in the bone marrow, in the centre of our bones.
Red blood cells
Red blood cells are essential for transporting oxygen around the body. Red cells are very small, donut-shaped cells with an average lifespan of 120 days within the body. They contain a protein called haemoglobin, which contains iron and binds very strongly to oxygen, giving blood its red colour.
Red cells are flexible and able to squeeze through even the tiniest of our blood vessels, called capillaries, to deliver oxygen to all of the cells in our body. When the red cells reach our organs and tissues, haemoglobin releases the oxygen.
Platelets
Platelets are even smaller than red blood cells. In fact, they are tiny fragments of another much larger type of cell, called a megakaryocyte, which is located in the bone marrow. Platelets are formed by budding off from the megakaryocyte. Platelets have an average lifespan of eight to ten days within the body, so they are constantly being produced. When body tissue is damaged, chemicals are released that attract platelets.
Platelets clump together and stick to the damaged tissue, which starts to form a clot to stop bleeding. Many of the proteins that help the clot to form are contained in plasma. Platelets also release growth factors that help with tissue healing.
White blood cells
Blood also carries white blood cells, which are an essential part of our immune system. Some white cells are able to kill micro-organisms by engulfing and ingesting them. Other types of white cells, called lymphocytes, release antibodies that help to fight infection.
Blood cells don’t act alone; they work together for normal body function. For example, when we cut our skin, platelets help plug the cut to stop it bleeding, plasma delivers nutrients and clotting proteins, white cells help to prevent the cut from becoming infected, and red cells deliver oxygen to help keep the skin tissue healthy.
Patients sometimes need a blood transfusion when they are having surgery or cancer treatment, or when they are seriously injured. This is usually because they have lost a lot of platelets, red cells or plasma, or because their cancer treatment has killed many of their blood cells.
In Australia, blood is donated by voluntary blood donors at the Australian Red Cross Blood Service. A typical whole blood donation is just over 450 mL, and it takes around ten minutes to collect. Every time a donation is made, the donor is screened for infectious diseases such as hepatitis and HIV, so these aren’t transferred to the patient receiving the blood.
After donation, the blood is separated into its different parts: platelets, red cells and plasma, which are known as blood components. White cells are removed because they can cause problems in patients who receive them. Once the blood has been separated, it’s stored until it’s needed by hospitals. The red blood cells are stored in a refrigerator and the plasma is frozen. The red cells can be stored for six weeks, and the plasma can be stored for up to a year. Platelets can only be stored for five days. When a hospital needs blood, it’s packed into special blood shippers and transported to the hospital blood bank to be transfused.
Use this nonfiction comprehension worksheet to help second and third graders learn all about Misty Copeland, the first African American woman to become a principal dancer at the American Ballet Theatre.
Then what happened? In this activity, students will choose stop and jot sticky notes from different parts of the story to practice their sequencing and summarizing skills as they respond to questions about the literature.
Introduce students to the inspiring environmental activist Wangari Maathai. Children will read a short biography about the first African woman to win the Nobel Peace Prize and answer nonfiction comprehension questions about the text.
Essays, books, and scripts all have one thing in common: purposeful organization. Literary analysis worksheets show students how to craft the perfect essay, no matter the assignment. With reading activities, writing prompts, graphic organizers, reading logs, and more, students gain skills necessary to succeed in writing. Literary analysis worksheets take the struggle out of essay writing, so your child can focus.
About the book
The pharynx is a special structure in the human body: a conical passage connecting the oral and nasal cavities to the junction of the esophagus and trachea. In addition, another passage, called the Eustachian tube or pharyngotympanic tube, links the nasopharynx to the middle ear and allows air pressure in the middle ear cavity to be equalized. The pharynx moves food or water from the mouth to the esophagus and also moves air from the nasal and oral cavities to the larynx. The larynx is a triangle-shaped box that consists largely of cartilages with surrounding structures and is essential for phonation. The pharynx therefore plays an important role in both the respiratory and digestive systems. In other words, the pharynx is an incredibly dynamic rendezvous site of gas (air), liquid (water), and solid (food).
Although the pharynx is relatively small compared to other organ systems, a person's basic life will be affected greatly without a well-functioning pharynx. For example, obstructive sleep apnea syndrome, the most common sleep-related breathing disorder, is characterized by repetitive collapse and closing of the pharynx during sleep. The recurrent episodes of apneas or hypopneas may interfere with restorative sleep, in combination with disturbances in blood oxygenation, with possible negative consequences for health and quality of life. In addition, a range of other conditions can impair the functions the pharynx and larynx serve, such as digestion, respiration, and phonation.
Based on these concepts, the book incorporates updated developments as well as future perspectives in the ever-expanding field of the pharynx. The book will also be a valuable reference for otolaryngologists, pulmonologists, gastroenterologists, pediatricians, neurologists, rehabilitation physicians, speech-language pathologists, audiologists, specialists in sleep medicine, researchers in clinical and basic medicine, and experts in science and technology.
Habitat: Native to eastern North America and most commonly found in wet woods and swamps.
Description: The chokeberries (Aronia) are two species of deciduous shrubs. The chokeberries are often mistakenly called chokecherries, which is the common name for Prunus virginiana. Further adding to the ambiguity, there is a cultivar of Prunus virginiana named ‘Melanocarpa’ that is easily confused with Aronia melanocarpa. In fact, the two plants are only distantly related within the Rosaceae.
The two species are readily distinguished by their fruit color, from which the common names derive. The leaves are alternate, simple, and oblanceolate with crenate margins and pinnate venation; in autumn the leaves turn a bold red color.
Dark trichomes are present on the upper midrib surface. The flowers are small, with 5 petals and 5 sepals, and produced in corymbs of 10-25 together. Hypanthium is urn-shaped. The fruit is a small pome, with a very astringent, bitter flavor; it is eaten by birds (birds do not taste astringency and feed on them readily), which then disperse the seeds in their droppings. The name “chokeberry” comes from the astringency of the fruits which are inedible when raw.
Chokeberries are very high in antioxidant pigment compounds, like anthocyanins. They share this property with chokecherries, further contributing to confusion.
Aronia is closely related to Photinia, and has been included in that genus in some classifications (Robertson et al. 1991).
Red chokeberry, Aronia arbutifolia (http://www.illinoiswildflowers.info/savanna/plants/bl_chokeberry.htm), grows to 2-4 m tall, rarely up to 6 m. Leaves are 5-8 cm long and densely pubescent on the underside. The flowers are white or pale pink, 1 cm diameter, with glandular sepals. The fruit is red, 4-10 mm diameter, persisting into winter.
Black chokeberry, Aronia melanocarpa, tends to be smaller, rarely exceeding 1 m tall, rarely 3 m, and spreads readily by root sprouts. The leaves are smaller, not more than 6 cm long, with terminal glands on leaf teeth and a glabrous underside. The flowers are white, 1.5 cm diameter, with glabrous sepals. The fruit is black, 6-9 mm diameter, not persisting into winter.
The two species can hybridise, giving the Purple Chokeberry, Aronia x prunifolia. Leaves are moderately pubescent on the underside. Few to no glands are present on the sepal surface. The fruit is dark purple to black, 7-10 mm in diameter, not persisting into winter.
The chokeberries are attractive ornamental plants for gardens. They are naturally understory and woodland edge plants, and grow well when planted under trees. Chokeberries are resistant to drought, insects, pollution, and disease. Several cultivars have been developed for garden planting, including A. arbutifolia ‘Brilliant’, selected for its striking fall leaf color, and A. melanocarpa ‘Viking’ and ‘Nero’, selected for larger fruit suitable for jam-making. Juice from these berries is astringent and not sweet, but high in vitamin C and antioxidants. The berries can be used to make wine or jam after cooking. Aronia is also used as a flavoring or colorant for beverages or yogurts.
The red chokeberry’s fruit is more palatable and can be eaten raw. It has a sweeter flavor than the black species and is used to make jam or pemmican.
Aronia melanocarpa (black chokeberry) has attracted scientific interest due to its deep purple, almost black pigmentation that arises from dense contents of phenolic phytochemicals, especially anthocyanins. Total anthocyanin content in chokeberries is 1480 mg per 100 g of fresh berries, and proanthocyanidin concentration is 664 mg per 100 g (Wu et al. 2004, 2006). Both values are among the highest measured in plants to date.
The plant produces these pigments mainly in the skin of the berries to protect the pulp and seeds from constant exposure to ultraviolet radiation. By absorbing UV rays in the blue-purple spectrum, pigments filter intense sunlight and thereby have a role assuring regeneration of the species. Brightly colorful pigmentation also attracts birds and animals to consume the fruit and disperse the seeds in their droppings.
Anthocyanins not only contribute toward chokeberry’s astringent property (that would deter pests and infections) but also give Aronia melanocarpa extraordinary antioxidant strength that combats oxidative stress in the fruit during photosynthesis.
A test tube measurement of antioxidant strength, the oxygen radical absorbance capacity or ORAC, demonstrates chokeberry with one of the highest values yet recorded — 16,062 micromoles of Trolox Eq. per 100 g.
Consumers are increasingly encouraged to raise their intake of antioxidant-rich plant foods from colorful sources like berries, tree or citrus fruits, vegetables, grains, and spices. Accordingly, a deep blue food source such as chokeberry yields anthocyanins in high concentrations per serving, indicating potential value as a functional food or nutraceutical.
Analysis of anthocyanins in chokeberries has identified the following individual chemicals (among hundreds known to exist in the plant kingdom): cyanidin-3-galactoside, epicatechin, caffeic acid, quercetin, delphinidin, petunidin, pelargonidin, peonidin and malvidin. All these are members of the flavonoid category of antioxidant phenolics.
For reference to phenolics, flavonoids, anthocyanins and similar plant-derived antioxidants, Wikipedia has a list of phytochemicals and foods in which they are prominent.
Chokeberries’ rich antioxidant content may be beneficial as a dietary preventative for reducing the risk of diseases caused by oxidative stress. Among the models under evaluation where preliminary results show benefits of chokeberry anthocyanins are colorectal cancer (Lala et al. 2006), cardiovascular disease (Bell & Gochenaur 2006), chronic inflammation (Han et al. 2005), gastric mucosal disorders (peptic ulcer) (Valcheva-Kuzmanova et al. 2005), eye inflammation (uveitis) (Ohgami et al. 2005) and liver failure (Valcheva-Kuzmanova et al. 2004).
Disclaimer: The information presented herein is intended for educational purposes only. Individual results may vary, and before using any supplements it is always advisable to consult with your own health care provider.
What did Jane Addams establish?
Jane Addams was the second woman to receive the Peace Prize. She founded the Women's International League for Peace and Freedom in 1919, and worked for many years to get the great powers to disarm and conclude peace agreements.
What was the impact of Jane Addams on the development of the women's movement?
Jane Addams was 29 when she and two friends opened Hull House on Chicago's tough west side in 1889. She co-founded the first national women's labor union and two major civil rights groups. She also lobbied for an eight-hour workday and an end to child labor.
Who is Jane Addams in social work?
The life and work of Jane Addams (1860-1935), founder of Hull House and Nobel Peace Prize winner, demonstrated the ethics and values that became the basis of the 100-year-old social work profession.
Where is Jane Addams from?
Cedarville, Illinois, United States
Who was the first female sociologist?
What did Karl Marx do for sociology?
Marx developed a theory that society progressed through a class conflict between the proletariat, the workers, and the bourgeoisie, the business owners and government leaders. Marx's theories about society not only helped form the discipline of sociology but also several perspectives within sociology.
How did Karl Marx view society?
Karl Marx asserted that all elements of a society's structure depend on its economic structure. Additionally, Marx saw conflict in society as the primary means of change. Economically, he saw conflict existing between the owners of the means of production—the bourgeoisie—and the laborers, called the proletariat.
Why is the Marxist theory important?
Marxism is a social, political, and economic theory originated by Karl Marx, which focuses on the struggle between capitalists and the working class. ... He believed that this conflict would ultimately lead to a revolution in which the working class would overthrow the capitalist class and seize control of the economy.
What are the main ideas of Karl Marx's theory?
He believed that no economic class—wage workers, land owners, etc. should have power over another. Marx believed that everyone should contribute what they can, and everyone should get what they need. His most famous book was the Communist Manifesto.
What was Karl Marx theory of socialism?
Socialism is a post-commodity economic system and production is carried out to directly produce use-value rather than toward generating profit. ... In this work, Marx's thinking is explored regarding production, consumption, distribution, social impact of capitalism.
What is Karl Marx theory?
Like the other classical economists, Karl Marx believed in the labor theory of value to explain relative differences in market prices. This theory stated that the value of a produced economic good can be measured objectively by the average number of labor-hours required to produce it.
What did Karl Marx think of the bourgeoisie?
In Marxist philosophy, the bourgeoisie is the social class that came to own the means of production during modern industrialization and whose societal concerns are the value of property and the preservation of capital to ensure the perpetuation of their economic supremacy in society.
What did Karl Marx say about the proletariat?
In the theory of Karl Marx, the term proletariat designated the class of wage workers who were engaged in industrial production and whose chief source of income was derived from the sale of their labour power.
Does the bourgeoisie still exist?
The terms bourgeois, petite (or “petty”) bourgeois and proletarian are today rarely employed in serious economic or social analysis. They are still sometimes used in left-wing circles, usually imprecisely, with primarily cultural connotations and often in a derogatory way.
Who was Karl Marx referring to when he wrote about the bourgeoisie?
The essence of Marxism is a power struggle between two classes: the bourgeoisie, which we already defined, and the proletariat, or working class. Marx found the bourgeoisie to be at fault for the problems faced by the proletariat.
Is bourgeois rich?
Bourgeoisie is often used insultingly. In between the very poor and the super rich is the bourgeoisie. People have traditionally viewed the bourgeoisie as kind of crass and pretentious.
Why is proletariat important?
Proletarians perform most of the work in capitalist economies, but they have little or no control over their work-lives or over the wealth that they produce. ... However, it was less its size than its structural and strategic location that made the proletariat important for Marx.
What was Karl Marx influenced by?
What caused Marxism?
Marxism is a philosophy created in response the Industrial Revolution, with Capitalist becoming wealthier and the working class barely making enough salary to live off of, Karl Marx and Friedrich Engels analysed this "class struggle" in The Communist Manifesto.
What event was influenced by Marx's philosophy?
Prometheus is the noblest saint and martyr in the calendar of philosophy. In 1841 Marx, together with other Young Hegelians, was much influenced by the publication of Das Wesen des Christentums (1841; The Essence of Christianity) by Ludwig Feuerbach.
How did Feuerbach influence Marx?
In the first part of his book, which strongly influenced Marx, Feuerbach analyzed the “true or anthropological essence of religion.” Discussing God's aspects “as a being of the understanding,” “as a moral being or law,” “as love,” and others, he argued that they correspond to different needs in human nature.
- What was Jane Addams fighting for?
- Why did Jane Addams win the Nobel Peace Prize?
- What is the title of the book that is published of Jane Addams?
- What was the purpose of Jane Addams Hull House?
- What was Jane Addams biggest accomplishment?
- Who were Jane Addams parents?
- Was Jane Addams married?
- What did Jane Addams write?
- What did Jane Addams believe in?
- What was Jane Addams Legacy?
- What is the purpose of the Hull House?
- How is Jane Addams remembered?
- What did Jane Addams do for the Progressive Era?
- What was the impact of Florence Kelley?
- What was the Hull House Apush?
- When did Jane Addams live?
- How did women's lives change during the Progressive Era?
- Did Jane Addams get married? |
Encapsulation in Java is the inclusion of all methods and variables needed for a Java object to function, contained within the object itself. Encapsulation, along with abstraction, polymorphism and inheritance, is one of the four key concepts in object oriented programming (OOP). Encapsulation is similar across object-oriented languages.
In OOP, objects are the first things a programmer considers when designing a program. They are also the units of code that are eventually derived from the process and what actually runs in the computer system. Each object is an instance of a particular class or subclass with the class's own methods and variables.
Java offers four different "scope" realms--public, protected, private, and package--that can be used to selectively hide data constructs. To achieve encapsulation, the programmer declares the class variables as “private” and then provides what are called public “setter and getter” methods which make it possible to view and modify the variables. A Java object publishes its interfaces, which consist of public methods and instantiated data, enabling other objects to interact with it without the object’s inner workings being revealed. Data hiding ensures that someone maintaining the code can’t inadvertently point to or access the wrong data. Programmers creating objects to interact with existing objects need not know how the encapsulated code works specifically, just how to use its interface.
Java object encapsulation enables the reuse of code that has already been tested. The inherent modularity of objects means that their source code can be written and maintained independently from the source code for other objects and makes them portable within a system. Furthermore, if there is a problem with a given object, it can be removed and replaced without affecting the rest of the program.
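As a minimal sketch of the pattern described above (the class and method names here are illustrative, not from any particular tutorial), a private field exposed only through public getter and setter methods looks like this:

```java
// Minimal encapsulation sketch: the field is private, so other
// objects can read or change it only through the public interface.
public class Account {
    private double balance; // hidden state; not directly reachable

    // Public "getter": exposes the value without exposing the field.
    public double getBalance() {
        return balance;
    }

    // Public "setter"-style method: can validate before mutating.
    public void deposit(double amount) {
        if (amount > 0) {
            balance += amount;
        }
    }
}
```

Because `balance` is private, code maintaining or reusing this class cannot accidentally assign to it from outside; every change must pass through `deposit`, which can reject invalid input before the state is modified.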
*Perform an academic search and locate articles from peer-reviewed journals that discuss a health disparity found in the Asian American and Native American communities. Discuss your findings and the impact APNs can make to eliminate these disparities.
Make two posts: one for the Asian American and the other for the Native American community. Each response should be a minimum of 200 words, scholarly written, APA formatted, and referenced. A minimum of 2 references is required (other than your text).
According to the 2010 US census, the Asian American/Pacific Islander (AAPI) community comprises about 18.2 million people, or 3.6% of the US population. There are five major Asian populations in the US: Chinese, Korean, Japanese, Filipino, and Southeast Asian. AAPIs have origins in at least 29 Asian countries and 20 Pacific Islander countries, with more than 100 different languages spoken and just as many cultures and religions represented. The largest group of Asians is Chinese, followed by Filipinos. Many Asians came to the US seeking both a better life and employment.
Although Asian groups are very diverse in terms of culture, language, etiquette, and rules for interaction, a common thread of Confucian, Buddhist, and Taoist thought links their health care beliefs and practices, which are derived from Chinese tradition. When planning for or providing care to Pacific Islanders, the APN should utilize a Chinese frame of reference because less is known about this group.
The term Native American refers to the indigenous people of North, South, and Central America and includes American Indians and Alaska Natives (AI/AN) (Kosoko-Laski et al., 2009). About 5.2 million people identify as American Indian or Alaska Native. The American Indian population is concentrated in 26 states, with most in the western part of the country. The largest AI populations by tribe are Cherokee, Navajo, Choctaw, Mexican American Indian, Chippewa, Sioux, Apache, Blackfeet, Cree, and Iroquois. The largest AN populations are Yup'ik, Inupiat, Tlingit-Haida, Alaskan Athabascan, Aleut, and Tsimshian.
This population is highly diverse with 573 federally recognized tribes and several others not federally recognized. Federally recognized tribes are provided health and education assistance from the Indian Health Service, US Department of Health and Human Services. Depending on their geographical location, cultural practices, and language, life situations differ considerably.
Learning objectives for the module:
At the end of this module, the student will be able to:
- Discuss health and illness behaviors of Asians/Pacific Islanders
- Identify current healthcare problems of Asians/Pacific Islanders
- Describe cultural barriers to health care for the Native American
- Discuss health disparities of the Native American population
- Andrews & Boyle, Chapters 7 and 10
- Out of the shadows: Asian Americans, Native Hawaiians, and Pacific Islanders: https://wilkes.idm.oclc.org/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=ccm&AN=105191181&scope=site
- A Nationwide Population Based Study Identifying Health Disparities between AI/AN and the General Population: https://wilkes.idm.oclc.org/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=ccm&AN=106340971&scope=site |
Do you struggle with teaching the various strategies for adding and subtracting decimals?
Using visual models in addition to just teaching the standard algorithm can be a bit tricky, but I’m going to show you a few quick tips to make this easier.
Check out my tutorial videos that will take you step-by-step through a sample problem for both adding and subtracting decimals using the standard algorithm and a visual model. Keep reading past the videos to see my process and find out how you can share this information with the parents in your class.
Materials Used in the Videos
- dry erase pocket* – get a class set
- Expo dry erase markers (thin)*
- blank adding and subtracting decimals sample page
*Amazon Affiliate Links
Do You Want the Template From the Video?
Are you getting ready to introduce this skill to your students?
Get the parents in your class prepared (especially if they will need to help their child at home during remote learning) by sending them a quick parent guide. This will provide them with links to the two tutorial videos so they can be in the loop. It’s a single-page PDF, so you can send it in an e-mail, newsletter, or post it on any site you use to send information to students and parents.
How I Teach Adding and Subtracting Decimals
When I introduce adding and subtracting decimals, I teach the visual model in conjunction with the standard algorithm. My students see what the decimal looks like when it’s represented in a hundredths grid as part of a whole.
- I begin with guided notes. They have step-by-step instructions and graphics on how to create a visual model as well as the standard algorithm.
- We go through an example problem together. I have a student demonstrate by annotating over the notes page on the Promethean Board while the others complete it in their composition book.
- Students then work independently creating visual models for adding and subtracting decimals. Then, they solve each of the problems using the standard algorithm. Finally, they compare their answers from both strategies to make sure they match.
- After they’ve started to show mastery over a few days, I have them work with a partner using our Task Tents™ review activity.
- I assess their understanding with a quiz.
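For anyone who wants to verify sample answers programmatically before class (the numbers below are invented for illustration, not taken from the guided notes), exact decimal arithmetic reproduces what the paper-and-pencil standard algorithm gives:

```java
import java.math.BigDecimal;

// Checking sample add/subtract-decimals problems the way the standard
// algorithm does: align place values, then operate exactly.
// BigDecimal avoids binary floating-point error, so results match
// the students' paper work. (Sample numbers are invented.)
public class DecimalCheck {
    public static void main(String[] args) {
        BigDecimal sum = new BigDecimal("0.75").add(new BigDecimal("0.40"));
        BigDecimal difference = new BigDecimal("1.20").subtract(new BigDecimal("0.45"));
        System.out.println(sum);        // prints 1.15
        System.out.println(difference); // prints 0.75
    }
}
```

Using string constructors like `new BigDecimal("0.75")` keeps the place values exact, which is why the output lines up with the hundredths-grid models.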
I have a resource that gives you everything you need. You get an introduction, guided practice, independent practice, Task Tents™ for review, and an assessment.
Teaching remotely? *Digital Version*
This skill can be taught and assessed during remote learning. Find out how you can teach adding and subtracting digitally.
Looking for More?
Check out some of these resources that use various strategies, including visual models, to teach common math skills:
- adding and subtracting decimals (digital and printable options)
- multiplying decimals (digital and printable options)
- dividing decimals (digital and printable options)
- adding and subtracting fractions (printable option – digital coming soon)
- multiplying fractions (printable option – digital coming soon)
- dividing fractions (printable option – digital coming soon)
- order of operations with grouping symbols (digital and printable options) |
Escape rooms and breakout classrooms have found their way into today’s elementary classroom! They are such a great way to increase student engagement and to promote collaborative learning. This multiplication escape activity is a perfect way to culminate your fourth grade multiplication unit.
Here’s what’s included:
-Clue 1- Students match the multiplicative comparison statements. They will use a ruler to draw a line from the dots on the left to the corresponding dots on the right. The letters in the center of the page that are NOT crossed out give the code for the first lock.
-Clue 2- Students find the factors of the four numbers on their recording sheet. They should look closely at the factors and find the four factors that each number has in common. That number combination will unlock the next clue.
-Clue 3- Students solve each of the multiplication problems. As they multiply, students should find the product on the bottom of the recording sheet. Students will use the hieroglyphic decoder to determine the mystery message.
-Clue 4- Students will complete the Sieve of Eratosthenes and use that information to find the next clue.
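For teachers curious what the sieve in Clue 4 actually computes, here is a short sketch (the limit of 30 is chosen arbitrarily; the activity's actual chart size may differ):

```java
import java.util.ArrayList;
import java.util.List;

// Sieve of Eratosthenes: repeatedly "cross out" multiples of each
// surviving number; whatever is never crossed out is prime.
public class Sieve {
    public static List<Integer> primesUpTo(int n) {
        boolean[] crossedOut = new boolean[n + 1];
        List<Integer> primes = new ArrayList<>();
        for (int p = 2; p <= n; p++) {
            if (!crossedOut[p]) {
                primes.add(p); // p survived every earlier pass
                for (int m = p * p; m <= n; m += p) {
                    crossedOut[m] = true; // multiple of p, not prime
                }
            }
        }
        return primes;
    }
}
```

Calling `Sieve.primesUpTo(30)` returns the primes 2 through 29, the same numbers students circle on a hundred chart version of the activity.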
This can be used with Breakout Edu boxes or with your own boxes and locks. If you do not have boxes and locks, you may use the digital version, where students submit their answers on Google Forms. |
Static Electricity, Charge, and the Conservation of Charge
A quantum number that determines the electromagnetic interactions of some subatomic particles; by convention, the electron has an electric charge of -1 and the proton +1, and quarks have fractional charge.
Electric charge is a physical property of matter. It is created by an imbalance in a substance's number of protons and electrons. The matter is positively charged if it contains more protons than electrons, and it is negatively charged if it contains more electrons than protons. In both instances, charged particles will experience a force when in the presence of other charged matter.
Charges of like sign (positive and positive, or negative and negative) will repel each other, whereas charges of opposite sign (positive and negative) will attract each other.
Charge, like matter, is essentially constant throughout the universe and over time. In physics, charge conservation is the principle that electric charge can neither be created nor destroyed. The net quantity of electric charge, the amount of positive charge minus the amount of negative charge in the universe, is always conserved.
For any finite volume, the law of conservation of charge (Q) can be written as a continuity equation:

Q(t2) = Q(t1) + Qin - Qout

where Q(t1) is the charge in the system at a given time, Q(t2) is the charge in the same system at a later time, Qin is the charge that has entered the system between the two times, and Qout is the amount of charge that has left the system between the two times.
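As a quick numeric check (the charge values are invented purely for illustration), plugging sample charges into the continuity relation defined by Q(t1), Qin, and Qout above:

```latex
% Worked example of charge continuity; the 5 C, 2 C, and 1 C values
% are invented for illustration, not taken from the text.
Q(t_2) = Q(t_1) + Q_{\mathrm{in}} - Q_{\mathrm{out}}
       = 5\,\mathrm{C} + 2\,\mathrm{C} - 1\,\mathrm{C}
       = 6\,\mathrm{C}
```

The net charge changes only by what flows across the boundary; no charge appears or disappears inside the volume.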
This does not mean that individual positive and negative charges cannot be created or destroyed. Electric charge is carried by subatomic particles such as electrons and protons, which can be created and destroyed. For example, when particles are destroyed, equal numbers of positive and negative charges are destroyed, keeping the net amount of charge unchanged.
Static electricity is when an excess of electric charge collects on an object's surface. It can be created through contact between materials, a buildup of pressure or heat, or the presence of a charge. Static electricity can also be created through friction between a balloon (or another object) and human hair. It can be observed in storm clouds as a result of pressure buildup; lightning is the discharge that occurs after the charge exceeds a critical concentration.
Popcorn reading, bump, and round robin reading do not make for a great middle school classroom! Often I am asked what types of reading should occur in a middle school English classroom. Three types of reading should be part of every middle school Language Arts classroom.
Instructional Interactive Read Aloud
Reading can and should be taught. An interactive read aloud allows the teacher to model in a think aloud how they apply a reading strategy. This modeling during a read aloud builds and/or enlarges students’ mental model of how a strategy works. For this aspect of instruction, I suggest that the teacher models with a short text that matches the genre and/or theme that ties a reading unit together. Short texts can include a picture book, an excerpt from a longer text, a folk or fairy tale, myth or legend, a short, short story, or an article from a magazine or newsletter.
Here are some skills and strategies that you can model in interactive read aloud lessons:
- Making inferences
- Identifying central ideas and themes
- Locating important details
- Skimming to find details
- Author’s purposes
- Purposes of informational texts (nonfiction) and literature (fiction)
- Literary Elements and how each supports comprehension: setting, protagonist, antagonists, plot, conflicts, other characters, climax, denouement
- Informational text structures and how these support comprehension: description, compare/contrast, cause/effect, problem/solutions, sequence, question/answer
- Word choice as a guide to pinpointing mood or tone
- Vocabulary building with an emphasis on general academic vocabulary, figurative language, and comprehension, using roots, prefixes, suffixes, discussing concepts, diverse word meanings, and different forms of a word.
Instructional Reading

Instructional reading should happen during class. Students need to read materials at their instructional reading level—about 95% reading accuracy and about 85% comprehension. Organizing instructional reading around a genre and theme—for example, biography with a theme of obstacles—permits students to read different texts and discuss their reading around the genre and theme.
As an example, the class opens with an interactive read aloud lesson that lasts about ten minutes and occurs daily. Next comes a transition to instructional reading. Find books for students in your school library, your community public library, and in your class library and school's book room (if you have one). Instructional reading books stay in the classroom, as students from different sections may be using the same materials each day.
A teacher can have students chunk instructional texts by putting a sticky note at the end of every two to three chapters. When students reach a sticky note, they stop to discuss their books with a partner and then a group of four. During this stop-to-think time, students can write about their books, connect the theme to the book, and apply strategies and skills the teacher has modeled during interactive read-aloud lessons.
Partners should be no more than one year apart in reading levels so they have something to contribute to each other. Students reading far below grade level learn with the teacher.
Independent Reading

Students should always have a book they are reading independently. By encouraging them to read accessible books on topics they love and want to know more about, you develop their motivation to read!
Have students keep a Book Log of the titles they’ve read and reread. Do not ask students to do a project for each completed book, for that will turn them away from reading. A book talk a month and a written book review twice a year on independent reading is enough. Reflecting on independent reading is important; getting hung up on how you will hold students accountable is not very valuable. Remember, enthusiastic readers of any age do not summarize every chapter they read in a journal.
Students should complete thirty minutes of independent reading a night, and that should be their main homework assignment. Try to set aside two days a week for students to complete independent reading at school. Reading in a classroom is valuable!
Including the three layers of reading into a middle school curriculum brings balance, engagement, and motivation to the curriculum and holds the potential of improving reading for all students. When the teacher models how she/he applies a skill or strategy to a specific text, the teacher provides opportunities for all students to observe how a skill or strategy works. Instructional reading asks students to apply specific skills and strategies to texts that can improve students’ comprehension, vocabulary, and skill because these texts stretch students’ thinking with the teacher, the expert, as a supportive guide. Equally important is independent reading: easy, enjoyable texts that students self-select on topics, genres, or by authors that interest them—texts about two years below students’ instructional level.
Give this framework a try. The goal is to increase reading and help students learn how to become strategic readers.
- Posted by Evan Robb
- On December 19, 2018
- 0 Comments |
Ch. 9, p. 304
Applying the Principles of Mastery Learning
There's a classic Rolling Stones song called "Time Is on My Side." There probably couldn't be a less appropriate theme song for teachers. Yet because a significant element of mastery learning is the varying of time to meet individual needs, we cannot discuss the application of this approach without addressing realistic strategies for working within the time constraints of today's classrooms.
The basic assumption of mastery learning is that almost all students can learn the essential knowledge and skills within a curriculum when the learning is broken into its component parts and presented sequentially. To implement this approach effectively, teachers must meet several challenges.
The first challenge is to divide the content and/or skills into small units that you can present sequentially using sound teaching strategies. Then you will need to assess your students. The data you obtain will help you determine where in the sequence of the curriculum your instruction should begin. Quality assessment will allow you to link your instructional activities to individual student needs.
While you are involved in actual instructional activities, another challenge you will face is how to address the variations in student learning. For students who quickly grasp concepts, you will need to promote learning by developing relevant enrichment opportunities. This extension of basic concepts will allow these students to remain engaged in appropriate higher-level learning activities while simultaneously allowing you to extend the learning opportunities of the students who need more time to master the basics.
To increase the effectiveness of the instructional process and subsequent student learning, you should engage in ongoing formative evaluations: frequent assessments of student learning that will enable you to adjust your instruction to meet the individual needs of your students. You will then need to prepare summative evaluations, or final evaluations on each objective. These are likely to reveal that some learners still have not reached a mastery level of the basic knowledge/skills within the time frame you have provided. You will need to develop creative ways for reteaching, presenting alternative learning opportunities and/or extending practice. Strategies such as after-school corrective instruction, peer or cross-age tutoring, or use of paraprofessionals can help students achieve mastery of the essentials.
Because a mastery learning approach can be labor and time intensive, you will want to be selective in its application. Identifying the key aspects of the curriculum to which mastery learning is most relevant, and limiting the use of this approach to situations where prerequisite knowledge/skills are essential for future learning, will enhance your ability to apply mastery learning principles effectively. You and your students will feel you've made a wise investment of time and energy when the payoff is increased achievement for all.
When the ancestors of modern whales first dipped into the seas, it was the first step of a radical evolutionary journey: Arms morphed into flippers, and legs eventually vanished. But limbs weren't the only thing that needed to change. New whale fossils are revealing how the animal's ears were transformed in a process of compromise: Early whales had ears adapted for hearing both on land and underwater.
Land mammals perceive sound that enters the ear canal and hits the eardrum. The eardrum is connected to three small bones--ossicles called the malleus, incus, and stapes--that amplify and transmit the vibrations of the eardrum to the fluid-filled cochlea, where tiny hairs turn the sloshing into nerve signals.
When whales moved from land to water, they had to make two major changes to ensure that they kept hearing: They enlarged and rearranged the ossicles to transmit underwater sounds. And, in order to retain directional hearing, they had to prevent sounds from passing through the whole skull to the ear. (On land, airborne sounds bounce off the skull.) Whales accomplished this by isolating the cochlea in an air chamber. Instead of hearing through ear canals or the skull, whales channel sound to the ear via a fat pad in the lower jaw.
The earliest known whales, called pakicetids, lived 50 million years ago on land and had ears to match. By 35 million to 40 million years ago, the basilosauroid whales had ears that were essentially like those of modern toothed whales. To find out what happened in between, Sirpa Nummela and Hans Thewissen of the Northeastern Ohio Universities College of Medicine in Rootstown studied fossils that belong to two intermediate families.
Remingtonocetus and Indocetus--which had four limbs and probably were as aquatic as sea lions--both had heavy ossicles like modern whales do, and their jaws had space for a fat pad. But their cochleae were not completely surrounded by air, so their directional hearing would have been quite limited. These early whales could still detect sounds in the air via their ear canals, the team concludes in this week's issue of Nature. "This seems to be an early stage of experimentation," Thewissen says. The whales could hear underwater better than other land mammals did, but at the cost of not hearing as well on land.
"This is really documenting a very important evolutionary transition," says Zhe-Xi Luo of the Carnegie Museum of Natural History in Pittsburgh. Although the ear of these transitional forms is what he expected, what's "amazing" is that Thewissen's team found the ossicles, which are rarely preserved. |
Hummingbird Information
Diet / Feeding
Nectar (a sweet liquid inside flowers) is the first thing that comes to mind when considering what a hummingbird eats. Adult hummingbirds require copious amounts of energy, and so they crave the sugar in the nectar found in many tubular flowers.
Like bees, they are able to assess the amount of sugar in the nectar they eat; they reject flower types that produce nectar that is less than 10% sugar and prefer those whose sugar content is stronger.
As hummingbirds collect this nectar, their grooved tongues dart in and out of the flower up to 13 times per SECOND through bills that are usually long and straight or with a down-curve. The shape of the bill is adapted to the shape of the mostly tubular flowers found within their range.
Hummingbirds need to feed 5 to 8 times every hour, but each feeding lasts a minute or less (~30 to 60 seconds), and they must visit hundreds of flowers daily. Their extremely high metabolism requires them to eat enormous amounts of food. In fact, they eat up to 10 times their body weight in food every day - even more in preparation for migration when they usually double their weight to be able to meet the challenging demands associated with the pending migration that may require them to travel over two thousand miles.
Benefits for Native Plants. In the process of feeding, flowers benefit from cross-pollination as the hummingbird’s head becomes covered with pollen and spreads from flower to flower. As they move to the next flower, the pollen is deposited on the next flower, which is then able to produce seeds and fruit. Some native plants rely on hummingbirds for pollination and would not be able to exist without the "services" inadvertently rendered by the hummingbirds.
Other Foods: Hummingbirds cannot live on nectar alone; they require other nutrients, especially protein, as well as amino acids, vitamins, and minerals, obtained by consuming copious amounts of small insects and spiders, many of which they catch in flight. The percentage of insects in their diet increases drastically during the breeding season, particularly when feeding young, when their protein requirement rises sharply.
They will also take advantage of hummingbird feeders and many people put hummingbird feeders up to allow them to attract hummingbirds and observe them up close. The below link will provide information about what feeders work best and what hummingbird food recipe to use. Most commercially available products contain chemicals and other ingredients that may be harmful to hummingbirds - and are expensive to boot.
- Feeding Hummingbirds and other Nectar-feeding Birds the Right Way - Recipes and Instructions
Only about 10 to 15% of a hummingbird's time is spent eating; most of the rest is spent perching, self-preening, and sunbathing.
Hummingbirds also require water for drinking and bathing. They are particularly attracted to "moving water," such as water fountains or bird baths with water wigglers.
Species Research by Sibylle Johnson
German Victims and American Oppressors: The Cultural Background and Legacy of Meyer v. Nebraska
In a post-holocaust age it seems odd to talk about Germans as victims. Yet, during and after World War I German Americans were the targets of official harassment, political suppression, police brutality, and mob violence. These actions led to a number of cases that helped shape the meaning of the Fourteenth Amendment and the Bill of Rights. In this early history Meyer v. Nebraska1 looms large for two reasons. First, it was the immediate prelude to the beginning of the incorporation of the Bill of Rights through the Fourteenth Amendment. 2 Second, in an age when civil liberties were under assault from all sides, Meyer stands out as the most significant civil liberties victory of the World War I period. To understand this case, we must first examine the repression of German Americans and other immigrants in the United States from the late nineteenth century until the mid-1920s.
The repression was rooted in the hostility to foreigners that reemerged in the late nineteenth century as nativist organizations, such as the American Protective Association, began agitating for an end to non-Anglo-Saxon immigration. 3 Part of this hostility was tied to the rise of the prohibitionist |
William Colgan, Horst Machguth, Mike MacFerrin, Jeff D. Colgan, Dirk van As, Joseph A. MacGregor
As the Cold War began and the threat of nuclear attack became ever more real, the United States and the Kingdom of Denmark agreed that Greenland would host three American airbases to counter the nuclear threat from the Soviet Union. Along with these airbases came a plan to create a ballistic missile base beneath Greenland's ice sheet. Powered by a portable nuclear generator, Camp Century was built to host up to 200 soldiers, provide year-round accommodation, and, upon expansion, would have been capable of storing up to 600 ballistic missiles. The plans for the base were short-lived: built in 1959, Camp Century was abandoned in 1967 after 8 short years. When the Army Corps of Engineers (ACE) abandoned the base, little was done to dispose of waste materials. The ACE believed that accumulating snowfall and frigid temperatures would preserve the base and the waste left along with it. Upon its abandonment, only the reaction chamber of the nuclear generator was taken. 9,200 tons of physical waste (building infrastructure), 200,000 liters of diesel fuel, 24,000,000 liters of biological waste, and 1,200,000,000 Bq (a unit of radioactivity) of radioactive material were left at Camp Century. Aside from the diesel fuel, which was stored in rigid containers that have most likely been compromised, liquid waste was stored in unlined sumps. Experts believe that the continued degradation of the ice sheet will create conditions where this liquid waste will be able to permeate deeper into the ice, possibly into aquifers within the ice sheet, and even the sea.
If the waste left at Camp Century were to permeate deeper into the ice sheet, it could have grave environmental consequences. Not only would it contaminate a large swath of centuries-old ice that holds a plethora of scientific data, it would also risk making its way out to sea and contaminating a diverse ecosystem. As the United States, Canada, and Denmark look to exploit the resources around Greenland, estimated to be worth tens if not hundreds of billions of dollars, any pollution coming from Camp Century could pose a risk to development. Camp Century is also a key example of what we can expect to see as climate change causes ice sheets to shrink and ocean levels to rise. Rising ocean levels will engulf abandoned and derelict factories, refineries, waste dumps, and other industrial infrastructure close to the sea. The remobilization of waste at sites such as Camp Century will become more widespread as a result of climate change. Policymakers will have to work at both a national and international level to combat climate change and to shore up sites that pose a risk to the environment.
Tooth Decay (Caries or Cavities)
What is tooth decay (caries or cavities)?
Tooth decay is the disease known as caries or cavities. Tooth decay is caused by certain bacteria in the mouth that thrive on sugars and refined carbohydrates and produce acids as a side effect. The acids attach to the hard outer layer of your tooth (enamel) first. The acids eventually penetrate into the tooth to the softer mineral within the tooth (dentin). If not treated, the tooth decay can destroy large portions of the tooth and infect the nerve (pulp) at the center of the tooth. In older adults, exposed root surfaces are also at risk for decay. Tooth decay is a highly preventable disease with many contributing factors.
Who is at risk for tooth decay?
Everyone who has teeth is at risk for tooth decay. We all host bacteria in our mouths, which makes everyone a potential target for cavities. Factors that increase the risk for tooth decay include:
A diet high in sweets, refined carbohydrates, and sugars
Living in communities with limited or no fluoridated water supplies
Poor oral hygiene
Reduced salivary flow
Being a child
Being an older adult
Preventing tooth decay
Preventing tooth decay and cavities involves 6 simple steps:
Brush your teeth and tongue twice a day, for at least 2 minutes at a time with a fluoridated toothpaste.
Floss your teeth daily.
Eat a well-balanced diet and limit or eliminate sugary snacks.
Consult your healthcare provider or dentist about supplemental use of fluoride or dental sealants to protect family members' teeth through the age of 16.
Ask about fluoride varnish, which can be applied to teeth every 3 to 6 months.
Schedule routine dental cleanings and exams every 6 months for yourself and your family.
Credit: Francesco Mereghetti, background image: NASA, ESA and T.M. Brown (STScI)
As you probably know, stars like the Sun produce heat and light by fusing hydrogen into helium in their cores. When all the hydrogen is used up, the core contracts and, depending on how massive the star is, the core may become hot enough to fuse helium into carbon. But many stars are not massive enough to burn carbon; in that case the core shrinks to a certain minimum size and eventually becomes a "white dwarf" star after the outer layers of the star are blown away (in a process still not fully understood). The core (about the size of the Earth) doesn't shrink any further because it's held up by a quantum mechanical effect called electron degeneracy pressure. As shown by Subrahmanyan Chandrasekhar, there's a limiting mass that can be supported by the pressure of degenerate electrons. This Chandrasekhar limit is about 1.4 times the mass of the Sun. White dwarfs that grow larger than this limit catastrophically collapse, producing an enormous supernova explosion which can be seen to the edge of the Universe. These supernovae all have about the same intrinsic brightness and serve as standard candles which illuminate distance and time nearly back to the Big Bang. Now astronomers have identified one such system in our own Milky Way. This system, called HD49798, consists of an evolved, stripped-down star with a white dwarf companion. An artist's impression of the system is shown above, with the white dwarf (and its accretion disk) in the foreground. XMM-Newton observations of the system have measured X-ray pulsations from the white dwarf, and have measured an eclipse of the white dwarf going behind the companion star. These data allow astronomers to measure the mass of the white dwarf, and it turns out that the white dwarf weighs in at a whopping 1.3 solar masses, one of the most massive white dwarfs ever weighed.
It only needs to grab a little more matter from its companion to push it over the Chandrasekhar limit - when that happens the explosion will be spectacular and probably bright enough to be seen in the daytime.
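The roughly 1.4-solar-mass limit quoted above can be estimated from fundamental constants. The sketch below is our own illustration, not from the article: the prefactor of about 2.018 is the standard Lane-Emden constant for an n = 3 polytrope (quoted, not derived), and mu_e = 2 assumes a carbon-oxygen white dwarf.

```python
import math

# Rough estimate of the Chandrasekhar mass from fundamental constants.
hbar  = 1.0546e-34   # J s
c     = 2.9979e8     # m/s
G     = 6.674e-11    # m^3 kg^-1 s^-2
m_H   = 1.6726e-27   # kg, hydrogen mass
M_sun = 1.989e30     # kg
mu_e  = 2.0          # mean molecular weight per electron (C/O composition)
omega = 2.01824      # Lane-Emden constant for an n = 3 polytrope (quoted)

m_planck = math.sqrt(hbar * c / G)  # Planck mass, ~2.18e-8 kg
M_ch = (omega * math.sqrt(3 * math.pi) / 2) * m_planck**3 / (mu_e * m_H)**2

print(round(M_ch / M_sun, 2))  # about 1.43, matching the ~1.4 solar masses in the text
```

The estimate lands within a few percent of the quoted limit, which is why a 1.3-solar-mass white dwarf accreting from a companion is such an interesting candidate progenitor.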
Published: September 14, 2009
Page Author: Dr. Michael F. Corcoran
Last modified Sunday, 20-Sep-2009 07:38:32 EDT |
NASA's Kepler mission scientists have discovered a new planetary system that is home to the smallest planet yet found around a star similar to the Sun.
The planets are located in a system called Kepler-37, about 210 light-years from Earth in the constellation Lyra, the U.S. space agency said in a press release.
The smallest planet, Kepler-37b, is slightly larger than the Moon, measuring about one-third the size of Earth. It is smaller than Mercury, which made its detection a challenge. The Moon-size planet and its two companion planets were discovered by scientists with NASA's Kepler mission, which is assigned to find Earth-sized planets in or near the "habitable zone," the region in a planetary system where liquid water might exist on the surface of an orbiting planet.
"While the star in Kepler-37 may be similar to the Sun, the system appears quite unlike the solar system in which we live. Astronomers think Kepler-37b does not have an atmosphere and cannot support life as we know it. The tiny planet almost certainly is rocky in composition," the release said.
Kepler-37c, the closer neighboring planet, is slightly smaller than Venus, measuring almost three-quarters the size of Earth. Kepler-37d, the farther planet, is twice the size of Earth.
The first exoplanets found to orbit a normal star were giants. As technologies have advanced, smaller and smaller planets have been found, and Kepler has shown even Earth-size exoplanets are common. "Even Kepler can only detect such a tiny world around the brightest stars it observes," said Jack Lissauer, a planetary scientist at NASA's Ames Research Center in Moffett Field, California. "The fact we've discovered tiny Kepler-37b suggests such little planets are common, and more planetary wonders await as we continue to gather and analyze additional data," he added.
Kepler-37's host star belongs to the same class as the Sun, although it is slightly cooler and smaller. All three planets orbit the star at less than the distance Mercury is to the Sun, suggesting they are very hot, inhospitable worlds.
A "year" on these planets is very short. Kepler-37b orbits its host star every 13 days at less than one-third the distance Mercury is to the Sun. The other two planets, Kepler-37c and Kepler-37d, orbit their star every 21 and 40 days.
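Those period and distance figures hang together under Kepler's third law: at fixed stellar mass, the orbit's semi-major axis scales as the period to the 2/3 power. The quick check below is our own illustration, not from the article, and assumes Kepler-37's mass is close to the Sun's (the text says it is only slightly smaller), using Mercury's 88-day orbit as the yardstick.

```python
def semimajor_axis_ratio(period_days, reference_period_days=88.0):
    """Kepler's third law: a is proportional to P**(2/3) at fixed stellar mass.

    Returns the orbit size relative to the reference orbit (Mercury's by default).
    """
    return (period_days / reference_period_days) ** (2.0 / 3.0)

# Kepler-37b's 13-day orbit vs. Mercury's 88-day orbit
ratio_b = semimajor_axis_ratio(13.0)
print(round(ratio_b, 2))  # 0.28 -- "less than one-third the distance Mercury is to the Sun"
```

The same scaling puts the 21- and 40-day orbits of Kepler-37c and Kepler-37d at roughly 0.38 and 0.59 of Mercury's distance, consistent with all three planets orbiting well inside Mercury's orbit.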
"We uncovered a planet smaller than any in our solar system orbiting one of the few stars that is both bright and quiet, where signal detection was possible," said Thomas Barclay, Kepler scientist at the Bay Area Environmental Research Institute in Sonoma, California, and lead author of the new study published in the journal Nature.
The research team used data from NASA's Kepler space telescope, which simultaneously and continuously measures the brightness of more than 150,000 stars every 30 minutes.
Kepler is NASA's tenth Discovery Mission and was funded by NASA's Science Mission Directorate at the agency's headquarters in Washington.
by RTT Staff Writer
A NASA moon probe equipped with plastic that mimics living tissue is helping researchers learn how deep-space radiation may affect astronauts and electronics on future missions, researchers say.
These findings could lead to the development of leaner, more efficient spacecraft that are better at balancing radiation protection against weight, scientists added.
Potentially dangerous radiation pervades outer space, such as electrically charged particles from the sun and high-mass, high-energy cosmic rays known as HZE particles that emerge from deep space. Earth's atmosphere and magnetic field block about 99.9 percent of this radiation, protecting those of us on the planet's surface. [Stunning Photos of Solar Flares & Sun Storms]
"The atmosphere serves as just a big thick shield — the weight exerted by the atmosphere is equivalent to a column of mercury about 30 inches (76 centimeters) high, so you can think of the atmosphere as a huge slab of dense metal a yard thick," study lead author Mark Looper, a space radiation physicist at The Aerospace Corporation in El Segundo, Calif., told SPACE.com. "The magnetic field, in addition, shunts aside most of the radiation from Earth's surface."
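Looper's mercury-column figure converts directly into a shielding number. The quick calculation below is our own illustration, not from the article: multiplying the column height by mercury's density gives the atmosphere's mass per unit area, which can then be restated as an equivalent thickness of iron (one plausible "dense metal").

```python
# Column density of the atmosphere, from the 76 cm mercury-column figure.
HG_DENSITY = 13.6   # g/cm^3, mercury
FE_DENSITY = 7.87   # g/cm^3, iron

column_g_per_cm2 = 76 * HG_DENSITY            # overhead mass of air per unit area
iron_equiv_cm = column_g_per_cm2 / FE_DENSITY # same mass as a slab of iron this thick

print(round(column_g_per_cm2))  # 1034 g/cm^2
print(round(iron_equiv_cm))     # 131 cm -- a bit over a yard of iron
```

That works out to a bit over a yard of iron overhead, in line with Looper's rough "slab of dense metal a yard thick."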
To find out more about radiation hazards in space, Looper and his colleagues are relying on the Cosmic Ray Telescope for the Effects of Radiation instrument (CRaTER) aboard NASA's Lunar Reconnaissance Orbiter, which has been zipping around the moon at an altitude of about 30 miles (50 kilometers) since 2009.
CRaTER aims to measure not only radiation near the moon, but also the effects radiation has on sensitive materials such as human tissue or electronic parts that might absorb it behind shielding. The instrument uses sensors behind blocks of plastic designed to mimic the muscle tissue over a person's radiation-sensitive bone marrow.
"We've never had such tissue-equivalent plastics as part of a complex sensor in space before," Looper said.
The researchers found that although HZE particles only make up 1 percent or so of the radiation the telescope saw, "they made up close to half of the energy deposited by radiation," Looper said. "You get much more energy deposited by these heavies."
By looking with precision at the range of energies deposited by various sources of radiation, scientists can estimate the effects they might have. "It's like the difference between being hit with a bat or a bullet — different kinds of radiation may deposit the same amount of energy, but they distribute it differently," Looper said.
Altogether, these findings could help researchers optimize just how much shielding spacecraft need without making them too heavy for missions.
"The name of the game is risk management," Looper said. "To decide how much shielding you need, you need to be able to measure the effects. The more precision with which you can measure those effects, the less likely you are to add more shielding than you need, which is expensive and makes spacecraft harder to launch."
CRaTER also revealed radiation emerging from the moon — showers of protons blasted off the moon's surface by cosmic rays from deep space.
"Detection of these protons is a first, and we can build up a map of the moon from them that may help tell us where hydrogen-bearing materials such as water are on the lunar surface," Looper said.
In the future, "we can learn more about what effects solar radiation might have," Looper said.
The scientists detailed their findings online April 3 in the journal Space Weather.
By Ethan Nedeau
EVERY AUTUMN, American eels (Anguilla rostrata) descend the Gulf of Maine's rivers on a journey toward the warm waters of the Sargasso Sea to spawn and die, returning to a birthplace they left as many as 30 years earlier. We do not know their path nor do we understand how they navigate vast distances. The moon, stars, magnetism and an exceptional homing ability may guide them through the dark waters.
After spawning is completed and eggs hatch, billions of tiny larvae become entrained in the Gulf Stream, Florida Current and Antilles Current. The larvae, called leptocephali, are transparent and shaped like a willow leaf. They can orient themselves and change positions in the water column, but ocean currents dictate their path toward the continent. Some may get pushed southwest into the Gulf of Mexico and wander into estuaries of the Gulf Coast or Central America. Others will be whisked northward by the strong Gulf Stream, spin off into the Labrador Current and turn south toward Newfoundland and the Gulf of St. Lawrence, or north to southern Greenland. Every eel must be prepared to go where currents take it to warm tropical rivers, icy northern waters or anywhere in between.
Leptocephali metamorphose into a more recognizably eel-like juvenile form called glass eels. They are so named because they are transparent and lack pigmentation. Glass eels are stronger swimmers and as currents take them close to the continental shelf and coastal waters, they will swim toward the coast and seek out estuaries. Once in estuaries, they become pigmented and are known as elvers; by this time they are 60 to 130 millimeters (2.5 to 5 inches) long. They swim up tidal rivers during flood tides and retreat to the bottom as tides ebb. They forge upstream with each tidal cycle and gain strength until they can swim against strong currents. Elvers grow into yellow eels, the fourth stage of life.
Over the next few years they may disperse within a watershed, often traveling hundreds of miles in stops and starts, depending on migration barriers and habitat along the way. Some will remain in estuaries and tidal freshwater habitats; these tend to be male fish and may only stay for a few years before returning to the sea. Others have wanderlust and may spend the next 30 years in a watershed - ascending rivers, crossing lakes and pushing toward headwaters. Eels that swim far inland - especially in the north - are mostly female and may grow to over four feet long, far larger than males. Eels toward the northern end of the range are the oldest and largest of their kind and females may produce two or three times more eggs than females from the mid-Atlantic region. Eels prey on or scavenge aquatic invertebrates, amphibians and fish. In turn, large predators such as bass, lake trout, fish-eating birds and mammals may eat them.
When the Sargasso finally calls them back, yellow eels turn to a blackish-bronze color, their eyes enlarge, they fatten and develop a thicker skin and their digestive tract degenerates. They are then silver eels - the fifth and last stage of their lives. Eels are negatively phototactic - they avoid light - and usually wait for the darkest nights to descend rivers. On nights when clouds shroud the moon and rivers swell from autumn rains, silver eels finally begin to swim toward the sea. Then they swim deep below the ocean's surface where light does not penetrate.
The life cycle of an American eel
Eels may be declining throughout their range, but most scientists agree that we need much better data. Since all eels come from a single breeding population and disperse somewhat randomly to coastal watersheds, declines in some areas are thought to reflect a range-wide decline. In a 1994 review paper published in the Canadian Journal of Fisheries and Aquatic Sciences, scientists reported an 81-fold decline in yellow eel recruitment to Lake Ontario over an eight-year span, based on the number of fish passing an eel ladder at the R.H. Saunders Hydroelectric Dam on the St. Lawrence River. Research published in Fisheries magazine in 2000 found that seven of 16 long-term datasets reviewed showed a significant decrease in yellow eels and silver eels, and the other datasets showed no trend. In the United States, commercial landings for eels declined from a high of 1.8 million pounds in 1985 to a low of 649,000 pounds in 2002, according to a study released in 2004 by the Atlantic States Marine Fisheries Commission (ASMFC).
It is difficult to demonstrate a significant decline in the eel population. Scientists can count silver eels leaving a river and estimate the number of elvers that return, but know almost nothing about the steps in between. Eel recruitment (the number of juvenile eels returning to estuaries and rivers) can be cyclic and erratic and scientists do not understand long-term population dynamics well enough to put recent declines into perspective. Part of the problem is that, until recently, recruitment was not being measured at all in some areas. Commercial landings provided some evidence of a decline, but these data depend on market conditions and fishing intensity and may not necessarily reflect a declining population. Eels were just not on the radar screen of most fisheries management agencies until very recently, so agencies are now scrambling to develop more consistent monitoring programs for eels. ASMFC is spearheading this effort.
In 2004, due to continued population declines, the ASMFC recommended that the U.S. Fish and Wildlife Service (USFWS) and National Marine Fisheries Service consider designating the Atlantic coastal stock of American eels a species of concern. This action would prompt a status review to determine whether the species should be considered a candidate for federal listing under the Endangered Species Act.
Eels may indeed be experiencing a drastic decline, as some evidence suggests. The perceived eel decline may be due to natural cycles or influenced by factors far beyond the scope of local or regional management agencies. For example, scientists have demonstrated a strong correlation between European eel (Anguilla anguilla) recruitment and the North Atlantic Oscillation, sea-surface temperature anomalies and position of the Gulf Stream. If these hemisphere-scale factors strongly influence American eel recruitment, then we may need more realistic expectations about what local protection and restoration can accomplish.
However, there are clear indications of how we affect eels and how we could reduce mortality, giving us some sense of control. Juvenile eels congregate at the foot of dams; we have severed their paths between the ocean and watersheds. Killed, maimed and stunned eels swirl in eddies below hydropower dams after unwittingly passing through the only downstream path we have provided, through the whirring blades of turbines. Adult eels from many watersheds have cancers, vertebral malformations and high levels of chemical contaminants accumulated during their long residence in polluted waters. Commercial weirs, nets and traps capture elvers trying to swim upstream, yellow eels while in freshwater, and silver eels trying to return to the ocean. In some cases, most or all of the eels that enter a river may never return to the Sargasso Sea to spawn.
There are ways to protect and restore eels in a given river, but unfortunately, it might make little difference to the population as a whole. Restoring eels in a particular river does not ensure that elvers will return in ensuing years. This is the challenge of trying to manage a species with an enormous geographic range that randomly breeds in an unknown location in the middle of the ocean. Unlike salmon that return to the rivers where they were hatched, young eels do not return to the same rivers where their parents lived. We may be able to restore eels in the Ipswich River in Massachusetts, but how much does this river contribute to the entire spawning stock of the species?
Eels need access to more habitat, and though rivers like the Merrimack, Penobscot and Saint John may provide much more habitat than smaller rivers, all rivers are important. A 1998 USFWS report estimated that dams reduced potential eel habitat by 91 percent from Connecticut to Maine. The American eel population, wide-ranging and previously thought to be robust against local losses, may not be replacing itself. Every river is important because the collective effort to restore rivers represents a culture of environmental concern and because every river restored does add up to more eel habitat.
One of the best things we can do for eels is to allow them unrestricted access to coastal watersheds and safe passage back to the sea. Thus, we need to focus on man-made migration barriers, especially dams and culverts. With over 60 rivers, hundreds of small streams and perhaps more than 7,000 dams in the Gulf of Maine watershed, it is difficult to decide where to focus limited resources. Four factors may be important in selecting priorities:
1. Focus on hydropower dams that are up for relicensing, especially near the coast.
Properly designed fishways can allow eel migration. However, salmon or shad fishways typically are designed for strong adult fish that can swim very fast and ascend small drops. These fishways do not work for small juvenile eels that are weak swimmers and cannot leap. Water velocities in excess of their swimming speed may block migration, but eels are good climbers and may ascend vertical surfaces provided there is a wet, rough substrate for them to climb on.
A common fishway design that is specifically intended for elvers is a ramp with baffles and a climbing material, such as artificial mesh or bottlebrushes. The ramp can be installed on the face of, or adjacent to, a dam. Some eel “ladders” actually trap the elvers in a container that volunteers carry above the dam and release. Very few elver ladders have been installed on Gulf of Maine's rivers, but this could be an effective way of restoring eels and involving citizens in restoration projects.
Culverts are an important impediment to juvenile eels because they concentrate flow, create high water velocities that may exceed the swimming speed of eels and usually have a smooth surface that does not provide a flow refuge. Some culverts are elevated at one or both ends, and drops of only a few centimeters may be enough to block juvenile eels. Problem culverts should be replaced with adequately sized culverts with natural bottom habitat and hydraulic conditions that do not restrict eel movement.
Providing safe passage for ocean-bound eels is a different challenge than getting juveniles upstream. Dams do not necessarily hinder downstream migration, especially low-head dams where water flows over the top or through open gates. The main problem is hydroelectric dams where water passes through turbines. The most effective way of getting eels past a hydroelectric dam is to turn off the turbines or increase spill over a dam. Some hydropower companies have license articles that require them to turn off turbines at night during peak migration. Another effective way to get eels downstream is to remove dams altogether. Dam removal reconnects fragmented river systems, restores habitat for migratory and resident fish, restores natural flow regimes and may improve water quality.
The American eel may lack the charisma of other species whose habitat spans the ocean, tidewaters and headwaters of the Gulf of Maine watershed. We embrace Atlantic salmon, shad and sturgeon, but the American eel - snake-like and active on the darkest nights - receives a more reluctant welcome. Yet eels are one of the most interesting and poorly understood fish species in our region. They will test our ability and resolve to reconnect fragmented ecosystems for the benefit of all species, and to think locally and globally to protect our natural resources.
Ethan Nedeau is a science translator for the Gulf of Maine Council. He can be reached at [email protected].
When it comes to space travel, the heat is on. During each of their Earth-orbiting missions, NASA's four space shuttles are subjected to temperatures ranging from minus 250 degrees Fahrenheit in space to nearly 3000 degrees during reentry into the Earth's atmosphere. Thus the ability of the orbiter to take the heat is critical to ensuring the safety of the crew and on-board equipment. Because of this, the surface of the orbiter, which is covered by 24,000 protective thermal tiles, is carefully scrutinized after each flight. Typically, the process of assessing the quality of each and every one of the tiles is a painstaking manual effort. Fortunately for the workers in the trenches, there is a laser light at the end of the tedious tunnel.
Engineers from NASA's Ames Research Center and the Boeing Company recently delivered a first-of-its-kind, portable 3D laser scanner to NASA's Kennedy Space Center. The scanner, which uses a digital camera and lasers in a measurement technique called laser triangulation, is the first component of what will eventually become an Electronic Inspection and Mapping System (EIMS). The EIMS objective will be to increase the accuracy and reliability of shuttle damage estimates, and in so doing reduce vehicle turn-around time.
A flaw in one of the space shuttle's protective thermal tiles is detected using a new laser scanning device. Custom software generates a 3D image of the damage.
When placed over a thermal tile, the new scanner detects flaws within a 3- by 3-inch area and transmits the collected data to a laptop computer. Custom software locates and characterizes the damage and generates a 3D image that indicates the size and depth of the flaw. The results of each scan are then stored in a database containing all of the fabrication and maintenance information for every tile, so that the latest damage history and maintenance information for each of NASA's four shuttles is readily available and easily accessible.
At the heart of the 3D digitizing technology is its laser triangulation technique in which laser diodes project a line of light onto a target object. A digital camera embedded in the scanner detects the light reflected off the object and records the position of the reflected beam to determine object height measurements. As the camera and lasers move over the object, the position of the reflection on the camera detector changes. The sensor calculates the amount of change based on the new laser line position on the detector. The process is continued until a complete description is achieved.
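The geometry just described can be sketched in a few lines of code. This is an illustrative model, not NASA's actual software: with the laser tilted at a known angle to the camera's optical axis, a change in surface height shifts the imaged laser line sideways on the detector, and that shift converts back to a height once the pixel size, optical magnification, and laser angle are known. All of the numbers in the example are hypothetical.

```python
import math

def height_from_line_shift(pixel_shift, pixel_size_mm, magnification, laser_angle_deg):
    """Convert a laser-line shift on the detector into a surface height change.

    pixel_shift     -- how many pixels the imaged laser line moved
    pixel_size_mm   -- physical size of one detector pixel, in mm
    magnification   -- image size / object size for the camera optics
    laser_angle_deg -- angle between the laser and the camera's optical axis
    """
    shift_on_detector = pixel_shift * pixel_size_mm      # mm in the image plane
    shift_on_object = shift_on_detector / magnification  # back-projected to the tile
    return shift_on_object / math.tan(math.radians(laser_angle_deg))

# Hypothetical example: a 10-pixel shift with 5-micron pixels, 0.5x optics,
# and a 45-degree laser angle corresponds to a 0.1 mm deep feature.
depth_mm = height_from_line_shift(10, 0.005, 0.5, 45.0)
print(round(depth_mm, 6))  # 0.1
```

Sweeping the laser and camera across the tile and repeating this conversion at each position is what builds up the full 3D description of a flaw.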
NASA's new portable 3D laser scanner is small enough to get into tight spots to collect data, which it then transmits to a laptop computer for 3D analysis.
Although laser triangulation itself is not new, NASA engineers put a novel spin on the technique in order to compensate for one of its common drawbacks: a shadowing effect on scanned images caused by the way the laser has to be angled to the target. "We solved this problem by adding a second laser at a complementary angle to the first and combining the images from both lasers into the final dataset for the scan," says Joseph Lavelle, a senior project engineer at NASA Ames. The complete scan of a 3- by 3-inch area, including the flaw identification and measurement processes, takes less than 20 seconds.
By far, the most significant challenge the engineers faced in the design of the new scanner was getting it down to size. Because it had to be small enough to fit around the scaffolding that surrounds the shuttle orbiter during its post-landing maintenance, portability was a key design objective. "We started off with a very large bench-top system that weighed about 50 pounds," says Lavelle, which is typical for scanning devices capable of the speed and resolution of the new device. But thanks to advances in digital camera and laser diode technology, as well as "clever circuitry and custom packaging," the design team was able to shrink the scanner down to about 5 by 9 inches and approximately three pounds, including on-board batteries.
Although the device was designed specifically to measure the volume of surface flaws on the shuttle's thermal protection tiles, the underlying technology is expected to be generalized to suit countless other manufacturing applications in which scanner speed, accuracy, and portability are vital. To date, says Lavelle, "We've had inquiries from industries wanting a device to inspect the seals on their hardware, and from the integrated circuit industry needing a way to measure the depth and volume of components as well as the placement of components on printed circuit boards." He notes that plans are on tap for testing the scanner's suitability for those tasks.
The new scanning technology is still a work in progress, and as such there are a number of potential enhancements on the agenda. "We are currently addressing the difficulty of measuring flaws on significantly curved surfaces," says Lavelle. In addition, he notes, "We would like to be able to scan over a larger area than 3 by 3 inches, but this presents a major problem considering the size of the instrument and its ability to fit into small areas around the maintenance scaffolding encompassing the orbiters."
Flaw analysis tools characterize the damage to a given space shuttle tile based on its 3D scan information. The software locates the damage and generates a 3D image that indicates the size and depth of the flaw.
Toward this objective, the design team envisions the attachment of a scanner head to the end of a robot that scans over the entire orbiter surface. On the short-term "to do" list, planned enhancements include wireless communication between the scanner and the computer to eliminate the cable and the addition of rechargeable batteries and a more ergonomic case.
Diana Phillips Mahoney is chief technology editor of Computer Graphics World.
Hot Gas in the Galactic Center: a 130-light-year region at the center of the Milky Way. Located about 26,000 light years from Earth, the region contains a supermassive black hole, hot gas, and thousands of X-ray sources.
(NASA/CXC/UCLA/MIT/M.Muno et al.)
Caption: This X-ray image was produced by combining a dozen Chandra observations made of the central region of the Milky Way. The colors represent low (red), medium (green) and high (blue) energy X-rays. Chandra's unique resolving power has allowed astronomers to identify thousands of point-like X-ray sources due to neutron stars, black holes, white dwarfs, foreground stars, and background galaxies. What remains is a diffuse X-ray glow extending from the upper left to the lower right, along the direction of the disk of the Galaxy. The Chandra data indicate that the diffuse glow is a mixture of 10-million-degree Celsius gas and 100-million-degree gas. Shock waves from supernova explosions are the most likely explanation for heating the 10-million degree gas, but how the 100-million-degree gas is heated is a mystery.
Scale: Image is 16 arcmin per side
Chandra X-ray Observatory ACIS Image
Exploring Creation With Physical Science
This course provides a detailed introduction to our physical environment and the basic laws that make it work. The broad scope of this course includes the earth's atmosphere, hydrosphere, and lithosphere as well as weather, motion, Newton's Laws, gravity, the solar system, atomic structure, radiation, nuclear reactions, stars, and galaxies. This course is intended to precede high school biology and should be taken simultaneously with pre-algebra.
Special features include:
- Engaging, easy-to-perform experiments
- High-quality, full-color illustrations
- Access to additional online resources for more advanced learners
- Appendix of review questions
This course comes in 2 volumes: a hardcover student text that contains all student materials, study questions, photos, and illustrations and a softcover volume that contains answers to module study guides, tests, and test solutions.
Author: Dr. Jay Wile
Format: Hardcover student text and softcover teacher manual with solutions and tests
$85 for Set
$65 for Text
Full Course on 2 CD-ROMs: $65
Companion CD-ROM: $15
MP3 Audio CD: $15
Daniel Boone was a legendary American pioneer who helped explore and settle Kentucky. He gained fame as a trailblazer and hero of the American West.
Born in 1734 near Reading, Pennsylvania, Boone became skilled at hunting and trapping during his childhood. Around 1767 he began making trips into Kentucky, following a trail over the Appalachian Mountains. In 1775 Boone and a group of men built the Wilderness Road, running from eastern Virginia to Kentucky across the Cumberland Gap, a pass in the Appalachian Mountains. That same year Boone built the settlement known as Boonesboro on the Kentucky River. He defended this fort against Cherokee attacks many times.
From his youth, Boone had earned a reputation as a brave frontiersman who explored unsettled regions and saved himself from dangerous situations. According to legend, Boone was captured by Indians one night and taken to their camp. He was able to escape by dawn and cut three notches in a tree to mark the spot. Captured by Indians on another occasion, Boone and his companions were released when the pioneer tricked them into thinking he could swallow a knife. Another story says Boone killed a she-bear with his knife just as the animal was attacking him.
Daniel Boone's fame spread across the country during his lifetime, and his legend continued to grow after his death. In 1823 the English poet Lord Byron wrote about him in his masterpiece Don Juan. The Society of the Sons of Daniel Boone, which later became part of the Boy Scouts of America, was founded in 1905 to teach children about the outdoors.
Elastic is so ubiquitous today that we barely give it a second thought. Like paper clips and zippers, we simply expect it to work without ever wondering what it is, how it's made or what people did before it existed. Take the elastic waistband. In fact, fetch a pair of underwear (preferably clean) from your bedroom and give them a good once-over. You'll notice the familiar stretch of the band followed by the satisfying springing action as it returns to its original shape. It's like a rubber band, but not. When you put your hands on a rubber band, you touch, well, raw rubber. When you do the same with an elastic waistband, you touch fabric.
Believe it or not, the briefs and boxers so common today, equipped with elastic waistbands, weren't invented until the 1930s and 1940s. Before then, people had to find other mechanisms to hold their undergarments in place.
- First came the loincloth, made of leather, wool or linen.
- Then, in the Middle Ages, people slipped into trouser-like braies, lacing them to their waists and legs.
- Eventually, simple, adjustable underpants made of cotton, linen or silk replaced braies. These featured buttons in the front and cinch ties on the side.
- Union suits -- the union of top and bottom undergarments -- were also popular with men and women from the time of their invention in the late 19th century to the early 20th century. They buttoned up the front and had a rear flap known as the "access hatch."
- Finally, in the 1940s, manufacturers such as Hanes began replacing cinch ties and button yokes with elastic waistbands.
What took so long? Some of it was a sort of fashion inertia -- if it ain't broke, don't fix it -- but some of it was industrial necessity. Textile manufacturers either had to adapt their operations to produce elastic or find partners that could supply it economically. Either way, making elastic looked no different than making other woven fabrics. It required a loom, which was a machine that allowed lengthwise threads known as the warp to be interlaced with widthwise threads known as the weft. In normal woven fabric, those threads would consist of yarn derived from natural fibers, such as cotton or wool. But in elastic, strands of yarn were laced together with strands of natural or synthetic rubber.
Today, automated looms handle the weaving process, though the results are the same: a stretchy fabric that can be incorporated into an array of garments. So far, we've focused on the elastic waistbands found in boxers and briefs because they make a convenient example. But elastic finds its way into everything from bras and belts to suspenders and flex-waist trousers. Even the ever-handy shock cord, or bungee, begins its life in a textile manufacturing plant.
Cut into any of these stretchable items, and you'll find one common element: fine rubber threads or thick rubber bands just like the ones you use in your office or kitchen. Interestingly, rubber bands are not ancient inventions. Like the waistbands that contain them, rubber bands are a snappy, modern success story.
Genetic engineering, also called genetic modification, is the direct human manipulation of an organism's genome using modern DNA technology. It involves the introduction of foreign DNA or synthetic genes into the organism of interest. The introduction of new DNA does not require the use of classical genetic methods.
An organism that is generated through the introduction of recombinant DNA is considered to be a genetically modified organism. The first organisms genetically engineered were bacteria in 1973 and then mice in 1974. Insulin-producing bacteria were commercialized in 1982, and genetically modified food has been sold since 1994.
The most common form of genetic engineering involves the insertion of new genetic material at an unspecified location in the host genome. This is accomplished by isolating and copying the genetic material of interest using molecular cloning methods to generate a DNA sequence containing the required genetic elements for expression, and then inserting this construct into the host organism. Other forms of genetic engineering include gene targeting and knocking out specific genes via engineered nucleases such as zinc finger nucleases or engineered homing endonucleases.
Genetic engineering techniques have been applied in numerous fields including research, biotechnology, and medicine. Medicines such as insulin and human growth hormone are now produced in bacteria; experimental mice such as the oncomouse and the knockout mouse are being used for research purposes; and insect-resistant and/or herbicide-tolerant crops have been commercialized. Genetically engineered plants and animals capable of producing biotechnology drugs more cheaply than current methods are also being developed, and in 2009 the FDA approved the sale of the pharmaceutical protein antithrombin, produced in the milk of genetically engineered goats.
Genetic engineering also has applications beyond medicine and agriculture, including materials science.
Mathematics
Mathematics is the science of patterns and relationships. It is the language and logic of our technological world. Mathematical power is the ability to explore, conjecture, reason logically, and use a variety of mathematical methods effectively to solve problems.
Whether you are making or giving change, calculating the numbers of days until your birthday, or figuring out your grade, mathematics is a constant and important part of our daily lives. The ultimate goal of the math program in Dearborn is for all students to develop the mathematical power to participate effectively as citizens and workers in our contemporary world.
The Everyday Math series is used in Dearborn elementary schools. Pre-algebra is introduced beginning in sixth grade, and students continue through a high school course of study that incorporates Algebra I, Geometry, and Algebra II. Additionally, students can opt to take higher level electives, including Physics, Statistics, and Advanced Placement classes, either at their home buildings or through intensive programs offered at the Dearborn Math, Science, and Technology Center.
Secondary math information coming soon.
Science
Science and its applications play a significant role in our everyday lives, from the challenge of developing vaccines to finding alternative energy sources to exploring Mars. In Dearborn, the ultimate goals of the science program are for all students to understand their surroundings and comprehend and appreciate the relationships within these surroundings. During the next decade, demand in the United States for scientists and engineers is expected to increase at more than double the rate for all other occupations. The Dearborn Public Schools science program will prepare students to meet the challenges of this ever-changing future.
In Dearborn, in addition to thematic units in grades kindergarten through seven, students study Earth Science in 8th grade, Biology in 9th, and can then opt for either Physics or Chemistry as their required third year of science. Additionally, students can choose a variety of elective classes, offered at all three high schools and at the Dearborn Center for Math, Science and Technology, including Forensic Science, classes on the environment, and Advanced Placement selections.
Our curriculum is aligned with the Michigan Benchmarks and Standards as mandated by the State. Further information can be found on their website: Michigan Department of Education
Social Studies
Social studies prepares young people to become responsible citizens. In Dearborn, our social studies program of instruction and assessment incorporates methods of inquiry, involves public discourse and decision making, and provides opportunities for citizen involvement. Each year, students receive instruction that allows them to think and act as historians, geographers, political scientists, and economists. Students are also taught to respect and reflect core democratic values in their daily activities.
Each grade focuses on specific content: Kindergarten, Myself and Others; 1st grade, Families and Schools; 2nd grade, the Local Community; 3rd grade, Michigan Studies; 4th grade, United States Studies; 5th grade, Integrated US History (through 1791); 6th grade, Western Hemisphere Studies; 7th grade, Eastern Hemisphere Studies; 8th grade, Integrated US History (through 1898); 9th grade, World History and Geography; 10th grade, US History (through modern day); 11th grade, Economics (one semester) & Civics (one semester). Students can also choose high school elective classes in Psychology, Sociology, and Advanced Placement selections.
Language Arts
Dearborn Public Schools provides instruction for students in grades PK-12 under the Balanced Literacy Framework. Teachers at the elementary level instruct during Readers and Writers Workshop each day. The Daily 5/CAFÉ instructional framework provides opportunities for students to increase their stamina and develop independence during time spent in daily reading and writing. The Daily 5 and CAFE model of instruction was implemented in Dearborn during the 2010-2011 school year (Boushey and Moser, 2006, 2009). The Daily 5s are: Reading to Self, Work on Writing, Read to Someone, Listen to Reading, and Word Work.
CAFÉ is an acronym for Comprehension, Accuracy, Fluency and Expand Vocabulary. The teacher and student select student goals in the appropriate category and the teacher provides strategies for the students to work on. Small group instruction occurs in strategy groups of no more than three students.
The sixth grade implemented Daily 5/CAFÉ the second semester of the 2011-2012 school year. The strategies are incorporated into the 6th grade classrooms where students continue focusing on core Café goals to improve reading achievement.
At the secondary level, the district is implementing Readers Apprenticeship into content area classes to develop literacy skills and engage students with challenging texts. Readers Apprenticeship is a framework of reading strategies that engage students in a range of classroom routines that motivate students and meet content area learning goals.
The framework relies on four interacting dimensions that support reading development: Social, Personal, Cognitive and Knowledge-Building (Fielding, et al 2003).
The 6+1 Traits of Writing framework (Culver, 2002, 2006) provides the structure for teachers to teach writing in grades K-12. The six traits are: Ideas, Organization, Voice, Word Choice, Sentence Fluency and Conventions. The “+1” is Presentation. Teachers use the Writers Workshop instructional model to provide mini-lessons on process writing.
Students move through the stages of brainstorming ideas, drafting, revising, editing and publishing.
World Languages
Credit for two years of the same world language is required for high school graduation beginning with the class of 2016. The high schools offer Arabic, Spanish, French, German, and American Sign Language based on student interest. Mandarin Chinese will be offered beginning in 2012-2013. Middle schools will offer Arabic, Spanish, and/or French based on student interest.
World Language is available at several elementary schools beginning in kindergarten. Instruction in Arabic is provided at Becker Elementary, Henry Ford Elementary, William Ford Elementary, MacDonald Elementary, Miller Elementary and River Oaks Elementary schools. William Ford and Miller Elementary schools were the recipients of a world language grant. Arabic instruction is provided to students in kindergarten through fifth grade in these two schools.
World Language was implemented in selected schools beginning the 2011-2012 school year on a twice weekly basis for kindergarten through third grade students. Fourth grade students will begin receiving twice weekly content area support in the second year of implementation and fifth grade students will begin twice weekly content area support in the third year of implementation. All schools beginning new implementation will follow the same cycle of grade-level implementation.
Spanning countries across the globe, the antinuclear movement was the combined effort of millions of people to challenge the superpowers’ reliance on nuclear weapons during the Cold War. Encompassing an array of tactics, from radical dissent to public protest to opposition within the government, this movement succeeded in constraining the arms race and helping to make the use of nuclear weapons politically unacceptable. Antinuclear activists were critical to the establishment of arms control treaties, although they failed to achieve the abolition of nuclear weapons, as anticommunists, national security officials, and proponents of nuclear deterrence within the United States and Soviet Union actively opposed the movement. Opposition to nuclear weapons evolved in tandem with the Cold War and the arms race, leading to a rapid decline in antinuclear activism after the Cold War ended.
From its inception as a nation in 1789, the United States has engaged in an environmental diplomacy that has included attempts to gain control of resources, as well as formal diplomatic efforts to regulate the use of resources shared with other nations and peoples. American environmental diplomacy has sought to gain control of natural resources, to conserve those resources for the future, and to protect environmental amenities from destruction. As an acquirer of natural resources, the United States has focused on arable land as well as on ocean fisheries, although around 1900, the focus on ocean fisheries turned into a desire to conserve marine resources from unregulated harvesting.
The main 20th-century U.S. goal was to extend beyond its borders its Progressive-era desire to utilize resources efficiently, meaning the greatest good for the greatest number for the longest time. For most of the 20th century, the United States was the leader in promoting global environmental protection through the best science, especially emphasizing wildlife. Near the end of the century, U.S. government science policy was increasingly out of step with global environmental thinking, and the United States often found itself on the outside. Most notably, the attempts to address climate change moved ahead with almost every country in the world except the United States.
While a few monographs focus squarely on environmental diplomacy, it is safe to say that historians have not come close to tapping the potential of the intersection of the environmental and diplomatic history of the United States.
Evidence Shows Earlier Human Arrival in Americas
Researchers discovered the 14,000-year-old DNA in pieces of dried feces — called coprolites — unearthed in a cave in Oregon, according to a study published this week in the journal Science.
“This is the earliest direct evidence of a human presence in the Americas,” Eske Willerslev, director of the University of Copenhagen’s Center for Ancient Genetics and one of the study’s authors, told the Boston Globe.
Until about a decade ago, scientists thought that the earliest humans in the Americas arrived via a land bridge from Asia around 13,000 years ago. That bridge, which was uncovered during the last ice age, disappeared when the climate warmed and sea level rose again.
The humans, called the “Clovis people” after a site in New Mexico where they were first found, spread throughout North America — their remains, with their trademark stone spearheads, have been found throughout the continent.
Some scientists, however, have been puzzled by how quickly the Clovis people appear to have spread throughout North America. The new findings may help explain it, by providing evidence that humans reached the continent much earlier than previously thought.
The finding also lends weight to the theory that early North Americans might have accessed different parts of the continent via water routes — taking boats down the Pacific coast — because if humans arrived in North America during the ice age, overland routes through what is now Canada and the Northern United States would have been blocked by glaciers.
Other recent archeological excavations, including one in Monte Verde, Chile, have also provided evidence that pre-Clovis humans may have inhabited the Americas. But the new study is the first to find actual human DNA.
Human excrement itself doesn’t contain any DNA, but it does contain DNA-holding flakes of tissue from the intestine.
“People shed gut tissue just like they shed skin flakes,” lead author M. Thomas Gilbert told the Boston Globe.
The researchers used two methods to analyze the DNA. Radiocarbon dating put its age at more than 14,000 years old, and an analysis of the mitochondrial DNA — a part of the DNA that is passed through the maternal line — suggested that the humans descended from people who came from Northeast Asia.
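The radiocarbon figure quoted above rests on the exponential decay of carbon-14. As a purely illustrative sketch (not part of the study; it assumes simple decay with the standard 5,730-year half-life and a made-up measured fraction), the age calculation looks like this:

```python
import math

# Half-life of carbon-14 in years (the commonly cited "Cambridge" value).
C14_HALF_LIFE = 5730.0

def radiocarbon_age(fraction_remaining):
    """Estimate an age in years from the fraction of C-14 remaining,
    assuming exponential decay: N(t) = N0 * (1/2) ** (t / half_life)."""
    return -(C14_HALF_LIFE / math.log(2)) * math.log(fraction_remaining)

# A sample retaining about 18.4% of its original C-14 works out to
# roughly 14,000 years, the age range reported for the coprolites.
age = radiocarbon_age(0.184)
```

In practice, laboratories also calibrate such raw ages against tree-ring and other records, so published dates are not this simple calculation alone.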
Some scientists have cautioned that the feces could have been from dogs rather than humans, or could have been contaminated by DNA from later humans.
But most researchers in the field are convinced.
University of California, Davis anthropologist David Glenn Smith says the work was “a carefully designed and comprehensive study.”
“I am convinced of [the researchers'] evidence for the pre-Clovis origins,” he told the San Francisco Chronicle.
And, according to study author and archeologist Dennis Jenkins, of the University of Oregon, what matters is that the feces contains human DNA, not that it was human feces.
“Whether the coprolites are human or canine is irrelevant, since for a canine to swallow human hair, people had to be present,” he told the Boston Globe. “Any way you cut the poop, people would have been present at the site.”
5E Lesson Plan Model
Many of my science lessons are based upon and taught using the 5E lesson plan model: Engage, Explore, Explain, Elaborate, and Evaluate. This lesson plan model allows me to incorporate a variety of learning opportunities and strategies for students. With multiple learning experiences, students can gain new ideas, demonstrate thinking, draw conclusions, develop critical thinking skills, and interact with peers through discussions and hands-on activities. With each stage in this lesson model, I select strategies that will serve students best for the concepts and content being delivered to them. These strategies were selected for this lesson to facilitate peer discussions, participation in a group activity, reflective learning practices, and accountability for learning.
The Out of This World-A Journey Through Our Solar System unit focuses on students recognizing that Earth is a part of the “solar system” that includes the sun, planets, moons, and stars and is the third planet from the sun. Through models, investigations, graphing, and computer simulations, students learn that Earth revolves around the sun in a year’s time, and rotates on its axis once approximately every 24 hours. They make connections between the rotation of the earth and day/night, and the apparent movement of the sun, moon, and stars across the sky, as well as changes that occur in the observable shape of the moon over a month. The unit wraps up as students learn about the brightness of stars, patterns they create in the sky, and why some stars and constellations can only be seen at certain times of the year.
This lesson takes place over a two day period.
In the lesson, Tracking Time With Shadows, I begin by covering the clock to bring meaning to our discussion on how people knew what time it was before clocks. I introduce the term shadow, and students talk about what they know about shadows and then practice making some with flashlights. This leads to a discussion on how shadows can help identify the time of day and why they change throughout the day. Class moves outside, where students take part in observing, measuring, noting the time, and marking the position of their shadow to develop an understanding of the apparent motion of the sun and the passing of time. Students keep track of the data, then graph and analyze it. They use it as evidence to write an evidence-based argument as to why the sun appears to move across the sky.
Next Generation Science Standards
This lesson will address and support future lessons on the following NGSS Standard(s):
Students are engaged in the following scientific and engineering practices
5.) Using Mathematics and Computational Thinking: Students use their shadow-length and time data and organize them on a line graph to reveal patterns that suggest the sun is moving. They use this graph to construct an evidence-based argument that Earth's rotation in relation to the sun's position is what causes the sun to appear to move.
The Tracking Time With Shadows lesson will correlate to other interdisciplinary areas. These Crosscutting Concepts include:
1.) Patterns: Students use their own shadow models to reveal pattern changes related to time and use these patterns to explain why they repeat daily.
2.) Cause and Effect: Students conduct an investigation to determine the effect of Earth's rotation in relation to the sun's position to explain why the sun appears to move across the sky.
Disciplinary Core Ideas within this lesson include:
ESS1.A: The Universe and its Stars
ESS1.B: Earth and the Solar System
Classroom Management Considerations
Importance of Modeling to Develop Student Responsibility, Accountability, and Independence
Depending upon the time of year this lesson is taught, teachers should consider modeling how groups should work together and establishing group norms for activities, class discussions, and partner talks. In addition, it is important to model think-aloud strategies. This sets up students to be more expressive and develop thinking skills during an activity. The first half of the year, I model what group work and/or talks “look like and sound like.” I intervene the moment students are off task with reminders and redirection. By the second half of the year, I am able to ask students, “Who can give three reminders for group activities to be successful? Who can tell us two reminders for partner talks?” Students take responsibility for becoming successful learners. Again, before teaching this lesson, consider the time of year; it may be necessary to do a lot of front-loading to get students to eventually become more independent and transition through the lessons in a timely manner.
EXPLORE TEAMS (Pre-Set)
For time management purposes, I use “lab rats,” where each student has a number (1, 2, 3, or 4) on the back of his or her chair (students sit in groups of 4) and displayed on the board. For each activity I use lab rats, I switch up the roles randomly so students experience different task responsibilities, which include: Director, Materials Manager, Reporter, and Technician. It makes for smooth transitions and efficiency for set-up, work, and clean-up.
Activating Prior Knowledge:
(For the beginning of this lesson, I cover the clock in the classroom.)
I begin by asking students: "Does anyone know what time it is?" I look to see who glances at the clock and realizes it is covered. Then I ask, "How do you think people knew what time it was before clocks were invented?" We have a brief discussion on their thoughts.
I share that before there were clocks people used shadows to tell time. I ask them to think about the term shadow and what they know about shadows. They turn and talk within their groups. Then I ask groups to share what they discussed. As I listen to ideas, I post them on the board. Then I ask, "How are shadows created?" I am looking for students to identify that light and an object are needed to create shadows.
Connecting their Ideas
To help students make this concrete, I hand out a task card, large white paper, a cup, and a flashlight. They take part in holding the flashlight in various positions to observe the shadow made by the cup on the paper. I want them to start developing a sense that the position of light on an object affects a shadow's length and size.
After some time, we discuss the shadows they made and how the amount of light impacted the size of the shadows.
Casting Shadows And Measuring Them in the Sun
I tell students we are going outside to begin measuring our own shadows. I explain our task today is to become familiar with the length a shadow makes at certain times of the day. We want to discover how the position of the sun at certain times of the day affects a shadow's size.
I pair each student up. Each pair receives a tracking shadows recording sheet, compass, measuring tape, and chalk. I review their task and expectations.
Once we are outdoors, I demonstrate how to accurately trace and measure a shadow. I instruct students to find an area to work in (I make sure they are spread out enough for working space purposes). Next, they use the compass to locate north and indicate it with a mark. Since we live in the Northern Hemisphere, where the sun appears to rise in the east and set in the west, marking north will give students a clear reading of the sun's apparent movement across the sky.
Then, one student stands facing toward the North while the other student traces the shadow, measures its length, and writes the time on their tracking shadows recording sheet. They swap spots and repeat.
We repeat this process three more times over the week, at different times of the day. (We have a rotating four-day schedule, so each day, each class sees me at a different time.) Each time, they stand back in the exact spot, trace and measure the shadow, and record the time. They note the position of the sun in the sky: is it on their left, above them, or on their right? I want them to start thinking about how the position of the sun has changed since the first measurement and how it impacts the length of this new measurement.
Analyzing Our Data
We engage in a discussion on why shadow lengths change at different times of the day. I ask them questions to create discussion.
We talk about how the length of their shadows is related to the sun's position in the sky. I explain that their position never changed: each time they cast their shadow, they stood in the same spot, facing the same direction. It was the sun's position that appeared to change throughout the day.
Evidence Based Analysis
Students use their data to answer questions related to their outdoor shadow activity and graph. I use these questions to help them summarize and demonstrate their understanding of how the Earth's rotation causes the sun to appear to move across the sky and accounts for the passing of time.
If time runs out, students finish them for homework. I collect them and use them as a formative assessment.
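For teachers who want to preview the pattern students should see in their graphs, a simple geometric model predicts shadow length from the sun's elevation. This is only a sketch: the student height and the sun-elevation angles below are hypothetical, not measured values.

```python
import math

STUDENT_HEIGHT_CM = 140  # hypothetical student height

# Hypothetical sun elevation angles (degrees) at four measurement times.
elevations = {"9:00": 25, "11:00": 45, "1:00": 55, "3:00": 35}

def shadow_length(height_cm, elevation_deg):
    # Basic trigonometry: shadow length = height / tan(sun elevation).
    return height_cm / math.tan(math.radians(elevation_deg))

for time, elev in elevations.items():
    print(f"{time}: about {shadow_length(STUDENT_HEIGHT_CM, elev):.0f} cm")
```

With these numbers, the shadow is shortest when the sun is highest (near solar noon) and lengthens toward morning and afternoon, the same U-shaped pattern students should see emerge on their line graphs.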
It is essential for filmmakers to understand story structure, dramaturgical principles, and the background of both artistic and technical choices. That’s what this course is all about.
Via in-class film screenings and follow-up discussions and presentations, this class analyzes different film structures and the trend-setting elements of cinema. Students learn how to convey crucial information visually and how to enhance drama through the structure of cause and effect. They also learn what makes scenes and movies credible and complex.
Students get to know how a cinematographer augments the vision of the director and how a story is told by using images without dialogue. The different cinematographic styles and tasks are illustrated through specific examples.
By analyzing film rhythm, students will learn about the editing techniques that best match different dramatic structures. The class will see examples of dramatic montage, parallel editing, and excitement-increasing editing.
When examining sound and music, the focus is on their dramatic power and their role in storytelling and character formation.
Studying elephants' long-distance communications
Stanford biologist Caitlin O’Connell-Rodwell and her colleagues conducted an elaborate elephant communication experiment in Namibia’s Etosha National Park in July 2004. Their findings suggest that elephants produce powerful, low frequency vocalizations that travel through the ground. These seismic signals might be used to find distant mates or identify potential predators.
In this video, Caitlin and husband Tim Rodwell demonstrate how wild elephants react when acoustic recordings are converted into seismic vibrations—a daytime experiment in which the sounds of frightened elephants are played back underground, and a nighttime test using a synthesized warble tone.
Archaeologists in Poland believe they have found a vampire grave near the town of Gliwice in southern Poland. The skeletons were found with their heads removed and placed between their legs — a ritualistic practice designed to keep the dead from rising up.
In addition, the remains were found with no jewelry, belt buckles, or anything that would indicate a traditional burial. Rituals like these were commonly practiced by Slavic peoples — what Poles call “antywampiryczny” — in the decades following the adoption of Christianity by pagan tribes.
Another possibility is that this was just a standard execution. The victims were hanged and simply left on the gallows until they naturally fell down.
The skeletons are believed to date from around the 16th or 17th centuries.
Indeed, vampire graves are not so uncommon. Archaeologists recently exhumed suspected vampire skeletons in Bulgaria — including iron rods thrust through the chest. Incredibly, ritual staking can be traced as far back as 1,500 years ago. Going even further back in time, archaeologists recently found a 4,000-year-old grave in the Czech Republic in which the skeleton had been weighed down at the head and chest by two large stones. The people of 8th-century Ireland curbed zombie uprisings by ramming rocks into the mouths of their dead.
National Geographic offers some further insights:
Most archaeologists now think that a belief in vampires arose from common misunderstandings about diseases such as tuberculosis, and from a lack of knowledge about the process of decomposition.
Although most 19th-century Americans and Europeans were familiar with changes in the human body immediately following death, they rarely observed what happened in the grave during the following weeks and months.
For one thing, rigor mortis eventually disappears, resulting in flexible limbs. For another, the gastrointestinal tract begins to decay, producing a dark fluid that could be easily mistaken for fresh blood during exhumation—creating the appearance of a postprandial vampire.
Date: Summer 2012
Why do engineers often use superheated steam to transfer heat?
There are several reasons:
1. It has high heat content (and heat content is tunable with pressure)
2. It is made of water, which is cheap and readily available (and other than being hot, is pretty safe stuff)
3. As a gas, it is easy (and cheap) to pump compared with other heat transfer fluids
4. As a vapor, it readily transfers heat by conduction and convection
Hope this helps,
Engineers use superheated steam to transfer heat because it is the most efficient means to do so.
Some nuclear reactor designs use liquid sodium to transfer heat, but that is a much more hazardous (corrosive) and expensive process.
We can go down the list of other heat transfer mediums, but steam always comes out to be the most efficient.
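The point about high heat content is easy to see with a rough back-of-envelope comparison. The numbers below are approximate textbook values (the 50 K cooling allowance and the oil heat capacity are assumptions for illustration; exact figures vary with pressure and temperature):

```python
# Rough comparison of heat delivered per kilogram of working fluid.
# All values are approximate textbook numbers, not measured data.
LATENT_HEAT_STEAM = 2257.0   # kJ/kg released when steam condenses near 100 C
CP_WATER = 4.18              # kJ/(kg*K), liquid water
CP_THERMAL_OIL = 2.0         # kJ/(kg*K), a typical heat-transfer oil

delta_T = 50.0  # assume the fluid cools by 50 K at the load

# Condensing steam gives up its latent heat plus some sensible heat:
heat_steam = LATENT_HEAT_STEAM + CP_WATER * delta_T   # roughly 2466 kJ/kg
# Hot water or oil can only give up sensible heat:
heat_water = CP_WATER * delta_T                       # roughly 209 kJ/kg
heat_oil = CP_THERMAL_OIL * delta_T                   # roughly 100 kJ/kg

print(f"steam: {heat_steam:.0f} kJ/kg, water: {heat_water:.0f} kJ/kg, "
      f"oil: {heat_oil:.0f} kJ/kg")
```

The order-of-magnitude gap — condensing steam delivers roughly ten times the heat per kilogram of a merely hot liquid — is why far less fluid needs to be pumped.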
Update: November 2011
Temples existed from the beginning of Egyptian history, and at the height of the civilization were present in most of its towns. They included both mortuary temples to serve the spirits of deceased pharaohs and temples dedicated to patron gods, although the distinction was blurred because divinity and kingship were so closely intertwined. The temples were not primarily intended as places for worship by the general populace, and the common people had a complex set of religious practices of their own. Instead, the state-run temples served as houses for the gods, in which physical images which served as their intermediaries were cared for and provided with offerings. This service was believed to be necessary to sustain the gods, so that they could in turn maintain the universe itself. Thus, temples were central to Egyptian society, and vast resources were devoted to their upkeep, including both donations from the monarchy and large estates of their own. Pharaohs often expanded them as part of their obligation to honor the gods, so that many temples grew to enormous size. However, not all gods had temples dedicated to them, as many gods who were important in official theology received only minimal worship, and many household gods were the focus of popular veneration rather than temple ritual.
The earliest Egyptian temples were small, impermanent structures, but through the Old and Middle Kingdoms their designs grew more elaborate, and they were increasingly built out of stone. In the New Kingdom, a basic temple layout emerged, which had evolved from common elements in Old and Middle Kingdom temples. With variations, this plan was used for most of the temples built from then on, and most of those that survive today adhere to it. In this standard plan, the temple was built along a central processional way that led through a series of courts and halls to the sanctuary, which held a statue of the temple’s god. Access to this most sacred part of the temple was restricted to the pharaoh and the highest-ranking priests. The journey from the temple entrance to the sanctuary was seen as a journey from the human world to the divine realm, a point emphasized by the complex mythological symbolism present in temple architecture. Well beyond the temple building proper was the outermost wall. In the space between the two lay many subsidiary buildings, including workshops and storage areas to supply the temple’s needs, and the library where the temple’s sacred writings and mundane records were kept, and which also served as a center of learning on a multitude of subjects.
Theoretically it was the duty of the pharaoh to carry out temple rituals, as he was Egypt’s official representative to the gods. In reality, ritual duties were almost always carried out by priests. During the Old and Middle Kingdoms, there was no separate class of priests; instead, many government officials served in this capacity for several months out of the year before returning to their secular duties. Only in the New Kingdom did professional priesthood become widespread, although most lower-ranking priests were still part-time. All were still employed by the state, and the pharaoh had final say in their appointments. However, as the wealth of the temples grew, the influence of their priesthoods increased, until it rivaled that of the pharaoh. In the political fragmentation of the Third Intermediate Period, the high priests of Amun at Karnak even became the effective rulers of Upper Egypt. The temple staff also included many people other than priests, such as musicians and chanters in temple ceremonies. Outside the temple were artisans and other laborers who helped supply the temple’s needs, as well as farmers who worked on temple estates. All were paid with portions of the temple’s income. Large temples were therefore very important centers of economic activity, sometimes employing thousands of people.
Associated LOST Characters
The Temple (Others’ Sanctuary)
First Seen: 6×02 – “LA X, Part 2” | Featured: 6×06 – “Sundown”
The Temple is a sanctuary in the Island’s Dark Territory where many of the Others used to live. A massive, crumbling wall half a mile away surrounds it, and an extensive network of ancient tunnels and chambers exists beneath it. One particular chamber has a connection with the Monster. The Man in Black attacked the temple in his smoke monster form shortly before his death, killing all inside who did not join him. (“Sundown”) The Temple has subsequently been abandoned.
What is Williams syndrome?
Williams syndrome is a rare genetic disorder that affects a child's growth, physical appearance, and cognitive development. People who have Williams syndrome are missing genetic material from chromosome 7, including the gene elastin. This gene's protein product gives blood vessels the stretchiness and strength required to withstand a lifetime of use. The elastin protein is made only during embryonic development and childhood, when blood vessels are formed. Because they lack the elastin protein, people with Williams syndrome have disorders of the circulatory system and heart.
How do people get Williams syndrome?
Williams syndrome is caused by a deletion, which results from a break in the DNA molecule that makes up a chromosome. In most cases, the chromosome break occurs while the sperm or egg cell (the male or female gamete) is developing. When this gamete takes part in fertilization, the child will develop Williams syndrome. The parent, however, does not have the break in any other cells and does not have the syndrome. In fact, the break is usually such a rare event that it is very unlikely to happen again if the parent has another child.
It is possible for a child to inherit a broken chromosome from a parent who has the disorder. But this is rare because most people with Williams syndrome do not have children.
What are the symptoms of Williams syndrome?
The most common symptoms of Williams syndrome are intellectual disability, heart defects, and unusual facial features (small upturned nose, wide mouth, full lips, small chin, widely spaced teeth).
Other symptoms include low birth weight, failure to gain weight appropriately, kidney abnormalities, and low muscle tone.
People with this syndrome also exhibit characteristic behaviors, such as hypersensitivity to loud noises and an outgoing personality.
How do doctors diagnose Williams syndrome?
Doctors can identify the syndrome by its distinctive physical characteristics. They can confirm the diagnosis by using a special technique called FISH (fluorescent in situ hybridization).
The chromosomal deletion that causes Williams syndrome is so small that it cannot be seen in a karyotype. The deletion can be observed, however, with FISH. This technique allows DNA sequences to be labeled with a fluorescent chemical (called a probe) that lights up when exposed to ultraviolet (UV) light. The Williams syndrome deletion can be detected by labeling the elastin gene with a fluorescent probe. The gene will light up under a UV light only if it is present; a lack of signal indicates a deletion.
How is Williams syndrome treated?
There is no cure for Williams syndrome. Patients must be continually monitored and treated for symptoms throughout their lives.
Interesting facts about Williams syndrome
One out of every 10,000 babies is born with Williams syndrome.
Williams syndrome is considered a microdeletion syndrome, because the deletion is too small to be seen with a microscope (fewer than 5 million bases of DNA are deleted).
Deletions that happen during egg and sperm formation are caused by unequal recombination. Recombination normally occurs between pairs of chromosomes during meiosis. If the pairs of chromosomes don't line up correctly, or if the chromosome breaks aren't repaired properly, the structure of the chromosome can be altered. Unequal recombination occurs more often than usual at this location on chromosome 7, likely due to some highly repetitive DNA sequence that flanks the commonly deleted region.
Python descriptors have been around a long time—they were introduced way back in Python 2.2. But they're still not widely understood or used. This article shows how to create descriptors and presents three examples of use. All three examples run under Python 3.0, although the first two can be back-ported to any Python version from 2.2 onward simply by changing each class definition to inherit from object, and by replacing uses of str.format() with the % string formatting operator. The third example is more advanced, combining descriptors with class decorators—the latter introduced with Python 2.6 and 3.0—to produce a uniquely powerful effect.
What Are Descriptors?
A descriptor is a class that implements one or more of the special methods, __get__(), __set__(), and __delete__(). Descriptor instances are used to represent the attributes of other classes. For example, if we had a descriptor class called MyDescriptor, we might define a class that used it like this:
class MyClass:
    a = MyDescriptor("a")
    b = MyDescriptor("b")
(In Python 2.x versions, we would write class MyClass(object): to make the class a new-style class.) The MyClass class now has two instance variables, accessible as self.a and self.b in MyClass objects. How these instance variables behave depends entirely on the implementation of the MyDescriptor class—and this is what makes descriptors so versatile and powerful. In fact, Python itself uses descriptors to implement properties and static methods.
Now we'll look at three examples that use descriptors for three completely different purposes so that you can start to see what can be achieved with descriptors. The first two examples show read-only attributes; the third example shows editable attributes. None of the examples covers deletable attributes (using __delete__()), since use cases are rather rare.
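To make the idea concrete before the examples, here is one minimal way a read-only MyDescriptor might be implemented. This is an illustrative sketch, not the article's own code; the underscore-prefixed storage convention and the __init__ bypass are assumptions:

```python
class MyDescriptor:
    """A minimal read-only descriptor: __get__ serves the value,
    __set__ refuses all assignment after construction."""

    def __init__(self, name):
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self  # accessed on the class itself, not an instance
        return instance.__dict__.get("_" + self.name)

    def __set__(self, instance, value):
        raise AttributeError(f"{self.name} is read-only")


class MyClass:
    a = MyDescriptor("a")
    b = MyDescriptor("b")

    def __init__(self, a, b):
        # Write to the instance dict directly, bypassing __set__,
        # so the values can be stored exactly once at construction.
        self.__dict__["_a"] = a
        self.__dict__["_b"] = b


obj = MyClass(1, 2)
print(obj.a, obj.b)  # 1 2; obj.a = 5 would raise AttributeError
```

Because MyDescriptor defines both __get__() and __set__(), it is a data descriptor and takes precedence over the instance dictionary, which is exactly what makes the read-only guarantee hold.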
The morning of 30 August 2012 saw an Atlas 5 rocket launch of the twin Radiation Belt Storm Probes, the second spacecraft mission in NASA’s Living with a Star program. The probes settled into an elliptic orbit that cut through Earth’s radiation belts, home to highly variable populations of energetic particles dangerous to astronauts’ health and spacecraft operation. Renamed the Van Allen Probes soon after launch, the spacecraft are equipped with instruments designed to determine how these high-energy particles form, respond to solar variations, and evolve in space environments.
During their prime mission, the Van Allen Probes verified and quantified previously suggested energization processes, discovered new energization mechanisms, revealed the critical importance of dynamic plasma injections into the innermost magnetosphere, and used uniquely capable instruments to unveil inner radiation belt features that were all but invisible to previous sensors.
Now, through an extended mission that began 1 November 2015, the Van Allen Probes will advance understanding of the dynamics of near-Earth particle radiation. The overarching objective of this extended mission is to quantify the mechanisms governing Earth’s radiation belt and ring current environment as the solar cycle transitions from solar maximum through the declining phase.
The Van Allen Probes mission extends beyond the practical considerations of the hazards of Earth’s space environment. Twentieth century observations of space and astrophysical systems throughout the solar system and out into the observable universe show the universality of processes that generate intense particle radiation within magnetized environments such as Earth’s. Earth’s radiation belts are a unique natural laboratory for developing our understanding of the particle energization processes that operate across the universe.
Effects of the Solar Cycle Decline
The sunspot number reached a peak in April 2014. From historical measurements, we can expect that radiation belt activity will keep intensifying with the decline of the solar cycle: The biggest radiation belt enhancements during geomagnetic storms of two previous solar cycles occurred in their declining phase. As the solar cycle wanes, high-speed solar wind streams become more prominent compared to the solar coronal mass ejections that tend to prevail during solar maximum. Not surprisingly, the two biggest geomagnetic storms of this decade occurred last year, on 17 March and 21 June 2015.
The local time positions of the apogees of the Van Allen Probes’ orbits drift westward and complete a full circle around Earth over a period of about 2 years (Figure 1). By the end of the extended mission (roughly June 2019), the Van Allen Probes will be the first inner magnetospheric mission to circle Earth four times, enabling us to quantify how the relative role of various acceleration and loss mechanisms changes with the decline of the solar cycle.
Understanding Local Particle Energization
Particle acceleration mechanisms have been a key focus of the Van Allen Probes mission. The probes have provided the first definitive evidence that, at times, local particle acceleration within the heart of the radiation belts dominates over other processes that invoke transport and adiabatic compression of particle population from distant regions. The local acceleration is attributed to quasilinear particle interactions with electromagnetic waves called whistler waves. Whistler waves transfer energy from copious low-energy particles to sparse high-energy particles.
At the same time, the probes have also discovered highly unexpected nonlinear wave structures in the heart of the radiation belt. Such structures can rapidly energize very low energy (~10 electron volts) electrons up to intermediate energies (~100 kiloelectron volts (keV)), thereby providing a seed population for subsequent acceleration to radiation belt megaelectron volt (MeV) energies by the whistler waves. The probes have also observed whistler waves with unusually large amplitudes that are likely to more rapidly accelerate keV particles to MeV energies with nonlinear processes. A key theme of the probes’ extended mission aims to sort out the relative importance of quasilinear and nonlinear interactions for the buildup of radiation belt intensities.
To determine the relative importance of nonlinear interactions, we need to measure the evolution of the wave fields and particle distributions along field lines. In the extended mission, Van Allen Probes will provide two unique opportunities for such measurements. First, by adjusting the orbital phase of one spacecraft slightly with respect to the other, we can roughly align them along the same magnetic field line and thus sample field-aligned evolution of particles and wave fields.
Coordinating with Japan’s Exploration of Energization and Radiation in Geospace (ERG) spacecraft, planned for launch in summer 2016, will afford us the second opportunity to sample wave interactions simultaneously at different magnetic latitudes. ERG, by design, will sample at higher magnetic latitudes than the probes. Using three-point measurements from ERG and the probes will provide a more global view of wave-particle interactions at different magnetic latitudes, important for quantifying nonlinear effects.
Investigating Particle Loss
Defining particle loss mechanisms is critical to understanding dynamic variability of the radiation belt intensities. The Van Allen Probes and the associated Balloon Array for Radiation-Belt Relativistic Electron Losses (BARREL) have conducted joint experiments for quantifying particle precipitation, which is the scattering of particles from radiation belts into the atmosphere. The BARREL mission launched multiple high-altitude balloons to measure precipitation of relativistic electrons into the atmosphere along field lines that map to the radiation belts. Within the belts, the Van Allen Probes measure the plasma waves that drive these losses.
Exceedingly close correlations have been observed between the so-called whistler-mode “hiss” waves and electron precipitation modulations, suggesting that losses capable of depleting radiation belt intensities can happen globally on time scales as short as 1 to 20 minutes.
Another process that can rapidly deplete radiation belt intensities during geomagnetic storms is the occurrence of magnetosphere distortions that can cause the particles to stream out of the magnetosphere into the interplanetary environment.
It is a goal of the Van Allen Probes extended mission to understand the relative importance of precipitation and interplanetary particle losses. NASA’s Magnetospheric Multiscale (MMS) mission, launched in March 2015, provides an ideal opportunity to observe directly these escaping electrons at the magnetosphere boundary (the magnetopause) while the Van Allen Probes measure inner magnetospheric losses and the processes that drive these losses. During major loss events, the MMS spacecraft will skim the dayside magnetopause region for extended intervals. With its unusually sensitive energetic electron sensors, MMS will directly measure escaping radiation belt electrons.
Ring Current Generation
The buildup of the intermediate energetic ion population (reaching keV) during geomagnetic storms creates a source of hot plasma pressure in the inner magnetosphere that drives the so-called global “ring current” system that encircles Earth. This ring current controls the magnetic field configuration, which in turn governs the motion of radiation belt particles. Energetic ions also provide the energy source for an array of different wave modes that play a significant role in radiation belt particle acceleration and loss.
A surprising discovery of the Van Allen Probes prime mission was that a substantial fraction of hot plasma pressure is produced by dynamic small-scale injections that rapidly (in a matter of minutes) transport hot particles radially into the inner magnetosphere. Such injections were known to be common within the magnetotail but were previously thought to be infrequent in the inner magnetosphere.
The structure and occurrence rate of the injections remain unknown, and the amount of hot plasma transported remains poorly quantified. The extended mission will quantify the properties of small-scale injections in the inner magnetosphere and explore their role in the buildup of hot plasma pressure during storms. This investigation will be greatly facilitated by the recent adjustment of the spacecraft’s orbits, which doubled the cadence of simultaneous two-point, radial-aligned measurements, necessary to quantify the properties of dynamic injections.
The Mission Continues
Over the past 3 years, the Van Allen Probes mission has radically changed our understanding of Earth’s inner magnetosphere and radiation belts. As of October 2015, the Van Allen Probes’ bibliography contains more than 210 publications, including a number of articles published in high-profile journals such as Nature and Science.
With all instruments returning quality data, both spacecraft healthy, and the remaining propellant sufficient to support spacecraft operations well into 2019, we expect many more quality publications and science discoveries from the extended mission. Stay tuned!
A. Y. Ukhorskiy and B. H. Mauk, Johns Hopkins University (JHU) Applied Physics Laboratory (APL), Laurel, Md.; email: [email protected]; D. G. Sibeck, NASA Goddard Space Flight Center (GSFC), Greenbelt, Md.; and R. L. Kessel, JHU APL and NASA GSFC
Correction, 7 April 2016: An earlier version of this article included embedded videos of chorus waves that are not exactly the waves mentioned in the text. The article has been updated to remove these videos and instead hyperlink to a collection of audio clips from the Waves instruments on the twin Van Allen Probes.
Citation: Ukhorskiy, A. Y., B. H. Mauk, D. G. Sibeck, and R. L. Kessel (2016), Radiation belt processes in a declining solar cycle, Eos, 97, doi:10.1029/2016EO048705. Published on 23 March 2016.
Text © 2016. The authors. CC BY-NC-ND 3.0
This course evaluates the medieval history of Toledo from the era of the Visigoth Kingdom (6th-8th centuries) through its Islamic period (8th to 11th centuries) and into its reintegration into Christian Spain (after 1085 C.E.). In particular, we take note of the cultural and religious transformations that characterized the city, with a special effort to understand how many peoples and religions came to settle and live amongst one another. We will virtually tour the Islamic and Christian structures of the Museo de Santa Cruz, Iglesia de San Román, Sinagoga del Tránsito, Mezquita de Bab al-Mardum, Archivo Municipal de Toledo, and the Archivo Histórico de la Nobleza. We examine the Visigoths’ transition from Christian Arianism to Catholicism and the harsh treatment of the Jewish population. We explore Islamic governance and the development of the medieval city of three faiths, with a special interest in its cultural achievements. We will study the efforts of King Alfonso “The Wise” (1252-1284) to characterize himself as the “king of three religions” via his legal codices, the creation of the Cantigas de Santa María, and his intellectual endeavor known as the Toledo School of Translators. We evaluate the robust Jewish and converso noble families of the city and appreciate their intellectual, religious, and economic contributions to Castilian life. We will bear witness to the rise of anti-Jewish blood purity statutes, the creation of the Inquisition, and the expulsion of the Jews. We also briefly introduce and study Spanish manuscripts from the municipal and cathedral archives to make new scholarly breakthroughs relating to Jewish, Christian, and Muslim interrelations. No knowledge of Spanish is needed to participate in the course or in our transcription efforts.
John Ruskin Biography, Life, Interesting Facts
John Ruskin was born on February 8th, 1819. He was a famous English art critic who was highly respected during the Victorian era. He was also a talented draughtsman, watercolourist, philanthropist, and famed social thinker. Some of the subjects that he wrote on include literature, botany, architecture, political economy, and ornithology. His most remarkable writings include The Stones of Venice and The Seven Lamps of Architecture. In 1869, Ruskin was appointed as Oxford’s first professor of fine art. He, however, resigned after a decade.
John Ruskin was born on February 8th, in 1819. His birthplace was in London, England. He was the only child of John James Ruskin and Margaret Cox. Ruskin’s father was also interested in art and collected art as part of his business. He, therefore, had a significant influence on Ruskin’s art career, and was the one who introduced him to the works of Romanticism. Cox, his mother, had a huge impact on the religious upbringing of John Ruskin. She taught him how to read the Bible.
Ruskin obtained his early education from home, where private teachers tutored him. In 1834, he went to school in Peckham, enrolling in classes that were run by Thomas Dale, a famous evangelist. Later, in 1836, Ruskin was among Dale’s students at King's College in London. During this period, Dale was regarded as the first literature professor at the institution.
During the mid-1830s, John Ruskin had begun publishing short pieces, in either verse or prose form, in several magazines. Shortly after, he published what would later be identified as the debut volume of his most remarkable work, Modern Painters (1843). In this publication, he argued that modern landscape painters were superior to the Old Masters of the post-Renaissance period.
Four more volumes of the Modern Painters were released over the subsequent years. His second volume featured symbolism in the art that was expressed via nature. His interest in architecture was also demonstrated through his 1849 publication titled The Seven Lamps of Architecture.
In this work, John Ruskin documented the seven fundamental moral categories in architecture: sacrifice, power, truth, life, obedience, beauty, and memory. According to Ruskin, these moral categories were inseparable from architecture. The King of the Golden River (1851) was the only fairy tale he wrote and published throughout his writing career.
After this, John Ruskin began working on The Stones of Venice in 1851. This work lasted for two years, and he completed its three volumes in 1853. The first volume was The Foundations followed by The Sea–Stories and The Fall as the third volume.
As a teacher, Ruskin's fame grew, especially during the 1850s. He mainly taught his students architecture and art-related subjects. In 1857, while in Manchester, he lectured at the Art Treasures Exhibition. His lectures were collected and published as The Political Economy of Art.
John Ruskin first met Effie Gray when she was only twelve years old; it was for her that he wrote The King of the Golden River in 1841. The two became engaged in 1847 and wed the following year, in 1848. Unfortunately, their marriage was never consummated, and the two went their separate ways.
John Ruskin later fell deeply in love with Rose La Touche, a ten-year-old girl. When La Touche turned 18, Ruskin proposed to her but she requested him to be patient until she was aged 21. Ultimately, she rejected him.
John Ruskin passed away on January 20th, in 1900. He had been suffering from influenza, which led to his death at the age of 80.
- Text is a text string containing characters to extract,
- Number_chars represents the number of characters to be extracted.
- RIGHTB() returns the rightmost characters from a text value. RIGHTB is intended for use with languages that use the double-byte character set (DBCS).
Syntax: RIGHTB(Text, Number_chars). For example, RIGHTB("String123",3) displays 123 as a result.
- RIGHTB counts 2 bytes per character when system is set to language supporting DBCS, else counts 1 byte per character.
- RIGHTB works with the languages that use 'Double Byte Character Set' (DBCS).
- Text can be any string containing characters, numbers, symbols, blank spaces etc.
- If argument Text is directly entered in the command, it should be enclosed in double quotes (e.g. "Name").
- Argument Number_chars indicates the number of byte characters to be displayed from a text string. Number_chars should be >0.
- If Number_chars > 'length of text', Calci displays the entire text.
- If Number_chars is omitted, Calci assumes it to be 1.
Consider the following examples as input to RIGHTB() function.
=RIGHTB(A1,4) : Displays the last 4 characters of the string in cell A1. Displays #!#> as the output.
=RIGHTB(A2,6) : Displays the last 6 characters of the string in cell A2. Displays SMITH as the output. (The space character is also counted.)
=RIGHTB(A3) : Displays the last character of the string in cell A3. Displays 3 as the output.
=RIGHTB("Good Morning",20) : Displays the full text string Good Morning as the output.
Need to give examples with characters/language supporting DBCS.
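For experimenting with the byte-counting behavior outside Calci, here is a rough Python emulation. The cp932 (Shift-JIS) encoding choice and the handling of characters that straddle the byte boundary are assumptions; Calci's exact semantics may differ:

```python
def rightb(text, num_bytes=1, encoding="cp932"):
    """Return the rightmost characters of `text` whose combined byte
    length, in a DBCS encoding such as Shift-JIS (cp932), does not
    exceed num_bytes. ASCII characters count as 1 byte; most Japanese
    characters count as 2."""
    if num_bytes <= 0:
        raise ValueError("num_bytes should be > 0")
    picked = []
    used = 0
    for ch in reversed(text):
        size = len(ch.encode(encoding, errors="replace"))
        if used + size > num_bytes:
            break  # next character would overflow the byte budget
        picked.append(ch)
        used += size
    return "".join(reversed(picked))


print(rightb("String123", 3))      # 123
print(rightb("Good Morning", 20))  # Good Morning (12 bytes < 20)
print(rightb("abc\u3042", 2))      # the hiragana あ alone uses both bytes
```

Note how the same Number_chars budget yields one DBCS character or two ASCII characters, which is the whole point of RIGHTB versus RIGHT.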
A study from North Carolina State University has revealed for the first time which insects pollinate Venus flytraps. The researchers are also reporting that these carnivorous plants do not eat the insects that pollinate them.
“Everybody’s heard of Venus flytraps, but nobody knew what pollinated them – so we decided to find out,” said study co-author Clyde Sorenson.
Venus flytraps, or Dionaea muscipula, are native to a small region within a 100-mile radius of Wilmington, North Carolina.
“These findings answer basic questions about the ecology of Venus flytraps, which is important for understanding how to preserve a plant that is native to such a small, threatened ecosystem,” said lead author Elsa Youngsteadt. “It also illustrates the fascinating suite of traits that help this plant interact with insects as both pollinators and prey.”
Over the five weeks of the plants’ flowering season, researchers captured insects found on the flowers of Venus flytraps at several sites. The experts identified each insect and checked how much pollen, if any, it was carrying from the Venus flytraps.
Only a few of the insects were both frequently spotted on the flowers and carried a lot of pollen, including a checkered beetle, a green sweat bee, and the notch-tipped flower longhorn beetle.
Next, the research team identified prey from over 200 Venus flytraps. The experts discovered that the three pollinator species were never found in the traps, despite the fact that they were among the most common visitors.
“One potential reason for this is the architecture of the plants themselves,” said Youngsteadt. “Venus flytrap flowers are elevated on stems that stand fairly high above the snap traps of the plant, and we found that 87 percent of the flower-visiting individuals we captured – including all three of the most important species – could fly. But only 20 percent of the prey could fly. The pollinator species may simply be staying above the danger zone as they go from flower to flower, making them less likely to be eaten.”
Sorenson said that another possible explanation is that the traps, which are different colors than the flowers, may simply attract different species.
“We don’t yet know if they release different scents or other chemical signals that may also differentiate which portions of the plant are attractive to pollinators versus prey. That’s one of the questions we plan to address moving forward.”
The study is published in the journal American Naturalist.
Image Credit: Elsa Youngsteadt |
Without access to a time machine, we can't head back to witness how the universe began. However, that doesn't mean scientists are clueless. Physicists don't need a time machine - they have math. In fact, with an analog radio set, you too can hear the echos of our universe's beginnings, the expansion theory that we call the Big Bang.
Despite the name, the Big Bang was not an explosion. Instead, it was the rapid change of a tiny point of matter into a vast and expanding universe. Before the Big Bang, there was no universe for an explosion to happen in. The Big Bang literally formed space as it happened. The obvious question is what is the universe expanding into? More on that later. The original term for the theory was a primeval or singularity origin and was proposed by Georges Lemaître in 1931. The main opposing idea was called the Steady State Model, supporters of which believed the universe had no beginning but stretched back in time eternally. Physicist Fred Hoyle supported the steady-state model. On March 28th, 1949, Hoyle gave an interview where he disparagingly referred to the singularity origin as "this big bang idea," later calling it irrational. Nevertheless, the name stuck.
The final Big Bang model was built up by different scientists over decades; each time physicists worked on the theory, they expanded our understanding. After Georges Lemaître published his ideas in 1931, the mathematical models were improved by Roger Penrose, Stephen Hawking, and George F. R. Ellis in 1968 and 1970. From the 1970s through the 1990s, the features of the Big Bang model were characterized. Then, in 1981, Alan Guth made a breakthrough: he realized there was a time of rapid expansion in the early universe, which he called inflation. In the 1990s, advances in telescope technology allowed physicists to accurately measure the cosmos and find the final pieces of evidence for the Big Bang model, along with the unexpected discovery that the expansion of the universe is accelerating.
There are four pieces of evidence, sometimes called "pillars," that support the Big Bang model.

Expansion of the universe. By using powerful telescopes, scientists can see that everything is moving away from us. This does not mean we are at the center of the universe - it means that space itself is expanding. Therefore, we know it was once smaller, and this supports the Big Bang model.

Cosmic microwaves. Microwaves are a type of radiation. Several studies have shown that cosmic microwaves exist in a way that supports the theory that the universe started as a small point of matter and inflated in size. The picture above is an image of the early universe and the cosmic microwaves.
Nucleosynthesis of elements. When the cosmos was new, it was mainly hydrogen, helium, and other light elements. Heavier elements later formed inside stars. Equations based on the Big Bang model predict the amounts of light elements so accurately that scientists see this as proof the theory is correct.

Galaxy formation. Galaxies are spirals of stars. Our solar system is at the edge of the Milky Way; the dense band you can see on clear nights is the center of our galaxy. By using powerful telescopes, scientists can see so far that they are looking back in time, because light takes time to travel. We have photographic evidence that early galaxies looked different, meaning the steady-state model of the universe is wrong.
These four pillars of evidence are the most important proof that our universe started as a small and extremely dense point that rapidly expanded.
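The first pillar can be made concrete with a short numerical sketch. This illustration is added here and is not part of the original article: Hubble's law, v = H0 × d, with an assumed round value of H0 ≈ 70 km/s per megaparsec, shows why a recession velocity that grows with distance is the signature of expanding space rather than of motion away from a privileged center.

```python
# Sketch of Hubble's law, v = H0 * d (H0 ~ 70 km/s/Mpc is an assumed round value).
H0 = 70.0  # km/s per megaparsec

def recession_velocity_km_s(distance_mpc: float) -> float:
    """Recession velocity predicted by Hubble's law for a given distance."""
    return H0 * distance_mpc

# A galaxy twice as far recedes twice as fast - and this holds from
# every vantage point in an expanding space, not just from Earth.
for d in (10, 100, 1000):
    print(f"{d:5d} Mpc -> {recession_velocity_km_s(d):9.0f} km/s")
```

Running the sketch shows the linear scaling that Edwin Hubble first measured in galaxy redshifts.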
13.7 billion years ago the universe began. The first moments have scientific names to describe what happened, and each stage was very brief.
During the next few stages, the universe continued to expand and cool down. From 1 second to 20 minutes, the temperature of the universe dropped to 1 billion degrees Celsius. The universe cooled because it was growing in size, so the particles were getting further away from one another. At this temperature, protons and neutrons combined, forming the first nuclei of the light elements. These primary elements are still the most common in the universe today: hydrogen, helium, and lithium.
After 300,000 years the universe was 3,000 degrees, far hotter than today's average of about minus 270 degrees. The next era is the Dark Age, which lasted for 150 million years. There were no stars, but there were photons, and these photons make up the cosmic radiation scientists can detect today. If you tune a radio to a frequency without a station, about 1% of the static you hear is this radiation.
Until 300 million years after the cosmos formed there was no starlight, because there were no stars. Stars formed from 300 million to 500 million years after the Big Bang, and continue to form today. Before stars, the cosmos was a mixture of hydrogen and other light gases. The atoms were not evenly spread out. Slowly, over time, gravity pulled atoms closer to one another. Once there was a high enough concentration of hydrogen, the heat of the closely packed gas started nuclear fusion, which made stars. Gravity then pulled the stars toward one another, forming simple galaxies with an oval shape.
The first stars were 100 times larger than our sun and did not last long. When they exploded as supernovae, heavier elements, including carbon, were created. This made denser clouds of matter. The matter was affected by gravity, causing it to move and spin.
By eight and a half billion years after the Big Bang, there was enough matter and enough heavier elements to make planets. Just as in the formation of stars, gravity pulled atoms of heavier elements together. But rather than igniting nuclear fusion and making light by burning hydrogen, the heavier elements clumped together and made rocky planets and gas giant planets. Planets formed around stars, and the force of gravity causes the planets to spin and orbit their star.
Although the expansion of the universe was much faster soon after the Big Bang, it continues to expand today. Scientists once thought the universe would collapse in on itself, possibly starting a new Big Bang in a repeating cycle of universe creation. But evidence now indicates this will not happen. Instead, the universe will continue to expand forever. In the far distant future, the universe will have expanded so much that atoms will be too far apart for gravity to affect them. When this occurs, no new stars or planets will be able to form, and the universe will go cold and dark.
Although scientists have a detailed understanding of the first moments and evolution of our universe, there are still areas of uncertainty. One of the biggest mysteries is what formed the singularity that led to the Big Bang. Another is what existed before the formation of the universe. These questions may be unanswerable, but there are many theories proposed by theoretical physicists. A common theory is that there are multiple universes, each with slightly different laws of physics. Perhaps with further research and study, we will discover the answers to these mysteries.
|
‘Living walls’ can play a significant role in tackling toxic air hot spots in cities, says the report. It highlights the contribution of ‘green building envelopes’, such as moss and vegetated walls, vertical farming and roof gardens.
Worldwide, 3.7 million premature deaths in 2012 were attributed to exposure to poor air quality. Approximately 200,000 of these were in Europe and 900,000 in south-east Asia.
Green envelopes, often dismissed as “architectural window dressing”, can reduce localised air pollution by up to 20% in some locations, rapidly reducing toxic air at street level, says Arup.
“Tackling rising air pollution is a priority to help improve people’s health,” said Arup global landscape architecture leader Tom Armour. “As our cities continue to become built up, ‘grey’ structures, such as walls and roofs, are a source of untapped potential for adapting into green spaces. When well-designed, green envelopes can have a positive impact on tackling air pollution, but can also deliver a wide range of social, economic and environmental benefits to make cities more attractive and healthier places to be.”
The report, called Cities Alive: Green Building Envelope, reviews green infrastructure schemes across five global cities - London, Los Angeles, Berlin, Melbourne and Hong Kong - to quantify the benefits of ‘green building envelopes’. It is the fourth report in the ‘Cities Alive’ series, which looks at ways to help shape a better world.
Advanced computer software was used to provide a visual representation of the flow of gases and to help determine the effectiveness of green building envelopes in reducing pollutant concentrations. The report highlights plant species, such as pine and birch, that are particularly effective because of their ability to capture large quantities of particulate matter, including during winter when pollution concentrations are highest.
The study also highlights that green envelopes can reduce sound levels from emergent and traffic noise sources by up to 10 decibels in certain situations. To the human ear, this could make traffic sound half as loud. Increasing the quantity of vegetation in a city can also reduce temperatures. According to a US study, urban areas with a population exceeding 1 million can be up to 12°C warmer in the evening than surrounding areas, and in particularly dense centres, green infrastructure could reduce air temperature by up to 10°C. Green envelopes can also reduce peak energy consumption in traditional buildings by up to 8%.
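The "half as loud" claim follows from a common psychoacoustic rule of thumb, sketched below. The 2× change per 10 dB is an approximation drawn from Stevens' loudness law, not a figure from the Arup report itself.

```python
# Rule of thumb: perceived loudness roughly doubles for every 10 dB increase,
# so a level change of delta_db gives a loudness ratio of 2 ** (delta_db / 10).
def loudness_ratio(delta_db: float) -> float:
    """Approximate perceived-loudness ratio for a sound level change in dB."""
    return 2 ** (delta_db / 10)

# A 10 dB reduction from a green envelope halves perceived traffic noise.
print(loudness_ratio(-10))  # 0.5
```

This is only an approximation of human hearing; the physical sound energy falls by a factor of 10 per 10 dB, but perception tracks the gentler 2× curve.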
As cities become more densely populated, increasing pressure is put on existing parks and open spaces to make way for further development. The report shows how green buildings can play a significant part in reducing urban stress and keeping people connected with nature. Vertical and urban farming are also highlighted as great ways to create community spaces. |
The dynasty’s name comes from the Arabic designation for a slave, mamluk. The ruling class was made up of slave soldiers who had originally been captured among the Turkic peoples in the steppes of southern Russia or among Christians in northern Caucasus. The Mamluk dynasty emerged when some of the Ayyubids’ slave troops revolted in 1250 and took over the Ayyubid lands along the Mediterranean. From then on, the Mamluks’ might was based on a steady stream of slaves, who after being converted to Islam, educated in Arabic, and taught the art of war, supplied the military caste with new commanders. Their descendants and other free men were not, however, allowed to reach society’s highest posts. In contrast to the practice of other Islamic dynasties, succession was usually decided by a coup d’état, often by one of the former sultan’s commanders, and rarely by family ties.
The Mamluks won renown throughout the Islamic world as defenders of the true faith because they repeatedly stopped the advance of the seemingly invincible Mongols. The Frankish Crusaders and Christian principalities in the eastern Mediterranean also had to yield at last to the Mamluks, who were famous for their skill in the use of the lance, the sword, and the bow. As a result, the Mamluk Empire soon stretched all the way from southeastern Anatolia to Sudan and Libya, with Cairo as its center. The holy cities in Arabia were also under Mamluk hegemony. Despite several internal power struggles, Syria and Egypt experienced a period of economic growth under the Mamluks. This was especially due to the empire’s strategic location as a center of trade linking India, southern Europe, Caucasus, and southern Russia. Trade with India, in turn, was taken over by Portuguese ships in the course of the 15th century. At the same time, there was increasing military pressure from the Ottomans in the north. While the Mamluks clung to their traditional weapons, the Ottomans’ use of modern artillery and firearms finally decided the outcome, and the Mamluks were defeated in 1517.
Many sultans and emirs were important builders and patrons who left behind magnificent religious complexes. Their names in cursive script are found in a new monumental way in Islamic art as a decoration, not only on architecture, but also on inlaid metalwork and enameled glass, the products of techniques that flourished in this period. Another distinctive feature is the emergence of heraldic symbols. They are seen on objects made both for the local upper class and for European noble families, since the skills of Mamluk craftsmen were in demand far and wide. |
The topic of anthropogenic climate change always seems to be a "hot" issue, and an article in an upcoming issue of Science will look at the relation of two components of climate change. The authors examine both the timing of CO2 increases relative to temperature increases, and verify a prediction made by climate models.
Over the course of glacial and interglacial cycles, CO2 and temperature have been highly correlated, but many increases in CO2 appear to occur after increases in temperature, rather than vice versa. Most climate scientists have accepted that the climate system is complicated and other forces may have initiated warming, while CO2 merely acted as a positive feedback. Others, however, have used this evidence to suggest that CO2 may not cause increased temperatures.
The current study looks at individual spikes in the Greenland temperature record and compares those to global CO2 concentrations. They find that, from 30 to 90 thousand years ago, major warming events in Greenland (known as Dansgaard-Oeschger or D-O events) were preceded by increases in CO2. For the major D-O events, CO2 typically began to increase two to four thousand years earlier.
This alone is an important result, but the authors take it a step further by looking at ocean circulation patterns during these D-O events and compare that to climate model predictions. Climate models indicate that specific oceanic events—maximum North Atlantic Deep Water (NADW) shoaling and minimal Southern Ocean stratification—should correspond to maximum rates of CO2 increase.
As with other studies, this one uses δ13C data off the coast of Iberia as a proxy for NADW circulation patterns, and δ15N off the coast of Chile as a proxy for stratification of the Southern Ocean. The authors show that these data have the pattern predicted by climate models, which serves as a further verification of the efficacy of climate models.
Science, 2008. DOI: 10.1126/science.1160832 |
The middle ear is the small space behind the eardrum; this space is usually filled with air. Otitis media is an infection of the middle ear that causes inflammation and a build-up of fluid. It is often extremely painful and may be associated with high fever.
Symptoms of otitis media:
- Fever may be present
- Pulling, tugging or rubbing ear
- Slight hearing loss
In most cases the symptoms of a middle ear infection develop quickly and resolve in a few days. In some cases, pus may run out of the ear; this is the fluid that had built up behind the eardrum, which caused a small hole in the eardrum. The hole tends to heal up by itself.
Most cases of earache/otitis media in young children (under 5 years of age) are caused by viral infections; your child may also have a runny nose and cough. The Eustachian tube is a small tube that links the middle ear to the back of the throat. Its main job is to regulate air pressure in the ear. Its other function is to drain any fluid or mucus that builds up. The common 'cold' can cause the Eustachian tube to become blocked, causing a build-up of fluid or mucus and resulting in earache.
Most children with otitis media (earache) do not require treatment with antibiotics. Antibiotics rarely speed up recovery and often cause side effects such as rash and diarrhoea. They will also promote the development of antibiotic-resistant bacteria in your child.
Antibiotics are usually only considered if your child:
- Is under 6 months of age and has otitis media
- Is between 6 months and 2 years of age with infection in both ears, or with associated symptoms such as altered sleep, fever and overwhelming misery
- Has pus draining from their ear
- Has a serious health condition that makes them more vulnerable to serious infection
In addition, if your child has any features of severe infection (amber or red features above), they will need to be urgently assessed by a healthcare professional
You can help relieve symptoms by:
- Giving your child paracetamol or ibuprofen to help relieve pain
- Encouraging your child to drink plenty of fluids
It is not possible to prevent ear infections; however, you can do things that may reduce your child's chances of developing the condition:
- Ensure your child is up-to-date with their immunisations
- Avoid exposing your child to smoky environments (passive smoking)
This guidance is written by healthcare professionals from across Hampshire, Dorset and the Isle of Wight. |
The Chicana Movement: Liberation from Oppressive Structures. The Chicano student movement began in March of 1968, but it wasn’t until the East Los Angeles Chicano high school students walked out of their decrepit high schools and began to push for changes that the movement really differentiated itself from previous Mexican American attempts at achieving equality. These changes were radical to the dominant White-Anglo social structure that controlled many aspects of their lives. The ensuing police repression and brutality only further reinforced the new radical trend in student ideology. A year after the walkout, in March 1969, the Crusade for Justice civil rights organization held the National Chicano Youth Liberation Conference at its headquarters
The film Prejudice and Pride revealed the struggle of Mexican Americans in the 1960s-1970s. The film showed Mexican Americans' frustration with discrimination and poverty. In this film I learned about the movement that led to the Chicano identity. This movement sparked when the farm workers in the fields of California marched on Sacramento for equal pay and humane working conditions. This march was led by César Chavez and Dolores Huerta.
Mini-Research Paper: Outline and Thesis
I. Introduction
a. Thesis statement: Jose Angel Gutierrez has worked hard to make the Chicano/Hispanic community successful, and he has become a role model in politics because of his active pursuit of equality in education, his creation of organizations, and his active position on the immigration topic.
II. Walkouts in high school
a. Chicano students striking for equality of education
b. Implementation of Mexican-American studies classes
c. Recruitment of more Mexican-American teachers and counselors
d. Bilingual and bicultural education
III. Political action
a. Politically active since young age
b. Mexican American Youth Organization (MAYO)
The Chicano movement brought unity, nationalism, and cultural pride by addressing social and civil rights issues. However, the Chicano social identity that arose in the 1960s was not inclusive of Chicanas; moreover, it did not acknowledge and encompass the contribution of Central Americans and Asian Mexicans. The definition of Chicano social identity needs to be changed to be more inclusive and to accommodate all the configurations and diverse expressions of
societies in the world. These sub-cultures include Whites, African Americans, Asians, Irish, Latinos, and Europeans, among others. Chicano refers to the identity of people of Mexican-American descent in the United States. The term is also used to refer to Mexicans or Latinos in general. Chicanos are descendants of different peoples, such as Central American Indians, Spanish, Africans, Native Americans, and Europeans.
The Sainte-Chapelle is a royal medieval 13th-century Gothic chapel, located within the Palais de la Cité, on the Île de la Cité in the heart of Paris, France. It was built by King Louis IX for use as his royal chapel and to house his collection of relics. Most of the palace itself has since been demolished, leaving just the chapel.
An individual's wealth and their health are two closely connected factors in America. Usually, if an individual has more wealth, they are considered to be healthier. However, for Mexican Americans, this theory seems to break down. In the film Becoming American, researchers discovered that immigrant Latinos have the best health, even though they are one of the poorest, most socially marginalized populations. Latinos are considered to have health as good as some of the wealthiest communities, a phenomenon known as the Latino paradox.
1. What is the Latino paradox? Why does it exist?
a. The Latino paradox was identified by researchers in the 1960s. It is the notion that Latino immigrants of lower income and education have low rates of mental health issues compared to whites who have higher education and income.
In American history, civil rights movements have played a major role for many ethnic groups in the United States and have shaped American culture into what it is today. The impact of civil rights movements is significant and, to a degree, they accomplished the objectives that the groups of people set out to achieve. The Mexican-American Civil Rights Movement, more commonly known as the Chicano Movement or El Movimiento, was one of the many movements in the United States that set out to acquire equality for Mexican-Americans (Herrera). At first, the movement had a weak start, but eventually it gained energy around the 1960's (Herrera). Mexican-Americans, also known as Chicanos, began to
Choosing to be a Mexican over American. Today I feel more like a Mexican than anything else, even though I was born in the United States. I may have papers and be American, but hearing other ethnicities call my people immigrants and illegal makes me feel more like an immigrant myself. I feel this way because, although I am considered an American, I would much rather stand by my people and my culture. I would label myself as a Mexican-American, Latina, person of color, and a minority. I describe myself as a Mexican-American because I was born and raised in Chicago and am of Mexican descent.
Never have I taken my culture into consideration, but I would more than likely classify my culture as Latino/Hispanic. For starters, I was born in a lovely place called Chihuahua, Mexico. This place is the reason I consider myself a Latino. Why is this my culture you ask? My whole daily lifestyle revolves around this Hispanic heritage.
Not only has El Centro De Corazón made a positive impact on the Latino community, but they have also made a positive impact on my mother and me when we were battling challenges together during a period of our life. During that time my mother's illness had worsened, and she did not have medical insurance or the funds to continue going to her usual clinic. After searching, we were able to find this wholesome, non-profit clinic that helps individuals going through situations similar to hers. My mom was thankful to have found this organization because it was the only way she would be able to receive the medical attention (such as blood tests, exams, and check-ups) she needed at little cost. All the |
Friday, September 3, 2010
Water in Earth's Mantle Key to Survival of Oldest Continents
Earth today is one of the most active planets in the Solar System, and was probably even more so during the early stages of its life. Thanks to the plate tectonics that continue to shape our planet's surface, remnants of crust from Earth's formative years are rare, but not impossible to find. A paper published in Nature Sept. 2 examines how some ancient rocks have resisted being recycled into Earth's convecting interior.
Throughout the world there exist regions of ancient crust, referred to as cratons, which have resisted being recycled into the interior of our tectonically dynamic planet. These geologic anomalies appear to have withstood major deformation thanks to the presence of mantle roots. A mantle root is a portion of Earth's mantle that lies beneath the craton, extending like the root of a tooth into the rest of the underlying mantle.
Just like a tooth, the mantle root of a craton is compositionally different from the normal mantle into which it protrudes. It is also colder, causing it to be more rigid. These roots were formed in ancient melting events and are intrinsically more buoyant than the surrounding mantle. The melting removed much of the calcium, aluminum, and iron that would normally form dense minerals. Thus, these roots act as rafts bobbing on a vigorously convecting mantle, on which old fragments of continental crust may bask in comparative safety.
However, geophysical calculations have suggested that this buoyancy is not enough to stop destruction of the mantle roots. According to these calculations, the hotter temperatures that are widely thought to have existed in Earth's mantle about 2.5 to 3 billion years ago should have warmed and softened up the base of these roots sufficiently to allow them to be gradually eroded from below, leading to their eventual destruction as they were entrained, piece by piece, into the convecting mantle. A stronger viscosity contrast between the root and the underlying mantle is required to ensure preservation.
In the Sept. 2 issue of Nature, Anne Peslier, an ESCG-Jacobs Technology scientist working at NASA-Johnson Space Center and her colleagues David Bell from Arizona State University and Alan Woodland and Marina Lazarov from the University of Frankfurt, published measurements of the trace water content of rocks from the deepest part of a mantle root that offer an explanation for this mystery.
"It has long been suspected, but not proven, that cratonic mantle roots are dryer than convecting upper mantle," explains Bell, an associate research scientist in the School of Earth and Space Exploration and the department of chemistry and biochemistry in ASU's College of Liberal Arts and Sciences. "The presence of very small quantities of water is known to weaken rocks and minerals. During partial melting, such as that experienced by the mantle roots, water -- like calcium, aluminum and iron -- is also removed."
The researchers used samples found in diamond mines of Southern Africa, where the ancient crust of the Kaapvaal craton was pierced about 100 million years ago by gas-charged magmas called kimberlites. These magmas were generated at depths of about 125 miles (200 kilometers) beneath the mantle root and ascended rapidly (in a matter of hours) through the Earth via deep fractures, bringing with them pieces of the rocks traversed, including diamonds. After erupting explosively at the surface, the magmas solidified into the pipe-like bodies of kimberlite rock that were subsequently mined for their diamonds.
The mantle rocks analyzed by the team were transported from a range of depths down to 125 miles (200 km) below the surface, where they had resided since their formation around 3 billion years ago. The samples of rock, called peridotite, are composed mainly of the mineral olivine, with minor quantities of pyroxenes and garnet. Olivine is, because of its abundance, the mineral believed to control the rheological properties of peridotite.
What Peslier and colleagues found is that below a depth of about 112 miles (180 km), the water content of olivines begins to decline with depth, so that the olivine in peridotite samples from the very base of the cratonic mantle root contained hardly any water. That makes these olivines very hard to deform or break up, and may generate the strong viscosity contrast that geophysical models of craton root stability require.
Why the bottom of the mantle root has dry olivines is still a matter of speculation. One possibility, suggested by Woodland, is that reducing conditions thought to prevail at these depths would ensure that fluids would be rich in methane instead of water. Bell suggests that melts generated in the asthenosphere, such as those eventually giving rise to kimberlite eruptions, may scavenge any water present while passing through the base of the cratonic root and transport it into the overlying shallower mantle.
These results reiterate the belief shared by many scientists that knowing how much water is present deep in terrestrial planets and moons, like Earth, Mars or the Moon, is important to understanding their dynamics and evolutionary history. |
Lisa Feldman Barrett, in her book How Emotions Are Made, discusses the importance of emotional granularity, or the ability to finely separate and nuance our emotions. She describes how it is important to move beyond "I feel happy" or "I feel crummy." By granulating our emotions, anger turns out to be frustration, antagonism, irritation, hurt, low self-worth, shame, etc., and happy includes content, pleased, joyful, cheerful, or blissful. Barrett argues that by expanding and using a greater emotional vocabulary, we can more finely "granulate," or feel the nuance of, each emotion. This naming and noticing of emotion is the basis for managing our emotions. Barrett offers that finely granulating the nuance of each emotion makes you an emotion expert, or "emotion sommelier." This skill grants your brain more options to deal with emotion efficiently and, in turn, to better tailor behaviour to the situation.
For children, it can be helpful to create an “emotion cheat sheet” that they can refer to when big emotions get in the way. Children love emojis and can even draw their own. Try to have them learn new words and expand their emotional vocabulary. |
Strategy is expressed in terms of ends, ways and means. Ends, ways, and means that lead to the achievement of the desired end state within acceptable bounds of feasibility, suitability, acceptability, and risk are valid strategies for consideration by the decision maker.
Objectives (ends) explain “what” is to be accomplished. They flow from a consideration of the interests and factors in the strategic environment affecting the achievement of the desired end state. Objectives are bounded by policy guidance, higher strategy, the nature of the strategic environment, the capabilities and limitations of the instruments of power of the state, and resources made available. Objectives are selected to create strategic effect. Strategic objectives, if accomplished, create or contribute to creation of strategic effects that lead to the achievement of the desired end state at the level of strategy being analyzed and, ultimately, serve national interests. In strategy, objectives are expressed with explicit verbs (e.g., deter war, promote regional stability, destroy Iraqi armed forces). Explicit verbs force the strategist to consider and qualify what is to be accomplished and help establish the parameters for the use of power.
Strategic concepts (ways) answer the big question of “how” the objectives are to be accomplished by the employment of the instruments of power. They link resources to the objectives by addressing who does what, where, when, how, and why, with the answers to which explaining “how” an objective will be achieved. Since concepts convey action, they often employ verbs in their construction, but are actually descriptions of “how” the objective of a strategy is to be accomplished. Strategic concepts provide direction and boundaries for subordinate strategies and planning. A strategic concept must be explicit enough to provide planning guidance to those designated to implement and resource it, but not so detailed as to eliminate creativity and initiative at subordinate strategy and planning levels. Logically, concepts become more specific at lower levels.
Resources (means) in strategy formulation set the boundaries for the types and levels of support that will be made available for pursuing the concepts of the strategy. In strategy, resources can be tangible or intangible. Examples of the tangible include forces, people, equipment, money, and facilities. Intangible resources include things like will, courage, spirit, or intellect. Intangible resources are problematic for the strategist in that they are often immeasurable or volatile. Hence, intangible resources should always be suspect and closely examined to determine whether they are actually improperly expressed concepts or objectives. The rule of thumb to apply is that resources can usually be quantified, if only in general terms. The strategist expresses resources in terms that make clear to subordinate levels what is to be made available to support the concepts.
Validity and Risk.
Strategy has an inherent logic of suitability, feasibility, and acceptability. These would naturally be considered as the strategy is developed, but the strategy should be validated against them once it has been fully articulated. Thus, the strategist asks:
Suitability—Will the attainment of the objectives using the instruments of power in the manner stated accomplish the strategic effects desired?
Feasibility—Can the strategic concept be executed with the resources available?
Acceptability—Do the strategic effects sought justify the objectives pursued, the methods used to achieve them, and the costs in blood, treasure, and potential insecurity for the domestic and international communities? In this process, one considers intangibles such as national will, public opinion, world opinion, and actions/reactions of U.S. allies, adversaries, and other nations and actors.
The questions of suitability, feasibility, and acceptability as expressed above are really questions about the validity of the strategy, not risk. If the answer to any of the three questions is “no,” the strategy is not valid. But strategy is not a black and white world, and the strategist may find that the answer to one or more of these questions is somewhat ambiguous.
Risk is determined through assessment of the probable consequences of success and failure. It examines the strategy in its entire logic—ends, ways, and means—in the context of the strategic environment, and seeks to determine what strategic effects are created by the implementation of the strategy. It seeks to determine how the equilibrium is affected and whether the strategic environment is more or less favorable for the state as a result of the strategy. Risk is clarified by asking:
What assumptions were made in this strategy, and what is the effect if any of them is wrong?
What internal or external factors were considered in the development of the strategy? What change in regard to these factors would positively or adversely affect the success or effects of the strategy?
What flexibility or adaptability is inherent to the components of the strategy? How can the strategy be modified and at what cost?
How will other actors react to what has been attempted or achieved? How will they react to the way in which the strategy was pursued?
What is the balance between intended and unintended consequences?
How will chance or friction play in this strategy?
Nuclear waste is normally a major environmental headache, but it could soon be a source of clean energy. Scientists have developed a method of turning that waste into batteries using diamond. If you encapsulate short-range radioactive material in a human-made diamond, you can generate a small electrical charge even as you completely block harmful radiation. While the team used a nickel isotope for its tests, it ultimately expects to do this using the carbon isotope you find in graphite blocks from nuclear power plants.
The batteries wouldn’t generate much power, but their longevity would be dictated by the life of the radiation itself. Researchers estimate that a carbon-based battery would generate 50 percent of its power in 5,730 years. Most likely, the batteries would be used in high-altitude drones, pacemakers, spacecraft and anywhere else replacing the battery is either very cumbersome or impossible. You could see interstellar probes that keep running long after they lose solar power, for example.
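The 5,730-year figure quoted above is the half-life of carbon-14, so a carbon-based battery's output would follow ordinary exponential decay. A minimal sketch (the function name is mine; only the half-life comes from the article):

```python
def fraction_remaining(years: float, half_life: float = 5730.0) -> float:
    # Exponential radioactive decay: the fraction of atoms (and hence of
    # output power) left after `years` is 0.5 ** (years / half_life).
    return 0.5 ** (years / half_life)

print(fraction_remaining(5730))   # → 0.5: half the output after one half-life
print(fraction_remaining(11460))  # → 0.25 after two half-lives
```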
Any practical implementations are likely a long way off, and there are some conspicuous problems. Cost, for one. Diamond is expensive, so it might not be feasible to convert large amounts of nuclear waste into batteries. That’s assuming the technology works as well as intended, too. Still, it raises hope that the leftovers from nuclear reactors won’t just sit there posing a threat — they might actually do us some good.
By John Fingas for engadget.com | Photo Credit: Pixabay |
What is Cauliflower Ear?
Cauliflower ear is a complication that develops when the external portion of the ear suffers an injury or blood clot. Fluid collects under the perichondrium (the membrane covering the ear cartilage), causing the ear to become thick and deformed. If left untreated, it can harden into a firm, fibrous lump in the outer ear.
After the initial injury, the cartilage of the ear separates from the underlying tissue and its blood supply is interrupted. This can lead to changes in the skin, and the risk of infection increases. Over time, the tissue becomes fibrous.
The classic picture appears when this fibrous tissue thickens the outer ear until it resembles a cauliflower, which is how the condition got its name.
A hematoma is an accumulation or pocket of blood in a certain area. A hematoma forms soon after the injury; the fibrous tissue develops only later.
Cauliflower ear can often be prevented or limited by wearing headgear or other ear protection.
If left untreated, the injury can become painful and permanent deformity can occur. In some cases a bacterial infection develops, and in severe cases hearing loss can follow.
Cauliflower ear is commonly seen in:
– Kick boxers or martial artists
– Rugby players
Symptoms that can be seen:
– Redness or Inflammation
– Ringing in ears
– Facial headache
– Bleeding from outer ear
– Hearing loss
Treatment of Cauliflower Ear:
1.) Antibiotics – when necessary.
2.) Draining – when necessary
3.) Tension or Compression dressing – placed around ears by a medical provider to limit the separation of the cartilage |
Making Sense of ANOVA – Find Differences in Population Means
Three methods for dissolving a powder in water were compared on the time (in minutes) it takes until the powder dissolves fully. The results are summarised in Figure 1.
Suppose we suspect that the population means of Method 1, Method 2 and Method 3 are not all equal (i.e., at least one method differs from the others). How can we test this?
One way is to use multiple two-sample t-tests and compare Method 1 with Method 2, Method 1 with Method 3 and Method 2 with Method 3 (comparing all the pairs). But if each test is run at a significance level of 0.05, the probability of making at least one Type I error across the three tests rises well above 0.05.
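The inflation of the error rate can be computed directly: with three independent tests each run at level 0.05, the chance of at least one false positive is 1 - (1 - 0.05)^3, roughly 0.14, nearly triple the nominal level. A quick sketch (the three-test setup mirrors the pairwise comparisons above):

```python
def familywise_error(alpha: float, k: int) -> float:
    # Probability of at least one Type I error across k independent tests,
    # each run at significance level alpha: 1 - (1 - alpha)^k
    return 1 - (1 - alpha) ** k

print(familywise_error(0.05, 3))  # → 0.142625
```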
A better method is ANOVA (analysis of variance), a statistical technique for determining the existence of differences among several population means. The technique requires the analysis of different sources of variance, hence the name. But note: ANOVA is not a test to show that variances are different (that is a different test); it tests whether means are different.
To perform this ANOVA, the following steps must be taken:
1. Plot the Data
For any statistical application, it is essential to combine it with a graphical representation of the data. Several tools are available for this purpose. They include the popular stratified histogram, dotplot or boxplot.
The boxplot in Figure 2 shows that the dissolution time for Method 1 seems lowest and for Method 2 seems highest. However, there is a certain degree of overlap between the data sets. Therefore, based on this plot, it is risky to draw a conclusion that there is a significant (i.e. statistically proven) difference between any of these methods. A statistical test can help to calculate the risk for this decision.
2. Formulate the Hypothesis for ANOVA
In this case, the parameter of interest is an average, i.e. the null-hypothesis is
H0: μ1 = μ2 = μ3,
with all μ being the population means of the three methods to dissolve the powder.
This means, the alternative hypothesis is
HA: At least one μ is different to at least one other μ.
3. Decide on the Acceptable Risk
Since there is no reason for changing the commonly used acceptable risk of 5%, i.e. 0.05, we use this risk as our threshold for making our decision.
4. Select the Right Tool
If there is a need for comparing more than two means, the popular test for this situation is the ANOVA.
5. Test the Assumptions
Finally, the prerequisites for ANOVA to work properly are:
- All data sets must be normally distributed, and
- All variances must not be significantly different from each other.
Firstly, the normality check uses the Anderson-Darling test, for which the null hypothesis is "Data are normally distributed" and the alternative hypothesis is "Data are not normally distributed." Since all samples show a p-value above 0.05 (or 5 percent) for this test (Figure 3), we have no evidence against normality for any of the samples.
Secondly, as an alternative to performing a formal test for equal variances, it is reasonable to check whether the confidence intervals for sigma (95% CI Sigma) overlap. If there is a large overlap, the assumption of no significant difference between the variances is acceptable.
This means, both prerequisites for ANOVA are met.
6. Conduct the Test
Using ANOVA, the statistics software SigmaXL generates the output in Figure 4.
Since the p-value is 0.0223, i.e. less than 0.05 (or 5 percent), we reject H0 and accept HA.
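For readers without SigmaXL, the F-statistic behind such output can be computed from first principles. The sketch below uses made-up dissolution times (they are not the article's data, so the result will not reproduce the p-value of 0.0223):

```python
# One-way ANOVA computed from first principles (standard library only).
from statistics import mean

def one_way_anova_F(*groups):
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total number of observations
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    F = (ss_between / df_between) / (ss_within / df_within)
    return F, df_between, df_within

m1 = [3.2, 3.5, 3.1, 3.6]   # hypothetical dissolution times (minutes)
m2 = [4.1, 4.4, 4.0, 4.3]
m3 = [3.7, 3.9, 3.6, 4.0]
F, df1, df2 = one_way_anova_F(m1, m2, m3)
print(F, df1, df2)  # → F ≈ 17.6 with df = (2, 9) for this made-up data
```

The F-value is then compared against the F-distribution with (df1, df2) degrees of freedom to obtain the p-value, which is the step the statistics package performs for you.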
7. Make a Decision
The ANOVA result therefore means that there is at least one significant difference among the method means.
Additionally, the pairwise-comparison output shows which method is different from which other method. The p-value for the pairwise comparison between Method 1 and Method 2 is 0.0066, i.e. there is a significant difference between Method 1 and Method 2. The boxplot shows the direction of this difference.
Finally, the output shows that this X (Method) explains only 36% of the total variation in dissolution time. There might be other Xs explaining part of the remaining 64% of the variation.
Interested in the stats? Read here. |
Salinity is the saltiness or amount of salt dissolved in a body of water (see also soil salinity). It is usually measured in grams of dissolved salt per kilogram of water, a mass fraction that is technically dimensionless. Salinity is an important factor in determining many aspects of the chemistry of natural waters and of the biological processes within them, and is a thermodynamic state variable that, along with temperature and pressure, governs physical characteristics like the density and heat capacity of the water.
A contour line of constant salinity is called an isohaline, or sometimes isohale.
Salinity in rivers, lakes, and the ocean is conceptually simple, but technically challenging to define and measure precisely. Conceptually the salinity is the quantity of dissolved salt content of the water. Salts are compounds like sodium chloride, magnesium sulfate, potassium nitrate, and sodium bicarbonate which dissolve into ions. The concentration of dissolved chloride ions is sometimes referred to as chlorinity. Operationally, dissolved matter is defined as that which can pass through a very fine filter (historically a filter with a pore size of 0.45 μm, but nowadays usually 0.2 μm). Salinity can be expressed in the form of a mass fraction, i.e. the mass of the dissolved material in a unit mass of solution.
Seawater typically has a mass salinity of around 35 g/kg, although lower values are typical near coasts where rivers enter the ocean. Rivers and lakes can have a wide range of salinities, from less than 0.01 g/kg to a few g/kg, although there are many places where higher salinities are found. The Dead Sea has a salinity of more than 200 g/kg.
Whatever pore size is used in the definition, the resulting salinity value of a given sample of natural water will not vary by more than a few percent. Physical oceanographers working in the abyssal ocean, however, are often concerned with precision and intercomparability of measurements by different researchers, at different times, to almost five significant digits. A bottled seawater product known as IAPSO Standard Seawater is used by oceanographers to standardize their measurements with enough precision to meet this requirement.
Measurement and definition difficulties arise because natural waters contain a complex mixture of many different elements from different sources (not all from dissolved salts) in different molecular forms. The chemical properties of some of these forms depend on temperature and pressure. Many of these forms are difficult to measure with high accuracy, and in any case complete chemical analysis is not practical when analyzing multiple samples. Different practical definitions of salinity result from different attempts to account for these problems, to different levels of precision, while still remaining reasonably easy to use.
For practical reasons salinity is usually related to the sum of masses of a subset of these dissolved chemical constituents (so-called solution salinity), rather than to the unknown mass of salts that gave rise to this composition (an exception is when artificial seawater is created). For many purposes this sum can be limited to a set of eight major ions in natural waters, although for seawater at highest precision an additional seven minor ions are also included. The major ions dominate the inorganic composition of most (but by no means all) natural waters. Exceptions include some pit lakes and waters from some hydrothermal springs.
The concentrations of dissolved gases like oxygen and nitrogen are not usually included in descriptions of salinity. However, carbon dioxide gas, which when dissolved is partially converted into carbonates and bicarbonates, is often included. Silicon in the form of silicic acid, which usually appears as a neutral molecule in the pH range of most natural waters, may also be included for some purposes (e.g., when salinity/density relationships are being investigated).
The term 'salinity' is, for oceanographers, usually associated with one of a set of specific measurement techniques. As the dominant techniques evolve, so do different descriptions of salinity. The distinctions between these different descriptions are important to physical oceanographers but are obscure and confusing to nonspecialists.
Salinities were largely measured using titration-based techniques before the 1980s. Titration with silver nitrate could be used to determine the concentration of halide ions (mainly chloride and bromide) to give a chlorinity. The chlorinity was then multiplied by a factor to account for all other constituents. The resulting 'Knudsen salinities' are expressed in units of parts per thousand (ppt or ‰).
The use of electrical conductivity measurements to estimate the ionic content of seawater led to the development of the scale called the practical salinity scale 1978 (PSS-78). Salinities measured using PSS-78 do not have units. The suffix psu or PSU (denoting practical salinity unit) is sometimes added to PSS-78 measurement values.
In 2010 a new standard for the properties of seawater called the thermodynamic equation of seawater 2010 (TEOS-10) was introduced, advocating absolute salinity as a replacement for practical salinity, and conservative temperature as a replacement for potential temperature. This standard includes a new scale called the reference composition salinity scale. Absolute salinities on this scale are expressed as a mass fraction, in grams per kilogram of solution. Salinities on this scale are determined by combining electrical conductivity measurements with other information that can account for regional changes in the composition of seawater. They can also be determined by making direct density measurements.
A sample of seawater from most locations with a chlorinity of 19.37 ppt will have a Knudsen salinity of 35.00 ppt, a PSS-78 practical salinity of about 35.0, and a TEOS-10 absolute salinity of about 35.2 g/kg. The electrical conductivity of this water at a temperature of 15 °C is 42.9 mS/cm.
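The chlorinity-to-salinity conversion in this example follows the classical Knudsen relation, S(‰) = 0.030 + 1.8050 × Cl(‰). A quick check against the numbers in the paragraph:

```python
def knudsen_salinity(chlorinity_ppt: float) -> float:
    # Knudsen's empirical chlorinity-salinity relation, in parts per thousand
    return 0.030 + 1.8050 * chlorinity_ppt

print(round(knudsen_salinity(19.37), 2))  # → 34.99, i.e. the ~35.00 ppt quoted above
```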
Lakes and rivers
Limnologists and chemists often define salinity in terms of mass of salt per unit volume, expressed in units of mg per litre or g per litre. It is implied, although often not stated, that this value applies accurately only at some reference temperature. Values presented in this way are typically accurate to the order of 1%. Limnologists also use electrical conductivity, or "reference conductivity", as a proxy for salinity. This measurement may be corrected for temperature effects, and is usually expressed in units of μS/cm.
A river or lake water with a salinity of around 70 mg/L will typically have a specific conductivity at 25 °C of between 80 and 130 μS/cm. The actual ratio depends on the ions present. The actual conductivity usually changes by about 2% per degree Celsius, so the measured conductivity at 5 °C might only be in the range of 50–80 μS/cm.
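The 2%-per-degree rule of thumb amounts to a simple linear temperature compensation toward the 25 °C reference. A sketch (the coefficient is the approximate one from the text; real instruments use more refined corrections):

```python
def conductivity_at_25C(measured_uS_cm: float, temp_C: float,
                        coeff_per_degC: float = 0.02) -> float:
    # Linear compensation: measured = reference * (1 + coeff * (t - 25))
    return measured_uS_cm / (1 + coeff_per_degC * (temp_C - 25.0))

# A reading of 60 uS/cm taken at 5 degC corresponds to about 100 uS/cm at
# 25 degC, consistent with the ranges quoted above:
print(conductivity_at_25C(60.0, 5.0))  # → 100.0
```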
Direct density measurements are also used to estimate salinities, particularly in highly saline lakes. Sometimes density at a specific temperature is used as a proxy for salinity. At other times an empirical salinity/density relationship developed for a particular body of water is used to estimate the salinity of samples from a measured density.
|Fresh water|Brackish water|Saline water|Brine|
|< 0.05%|0.05–3%|3–5%|> 5%|
|< 0.5 ‰|0.5–30 ‰|30–50 ‰|> 50 ‰|
Systems of classification of water bodies based upon salinity
Marine waters are those of the ocean, another term for which is euhaline seas. The salinity of euhaline seas is 30 to 35. Brackish seas or waters have salinity in the range of 0.5 to 29 and metahaline seas from 36 to 40. These waters are all regarded as thalassic because their salinity is derived from the ocean and defined as homoiohaline if salinity does not vary much over time (essentially constant). The table above, modified from Por (1972), follows the "Venice system" (1959).
In contrast to homoiohaline environments are certain poikilohaline environments (which may also be thalassic) in which the salinity variation is biologically significant. Poikilohaline water salinities may range anywhere from 0.5 to greater than 300. The important characteristic is that these waters tend to vary in salinity over some biologically meaningful range seasonally or on some other roughly comparable time scale. Put simply, these are bodies of water with quite variable salinity.
Highly saline water, from which salts crystallize (or are about to), is referred to as brine.
Salinity is an ecological factor of considerable importance, influencing the types of organisms that live in a body of water. As well, salinity influences the kinds of plants that will grow either in a water body, or on land fed by a water (or by a groundwater). A plant adapted to saline conditions is called a halophyte. A halophyte which is tolerant to residual sodium carbonate salinity are called glasswort or saltwort or barilla plants. Organisms (mostly bacteria) that can live in very salty conditions are classified as extremophiles, or halophiles specifically. An organism that can withstand a wide range of salinities is euryhaline.
Salt is expensive to remove from water, and salt content is an important factor in water use (such as potability).
The degree of salinity in oceans is a driver of the world's ocean circulation, where density changes due to both salinity changes and temperature changes at the surface of the ocean produce changes in buoyancy, which cause the sinking and rising of water masses. Changes in the salinity of the oceans are thought to contribute to global changes in carbon dioxide as more saline waters are less soluble to carbon dioxide. In addition, during glacial periods, the hydrography is such that a possible cause of reduced circulation is the production of stratified oceans. Hence it is difficult in this case to subduct water through the thermohaline circulation.
- World Ocean Atlas 2009. nodc.noaa.gov
- Pawlowicz, R. (2013). "Key Physical Variables in the Ocean: Temperature, Salinity, and Density". Nature Education Knowledge. 4 (4): 13.
- Eilers, J. M.; Sullivan, T. J.; Hurley, K. C. (1990). "The most dilute lake in the world?". Hydrobiologica. 199: 1–6. doi:10.1007/BF00007827.
- Anati, D. A. (1999). "The salinity of hypersaline brines: concepts and misconceptions". Int. J. Salt Lake. Res. 8: 55–70. doi:10.1007/bf02442137.
- IOC, SCOR, and IAPSO (2010). The international thermodynamic equation of seawater – 2010: Calculation and use of thermodynamic properties. Intergovernmental Oceanographic Commission, UNESCO (English). pp. 196pp.
- Wetzel, R. G. (2001). Limnology: Lake and River Ecosystems, 3rd ed. Academic Press. ISBN 978-0-12-744760-5.
- Pawlowicz, R.; Feistel, R. (2012). "Limnological applications of the Thermodynamic Equation of Seawater 2010 (TEOS-10)". Limnology and Oceanography: Methods. 10 (11): 853–867. doi:10.4319/lom.2012.10.853.
- Unesco (1981). The Practical Salinity Scale 1978 and the International Equation of State of Seawater 1980. Tech. Pap. Mar. Sci., 36
- Unesco (1981). Background papers and supporting data on the Practical Salinity Scale 1978. Tech. Pap. Mar. Sci., 37
- Millero, F. J. (1993). "What is PSU?". Oceanography. 6 (3): 67.
- Culkin, F.; Smith, N. D. (1980). "Determination of the Concentration of Potassium Chloride Solution Having the Same Electrical Conductivity, at 15C and Infinite Frequency, as Standard Seawater of Salinity 35.0000‰ (Chlorinity 19.37394‰)". IEEE J. Oceanic Eng. OE–5 (1): 22–23. doi:10.1109/JOE.1980.1145443.
- van Niekerk, Harold; Silberbauer, Michael; Maluleke, Mmaphefo (2014). "Geographical differences in the relationship between total dissolved solids and electrical conductivity in South African rivers". Water SA. 40 (1): 133. doi:10.4314/wsa.v40i1.16.
- Por, F. D. (1972). "Hydrobiological notes on the high-salinity waters of the Sinai Peninsula". Marine Biology. 14 (2): 111. doi:10.1007/BF00373210.
- Venice system (1959). The final resolution of the symposium on the classification of brackish waters. Archo Oceanogr. Limnol., 11 (suppl): 243–248.
- Dahl, E. (1956). "Ecological salinity boundaries in poikilohaline waters". Oikos. Oikos. 7 (1): 1–21. JSTOR 3564981. doi:10.2307/3564981.
- Kalcic, Maria; Turowski, Mark; Hall, Callie. "Stennis Space Center Salinity Drifter Project. A Collaborative Project with Hancock High School, Kiln, MS". Stennis Space Center Salinity Drifter Project. NTRS. Retrieved 2011-06-16.
- Mantyla, A.W. 1987. Standard Seawater Comparisons updated. J. Phys. Ocean., 17: 543–548.
- MIT page of seawater properties, with Matlab, EES and Excel VBA library routines
- Equations and algorithms to calculate fundamental properties of sea water.
- History of the salinity determination
- Practical Salinity Scale 1978.
- Salinity calculator
- Lewis, E. L. 1982. The practical salinity scale of 1978 and its antecedents. Marine Geodesy. 5(4):350–357.
- Equations and algorithms to calculate salinity of inland waters |
A Hydrogen-Fueled Car
The use of hydrogen as a fuel in motor vehicles offers several advantages over traditional fossil fuels:
- There exists an unlimited supply of hydrogen -- hydrogen is the most abundant element in the universe and the tenth most abundant element on Earth.
- Hydrogen is renewable -- When hydrogen reacts with oxygen, the by-product is water (H2O), which can then be electrolyzed (split back into its component elements) to yield more hydrogen.
- Hydrogen is clean-burning -- Unlike the burning of fossil fuels, hydrogen combustion does not produce any destructive environmental pollutants.
- Hydrogen weighs less and generates more power than hydrocarbon-based fuels.
- Hydrogen burns faster (and at a lower temperature) than conventional gasoline.
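The weight and power claim in the list can be made concrete with typical lower-heating-value figures: roughly 120 MJ/kg for hydrogen versus about 44 MJ/kg for gasoline. These round numbers are textbook values, not figures from the article:

```python
# Typical specific energy (lower heating value), in MJ per kg:
LHV_MJ_PER_KG = {"hydrogen": 120.0, "gasoline": 44.0}

ratio = LHV_MJ_PER_KG["hydrogen"] / LHV_MJ_PER_KG["gasoline"]
print(f"Per kilogram, hydrogen carries roughly {ratio:.1f}x the energy of gasoline")
```

The flip side, which the article does not dwell on, is hydrogen's very low density: per litre of fuel the comparison reverses, which is why storage is such a challenge.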
But carmakers and the general public have yet to declare hydrogen power safe for consumer use. To learn more about the use of hydrogen as a fuel source and why you still can't buy a hydrogen car, see How the Hydrogen Economy Works.
In addition to running on hydrogen instead of fossil fuels, the internal components of the H2R's engine are unique in two significant ways: the hydrogen-injection valve and the materials used for the combustion chambers. In the H2R, the injection valves have been integrated into the intake manifolds, as opposed to injecting fuel directly into the combustion chambers.
Liquid hydrogen does not lubricate the way gasoline does, so the H2R uses altered valve seat rings that compensate for this. To maximize power and efficiency, hydrogen is injected into the intake manifold as late as possible, so the injection valves have been redesigned, as well.
In the next section, we'll look at the H2R fuel tank and how it gets refilled. |
Introduce the geology of Wulong Karst, Chongqing, China, followed by these questions:
What is the geology of the area? How and when did the land area form? How has the geology of the area shaped the communities of the region? Be specific. You should write as though a member of your family or a friend is the target audience; don't use technical terms or jargon without clarifying them. Your paper needn't be comprehensive; feel free to focus on fewer, more interesting aspects rather than trying to include every geological detail. 5 pages. |
Best Activities to Playfully Encourage Pre-Writing Skills
Handwriting is a skill that is developed over time. To master handwriting, children need to combine fine motor skills, language, memory and concentration.
Pre-writing skills are fundamental in developing the ability to write. Acquiring these skills will contribute to your child’s ability to hold and use a pencil, write, draw and colour.
One of the best ways to encourage these skills is, of course, through PLAY!
When a child uses their hands while playing it is actually helping to develop strength and control of their hand muscles (fingers and wrists), fine motor skills and hand/eye coordination – skills which are needed for more complex tasks like writing with a pencil or using a keyboard.
So what are the best play activities we can do with our little ones?
We’ve listed some fun activities below that will encourage the development of pre-writing skills:
- Manipulation activities: playdough and slime are some of the best activities to do with little ones. Practice rolling playdough into “sausages” or little balls or patting into flat pancakes. Add sticks, popsticks or macaroni to be pushed into the dough.
- Sensory play: playing with rice, painting or getting stuck into messy play is great for developing tactile awareness
- Mark Making: provide lots of opportunities using a variety of drawing and writing implements such as pencils, crayons, chalk, paint brushes, scribbling and drawing etc. Also simple things like encouraging children to make their mark on birthday cards is great.
- Playfully practice drawing different lines and shapes together. Model where to start e.g. vertical lines start at the top, go down, and stop; horizontal lines – start at the left and go to the right; circles – start at the top and go around until it meets back at the top. Practice playing with these shapes makes it easier for children to transition into writing the letters of the alphabet.
- Threading: pasta onto pipe cleaners, beads onto shoe laces etc.
- Cutting: provide child-safe scissors and lots of old magazines, when the cutting skill is more developed, play games such as cutting pictures in half, cutting along lines or cutting something into smaller pieces. (Don’t forget to supervise your little ones with scissors as you might have a few things cut that you didn’t want to!)
- Gluing: use brushes (bigger ones for smaller children and smaller ones for older children) to glue shapes such as pasta, pictures or beads
- Rice play: is a great way for children to playfully develop the fine-motor skills required for writing i.e. picking up tiny grains, passing handfuls from hand to hand, or scooping and pouring.
- Puzzles: completing visual puzzles and matching games helps develop essential visual perceptual skills that are required for reading, handwriting and other important daily activities later on.
- Musical instruments: grasping things to shake or bang encourages your child’s fine motor skills and coordination.
Remember, the more your child regularly plays the more they are learning!
If you enjoyed this, then you may want to read our tips on Encouraging Oral Language Development and Encouraging Early Literacy.
Zero To Three
Raising Children Network |
The TV show Cheers was set in a bar "where everybody knows your name." Global knowledge of a name is appealing for a neighborhood pub, but not for a programming language. Most programming languages enable you to define functions that have local variables: variables whose names are known only inside the function. This article describes local and global variables in the SAS/IML language.
Scope: the environment where everybody knows your name
One of the features of the SAS/IML language is that you can create your own user-defined functions or subroutines that extend the capabilities of SAS. For brevity, this article discusses functions, but the same ideas apply to user-defined subroutines.
In computer science, the scope of a variable is the "context... in which a variable name... is valid and can be used." A variable inside a function usually has local scope, which means that the variable’s name is known inside the function, but not outside. Furthermore, modifying a local variable does not affect any variable outside the function. For example, the following SAS/IML program defines a variable y inside a function. The local variable is created when the function executes and vanishes when the function exits. Although a variable outside the function is also named y, the outer variable is not affected by running the function:
proc iml;
start F1(x);                     /* a function with local variables */
   y = 2*x;                      /* y is local */
   print y[label="y inside function (local)"];
   return(1);
finish;

y = 0;
t = 1:5;
v = F1(t);
print y[label="y outside function"];
The scope of the variables is shown in the following diagram. Three variables are known to the main program: y, t, and v. Inside the function, two names are known: x and y. The local variable named y is not related to the variable y in the main scope. They have the same name, but their scope is different.
There are two ways to enable a variable inside a function to affect variables outside the function.
- Because the SAS/IML language passes arguments by reference, a function that changes the value of an argument also affects the corresponding matrix that was passed into the module.
- You can use the GLOBAL clause of the START statement to specify variables whose names are known at main scope.
Sometimes SAS/IML users ask if there is a third alternative. Programmers sometimes ask whether it is possible to create a variable that is shared between several functions, but is not global to the entire program. The answer is no. The SAS/IML language does not support a namespace or variables that are global to a namespace.
Parent variables: sharing the memory, but not the name
Because the SAS/IML language passes values by reference, modifying one of the function's arguments changes the value of the matrix that was passed in. The following example illustrates this:
start F2(x);                     /* a function that modifies its argument */
   x = 2*x;                      /* x is an argument */
   print x[label="x inside module (argument)"];
   return(2);
finish;

y = 0;
t = 1:5;
v = F2(t);
print t[label="t at main scope"];
Notice that the variable t at main scope has a different name from the parameter x inside the module, but both variables share the same memory. Because x is an argument, changing x inside the function also changes t. See my previous article for more details about passing by reference. The behavior of the F2 function is summarized in the following diagram.
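The same aliasing behavior appears in any language that passes mutable objects by reference. A minimal Python analogue of the F2 example (the names mirror the SAS/IML code and are otherwise arbitrary):

```python
def f2(x):
    # In-place mutation is visible to the caller, because x and t
    # refer to the same list object (like SAS/IML's pass-by-reference).
    for i in range(len(x)):
        x[i] = 2 * x[i]
    return 2

t = [1, 2, 3, 4, 5]
v = f2(t)
print(t)  # → [2, 4, 6, 8, 10]: the caller's t was modified through x
```

Rebinding the parameter instead (x = [2 * v for v in x]) would leave t untouched, which is the Python counterpart of a local variable.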
Global variables: sharing the memory and the name
A variable has global scope if you include it in the GLOBAL clause of the START statement. A variable that has global scope can be read or modified inside the module. It corresponds to a variable of the same name that exists at main scope. The following example illustrates this:
start F3(x) GLOBAL(y);           /* a function that has a global variable */
   y = 2*x;                      /* y is global */
   print y[label="y inside module (global)"];
   return(3);
finish;

y = 0;
t = 1:5;
v = F3(t);
print y[label="y at main scope"];
In this example, the y vector is changed from within the F3 function because y is declared to be a global variable. The following diagram illustrates the behavior of the F3 function.
The role of global variables in SAS/IML programs
Although global variables are discouraged in computer science courses, they serve an important purpose in SAS/IML programming. Namely, when you write an optimization program in the SAS/IML language, the function that is optimized (called the objective function) must contain only one argument. The argument vector is modified until the objective function reaches an optimal value. Any parameters to the objective function must be specified as global variables. The global variables are parameters that are held constant during the optimization process. |
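In this optimization role, global variables hold parameters fixed while a one-argument objective is tuned. Other languages often express the same pattern with closures; a hypothetical Python sketch:

```python
def make_objective(a, b):
    # a and b play the role of GLOBAL parameters: fixed while x varies
    def objective(x):
        return (x - a) ** 2 + b
    return objective

f = make_objective(3.0, 1.0)     # parameters held constant during optimization
# a crude grid search stands in for a real optimizer:
best_value, best_x = min((f(x / 100), x / 100) for x in range(-1000, 1001))
print(best_value, best_x)  # → 1.0 3.0
```

SAS/IML's optimization routines require the GLOBAL clause for this because the objective module must have exactly one argument; the closure above is simply the equivalent idiom in a language that supports nested functions.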
Climate change will impact directly and, through land-use change, indirectly on a wide range of soil processes and properties that will determine the future ability of land to fulfill key functions that are important for all terrestrial ecosystems, as well as several socioeconomic activities that underpin the well-being of society. The following subsections summarize key effects. Further information is summarized in the section on soils in the European ACACIA report (Rounsevell and Imeson, 2000).
Soil water contents respond rapidly to variability in the amounts and distribution of precipitation or the addition of irrigation. Temperature changes affect soil water by influencing evapotranspiration, and plant water use is further influenced by elevated CO2 concentrations, leading to lower stomatal conductance and increased leaf photosynthetic rates (Kirschbaum et al., 1996). Soil water contents are highly variable in space (Rounsevell et al., 1999), so it is difficult to generalize about specific climate impacts.
Climate change can be expected to modify soil structure through the physical processes of shrink-swell (caused by wetting and drying) and freeze-thaw, as well as through changes in soil organic matter (SOM) contents (Carter and Stewart, 1996). Compaction of soils results from inappropriate timing of tillage operations during periods when the soil is too wet to be workable. Soil workability has a strong influence on the distribution and management of arable crops in temperate parts of Europe (Rounsevell, 1993; Rounsevell and Jones, 1993). Therefore, wet areas with heavy soils could benefit from climate change (MacDonald et al., 1994; Rounsevell and Brignall, 1994). In a similar way, grassland systems can suffer poaching by grazing livestock (i.e., damage caused by animal hooves) (Harrod, 1979). Thus, drier soil conditions for longer periods of the year would affect the distribution of intensive agricultural grassland in temperate Europe (Rounsevell et al., 1996a) and may result in intensification of currently wet upland grazing areas.
Soils with large clay contents shrink as they dry and swell when they become wet again, forming large cracks and fissures. Drier climatic conditions will increase the frequency and size of crack formation in soils, especially those in temperate regions of Europe, which currently do not reach their full shrinkage potential (Climate Change Impacts Review Group, 1991, 1996). Soils that shrink and swell cause damage to building foundations through subsidence, creating a problem for householders and the housing insurance industry (Building Research Establishment, 1990). Crack formation also results in more rapid and direct movement of water and solutes from surface soil to permeable substrata or drainage installations through bypass flow (Armstrong et al., 1994; Flurry et al., 1994). This will decrease the filtering function of soil and increase the possibility of nutrient losses and water pollution (Rounsevell et al., 1999).
Table 13-2: Key impacts of climate change on European water resources.

| Sector | Key impacts | References |
| --- | --- | --- |
| Public water supply | Reduction in reliability of direct river abstractions; change in reservoir reliability (dependent on seasonal change in flows); reduction in reliability of water distribution network | Kaczmarek et al. (1996), Dvorak et al. (1997) |
| Demand for public water supplies | Increasing domestic demand for washing and out-of-house use | Herrington (1996) |
| Water for irrigation | Increasing demand; reduced availability of summer water; reduced reliability of reservoir systems | Kos (1993), Alexandrov (1998) |
| Power generation | Change in hydropower potential through the year; altered potential for run-of-river power; reduced availability of cooling water in summer | Grabs (1997), Sælthun et al. (1998) |
| Navigation | Change (reduction?) in navigation opportunities along major rivers | Grabs (1997) |
| Pollution risk and control | Increased risk of pollution as a result of altered sensitivity of river system | Mänder and Kull (1998) |
| Flood risk | Increased risk of loss and damage; increased urban flooding from overflow of storm drains | Grabs (1997) |
| Environmental impacts | Change in river and wetland habitats | |
Accumulation of salts in soils (salinization) results from capillary movement and dispersion of saline water because evapotranspiration is greater than precipitation and irrigation (Vàrallyay, 1994). Such conditions, which are widespread throughout the warmer and drier regions of southern Europe, will be exacerbated by temperature rise coupled with reduced rainfall. Climate change also will increase flood incidence and salinity along coastal regions, through the influence of sea-level rise (Nicholls, 2000).
A decrease in precipitation and/or increase in temperature increases oxidation and loss of volume in lowland peat soils that are used for agriculture. It has been suggested that under climate change, the volume of peats in agricultural use will shrink by 40% (Kuntze, 1993). Some peat soils in western Europe are associated with acid sulfate conditions (Dent, 1986); strong acidity largely precludes agricultural use (Beek et al., 1980). Soil acidification also can result from depletion of basic cations through leaching (Brinkman, 1990) where the soil is well drained and structurally stable and experiences high rainfall amounts and intensity, as in many upland areas of Europe. In a wetter climate, soil acidification could increase if buffering pools become exhausted, although for most soils this will take a very long time.
Climate change is likely to increase wind and water erosion rates (Rosenberg and Tutwiler, 1988; Dregne, 1990; Botterweg, 1994), especially where the frequency and intensity of precipitation events grows (Phillips et al., 1993). Erosion rates also will be affected by climate-induced changes in land use (Boardman et al., 1990) and soil organic carbon contents (Bullock et al., 1996). Relatively small changes in climate may push many Mediterranean areas into a more arid and eroded landscape (Lavee et al., 1998) featuring decreases in organic matter content, aggregate size, and stability and increases in sodium adsorption ratio and runoff coefficient. However, increased erosion in response to climate change cannot be assumed for all parts of Europe. For example, in upland grazed areas, erosion rates will be reduced as a result of better soil surface cover and topsoil stability arising from higher temperatures that extend the duration of the growing season and reduce the number of frosts (Boardman et al., 1990).
Climate change will impact directly on SOM through temperature and precipitation (Tinker and Ineson, 1990; Cole et al., 1993; Pregitzer and Atkinson, 1993) and indirectly (and possibly more importantly) through changing land use (e.g., Hall and Scurlock, 1991). SOM contents increase with soil water content and decrease with temperature (Post et al., 1982, 1985; Robinson et al., 1995), although rates of decomposition vary widely between different soil carbon pools (van Veen and Paul, 1981; Parton et al., 1987; Jenkinson, 1990). Changes in SOM contents depend on the balance between carbon inputs from vegetation and carbon losses through decomposition (Lloyd and Taylor, 1994); most SOM is respired by soil organisms within a few years. Net primary productivity (NPP) usually increases with increasing temperature and elevated atmospheric CO2, leading to greater returns of carbon to soils (Loiseau et al., 1994). However, increasing temperature strongly stimulates decomposition (Berg et al., 1993; Lloyd and Taylor, 1994; Kirschbaum, 1995) at rates that are likely to outstrip NPP and lead to reduced SOM contents (Kirschbaum et al., 1996). This effect will be strongest in cooler regions of Europe, where decomposition rates currently are slow (Jenny, 1980; Post et al., 1982; Kirschbaum, 1995). Conversely, excess soil water resulting from increased precipitation will reduce decomposition rates (Kirschbaum, 1995) and thus increase SOM contents.
Plant growth and soil water use are strongly influenced by the availability of nutrients. Where climatic conditions are favorable for plant growth, the shortage of soil nutrients will have a more pronounced effect (Shaver et al., 1992). Increased plant growth in a CO2-enriched atmosphere may rapidly deplete soil nutrients; consequently, the positive effects of CO2 increase may not persist as soil fertility decreases (Bhattacharya and Geyer, 1993). Increased SOM turnover rates over the long term are likely to cause a decline in soil organic nitrogen in temperate European arable systems (Bradbury and Powlson, 1994), although, in the short term, increased returns of carbon to soils would maintain soil organic nitrogen contents (Pregitzer and Atkinson, 1993; Bradbury and Powlson, 1994). Greater mineralization may cause an increase in nitrogen losses from the soil profile (e.g., Kolb and Rehfuess, 1997; Lukewille and Wright, 1997), although there is evidence to suggest that temperature-driven, increased nitrogen uptake by vegetation may reduce these losses (Ineson et al., 1998).
There is great uncertainty surrounding the response of soil community function to global change and the potential effects of these responses at the ecosystem level (Smith et al., 1998). Most soil biota have relatively large temperature optima and therefore are unlikely to be adversely affected by climate change (Tinker and Ineson, 1990), although some evidence exists to support changes in the balance between soil functional types (Swift et al., 1998). Soil organisms will be affected by elevated atmospheric CO2 concentrations where this changes litter supply to and fine roots in soils, as well as by changes in the soil moisture regime (Rounsevell et al., 1996b). Furthermore, the distribution of individual species of soil biota will be affected by climate change where species are associated with specific vegetation and are unable to adapt at the rate of land-cover change (Kirschbaum et al., 1996).
A useful way to enable children to show their understanding in science is to use Clicker Talk Sets. A set to look at on LearningGrids is Reversible and Irreversible Changes – Talk, which looks at six materials and how they might be changed. Students consider whether the changes shown are permanent, or whether the materials can be returned to their original states.
The activity is open-ended – some children might describe what they can see in the pictures while others might give detailed interpretations of what has happened. Some may recognize, for example, that the candle represents both a reversible and an irreversible change. As such, the activity is suitable for a range of abilities.
Having recorded their ideas, children could use the set as a prompt for writing.
For other Talk Sets, use the quick search on LearningGrids.
To view all of the science sets available on LearningGrids, click on Science in the Categories section.
One part of the challenge of distributed energy grids is the ability to store excess energy. For example, solar panels can only produce energy during the daytime, but where does the energy come from at night? We have to either use other energy sources at night or discharge some form of battery.
All forms of stored energy are either potential energy (e.g. heated water) or kinetic energy (e.g. a spinning flywheel). Batteries are a way to store potential energy. Unfortunately, common batteries do not have very high energy density. Energy density is the amount of energy that can be stored in a given volume or mass.
Here is a list of energy densities of common energy storage materials:
- Hydrogen: 33.3 kWh/kg
- Natural gas: 13.9 kWh/kg
- Fuel: 12.7 kWh/kg
- Lithium-ion battery: 0.22 kWh/kg
- Lead-acid battery: 0.05 kWh/kg
To give you a more practical example of what this list means: to light a 100W bulb for an hour (0.1 kWh), you would roughly need:
- 3g (0.1oz) Hydrogen
- 7g (0.25oz) Natural gas
- 7g (0.25oz) Fuel
- 0.45kg (1 pound) Lithium-ion battery
- 2kg (4 pounds) Lead-acid battery
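As a sanity check on these figures (my addition, not the author's), the short Python sketch below divides 0.1 kWh, the energy a 100W bulb uses in one hour, by each energy density; the results round to roughly the masses listed above:

```python
# Mass of each storage medium needed to deliver 0.1 kWh
# (a 100 W bulb running for one hour).
densities_kwh_per_kg = {
    "hydrogen": 33.3,
    "natural gas": 13.9,
    "fuel": 12.7,
    "lithium-ion battery": 0.22,
    "lead-acid battery": 0.05,
}

energy_kwh = 0.1  # 100 W * 1 h
for name, density in densities_kwh_per_kg.items():
    mass_g = energy_kwh / density * 1000
    print(f"{name}: {mass_g:.0f} g")
```

Running this reproduces the list to within rounding (hydrogen about 3 g, lead-acid about 2000 g), which is why the mass ratios between hydrogen and batteries are so dramatic.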
It seems obvious that hydrogen is the appropriate way to overcome the energy storage problem. There are two reasons:
- hydrogen is available everywhere (it can be made from water by electrolysis)
- converting hydrogen back to energy produces only water and heat as byproducts
There is some momentum building in the battery market: Apple has already announced plans for a hydrogen fuel-cell battery, and electronics giant Samsung is also putting effort into bringing this technology to market. I have high hopes that the auto industry will be one of the driving forces maturing this technology. We need hydrogen batteries to build the energy grid of the future.
Native to northern climates in Asia, crab apple trees are adapted to and thrive in cold weather, and they require only full sun and well-drained soil to grow well. Although these trees require some water, they can survive with little water during dry weather. Depending on their size, they can reach up to 25 feet tall, and they produce showy orange or pink blossoms.
Because crab apple trees can grow up to 25 feet tall, plant them in a location where they have room to spread out and grow as well as have access to full sun. Since they thrive in well-drained soil, amend less than ideal soil with sand or compost to increase the drainage. Additionally, using compost adds nutrients to the soil. If the tree doesn't grow more than 5 inches per year, dress the soil with organic matter or a high-nitrogen fertilizer to increase soil richness.
After planting the tree, add 2 inches of mulch, and give it 1 inch of water each week until it's established. After it matures, it needs watering only during drought conditions. Regularly snip off any watersprouts and suckers, and remove any damaged, diseased or dead limbs. Keep an eye out for apple scab or frog-eye leaf spot, which are two fungal diseases common to crab apple trees.
United States Geological Survey
Established as part of the Department of the Interior in 1879 and funded by Congress, the U.S. Geological Survey (USGS) provides support to federal agencies (e.g., the Environmental Protection Agency or EPA, the National Oceanographic and Atmospheric Administration or NOAA, and the U.S. Coast Guard) in the form of useful information for decision-making purposes concerning the management of U.S. environmental and natural resources. As part of this support, the USGS examines the relationship between humans and the environment by conducting data collection, long-term research assessments, and ecosystem analyses, and providing forecast changes and their implications. One example of this support is the provision of information about earthquake and seismic activities that is used to assess the potential impact of such activities on water quality. In addition to its federal agency support, the USGS also manages some of the following programs that address the problems of environmental pollution: (1) coastal and marine geology program; (2) contaminants program; (3) energy program; (4) fisheries and aquatic resources; and (5) global change/wetland ecology program. These external support activities and internal programs have been similarly adopted by countries such as Australia, Britain, Finland, and Japan, although not to the same degree as provided by the USGS.
National Research Council, Committee on Geosciences, Environment and Resources. (2001). Future Roles and Opportunities for the U.S. Geological Survey. Washington, D.C.: National Academy Press.
Coastal and Marine Geology Program site. Available from http://marine.usgs.gov.
Robert F. Gruenig
Geological Survey, United States
United States Geological Survey, bureau organized in 1879 under the Dept. of the Interior to unify and centralize the work already undertaken by separate surveys under Clarence King, F. V. Hayden, George W. Wheeler, and J. W. Powell. The functions of the bureau cover the exploration of the country to gather information as to geological structure; the preparation of geological and topographical maps of all parts of the country; the examination and assessment of natural resources; the study of problems of irrigation and water power; the classification of public lands; the investigation of natural disasters; the monitoring of global environment change, and the annual publication of papers, bulletins, and maps based upon surveys made. In 1962 the bureau was authorized to conduct surveys outside the public domain. The Geological Survey is also responsible for directing the National Geologic Mapping Program, using the most sophisticated of cartographic equipment for researching and compiling data.
Don’t want to explain your vomiting and diarrhoea to your school when you call in sick? Nobody does. People make jokes when you have gastro – it’s the perfect ‘sickie’ excuse.
But for those suffering, it’s far from funny – especially when you throw in fever, headache, abdominal cramps and muscle aches. And the resulting dehydration can be serious – or even fatal – for young children and the elderly.
Gastroenteritis is always present in communities, and with careful hygiene, you can avoid an outbreak in your school community and workplace.
The outbreaks are mostly caused by infection with viruses, usually norovirus or rotavirus.
If a child vomits or has a bit of diarrhoea and they have not completely washed their hands, the virus can be transmitted to someone else’s mouth. This could be through person to person contact, such as shaking hands or by touching contaminated objects, like a door handle.
But extra vigilance is required to stop noroviruses because they can spread through the air in droplets, causing much wider contamination of surfaces. The airborne virus may also reach the gut by being inhaled into the back of the throat.
It’s not a classic aerosol spread like you see with respiratory viruses. You probably need to be in quite close proximity like a parent or teacher cleaning up a child’s diarrhoea or vomit. Wearing a mask may be helpful in these circumstances.
Stop the spread
Wash hands thoroughly with soap and running water before handling and eating food, and always wash your hands after using the toilet. (Tip: suggest children sing the entire “Happy Birthday” song before they stop handwashing.)
It is vital that if you or your family contract gastroenteritis, you stay home from work or keep a child home from school if they are sick.
You should also avoid visiting settings where people are especially vulnerable such as hospitals or aged care facilities.
The symptoms can take between one and three days to develop and usually last another one or two days, sometimes longer.
In most cases, spread occurs from a person who has symptoms. But some people can pass on the infection without symptoms, particularly in the first 48 hours after recovery, or in the window between them becoming infected and developing symptoms.
If you are a parent with a young child in a childcare centre where other children have had gastro, be proactive. Be extra careful with your own child that they’re not sharing spoons and food with others because they may well be harbouring the virus.
Outbreaks occur every two to three years when viruses mutate in ways that make them better able to infect people.
The good news is if you’ve been laid low this year, you might have a breather the next few years.
If your school or workplace has a gastro outbreak call iClean on 1300 763 356 to disinfect, clean and freshen up your office or classroom. iClean are experienced in ensuring your learning and working environment is clinically clean which ensures staff and children are safe to return.
In 1764 Northampton there were at least seven people held in slavery by prominent English colonists. Those enslaved, along with three other people, were listed as being “negroes” in that year’s King George III’s Census, according to historian James Trumbull in his two volume History of Northampton (published 1898 and 1902.) The names of the enslavers, but not the enslaved, were included. It is likely that they were from, or descendants of those from, West Africa, abducted, sold, and exported as merchandise to be resold in the British Colonies.
Three sentences about these ten Black people are the only acknowledgement in Trumbull’s 1300 page History of Northampton that the settlement practiced slavery. Aside from a few additional notes on two of these people, James Trumbull and subsequent town historians omitted mentioning this past as the institution of slavery became unpopular in this region. Robert Romer found this practice of historical erasure common in most of the local area, calling it deliberate amnesia in his history, Slavery in the Connecticut Valley of Massachusetts (2009). It would be another one hundred years before historians began to rectify this lapse in memory.
The scope of this deliberate amnesia became most readily apparent to me when, this past year, I turned to Massachusetts: a Concise History by Richard D. Brown and Jack Tager. The revised edition was published in 2000 by the University of Massachusetts Press, Amherst. The history includes no mention of slavery until the narrative reaches the 1800s and the developing slavery abolition movement. The false impression given is that slavery was never practiced in Massachusetts. Many, now startling to me, facts about this past have been deliberately left out of historical accounts and, thus, general education.
Of the thirteen British colonies, Massachusetts Bay was the first to explicitly legalize slavery. In 1641, the General Court added an article to the Body of Liberties addressing “bond slaverie.”
Article 91 legally formalized the status of enslaved Africans who, as early as 1638, had begun to be imported and sold in Massachusetts. It also formalized the status of Indigenous people who were captured by the colonists, “taken in just wars,” and as “captives” forced to serve in English households or exported for sale as enslaved. [That is another story for later post.] These actions and the English colonists’ rationale are included in a very recently published history of New England slavery, Jared Ross Hardesty’s Black Lives, Native Lands, White Worlds (2019). Published by the University of Massachusetts Press, Amherst, it begins to fill an egregious gap in their sponsored literature.
During the colonial era, numerous additional laws were passed to control those enslaved in Massachusetts: ensuring that the children of slave women were also enslaved, regulating movement and marriage among slaves, and prohibiting black males from having sex with English women. Massachusetts Bay colony residents increasingly bought slaves, “servants for life.” Some enriched themselves as Boston, an Atlantic port, became one of the largest centers of that trade in the enslaved.
As British colonization spread from the coast up the Connecticut River Valley, so too did slavery. The English first planted themselves in Springfield, founded as a trading post in 1636 by William Pynchon. Though he is not known to have enslaved Africans, Pynchon brought with him an indentured servant Peter Swinck. According to Joseph Carvalho’s history Black Families in Hampden County, Massachusetts (2010) Swinck is the first African American known to have lived in Western Massachusetts. He later was indentured to William’s son and successor John Pynchon. The first record of enslavement in WMass is a note dated 1657 that John had paid a man for “bringing up [the Connecticut River] my negroes.” He is also known to have enslaved at least five Africans between 1680 and 1700. Springfield may have been the first Valley center of trading in the enslaved, with recorded sales made in the 1720s within Springfield and up the river valley.
The earliest mention of enslaved people in Northampton by James Trumbull concerns someone from outside of the settlement, related by him in the segment “Burning of William Clarke’s House:”
“On the night of July 14, 1681, Lt. Clarke’s house caught fire with Clarke, his wife, and grandson within. The tradition is that the door was locked from outside before the fire was set, in revenge for some perceived mistreatment inflicted on the arsonist by Clarke. The residents escaped the log house with effort before an explosion of some combustible blew off the roof tree.”
“Jack, a slave run away from his Wethersfield CT owner was later caught and accused of the arson. He confessed to starting the fire, but claimed it began accidentally when he was inside searching for food by the light of a pine torch. Jack was taken to Boston for trial in the Superior Court. A jury found him guilty and sentenced him to be ‘hanged by the neck till he be dead and then taken down and be burnt to ashes in the fire with Maria, the negro’.”
“Maria was under sentence for burning the houses of [her enslaver and his brother-in-law] in Roxbury. She was burned alive [at the stake]. Both of these negroes were slaves. Why the body of Jack was burned is not known.”
In a footnote, Trumbull adds, “Many slaves were burned alive in New York and New Jersey, and in the southern colonies, but few in Massachusetts.”
Wendy Warren, in her history of slavery and colonization in early America, New England Bound (2016), researched this Sep. 22, 1681 execution by burning, which was the first in New England. From the trial transcript, she was able to add some details to Jack’s story. He testified that he “came from Wethersfield and is Run away from Mr. Samuel Wolcott because he always beates him sometimes with 100 blows so that he hath told his master that he would sometime or other hang himself.” Jack told the court he had been on the run for a week and a half before his capture [somewhere in Hampshire County] by a miller he was trying to rob. A Springfield court sentenced Jack to prison, but he escaped. Thirteen days later he set fire to the house in Northampton.
Earlier, in 1652, because of fires set by “Indians and Negroes,” Massachusetts had passed a law making arson a capital crime, punishable by death. Warren posits that arson was perceived as a particular threat after the conflagrations of the [Pequot] War, and the example of two major uprisings–a servants’ rebellion in Virginia and a slaves’ rebellion on Barbados. Warren wonders if the severity of the form of execution came from the English colonists’ generalized fear of conspiracy and a need to terrify any others who might imitate Maria’s actions. Jack’s body being added to her consuming fire may have been a symbolic linking of their state of enslavement and their crimes.
The challenges of controlling the lives of the enslaved didn’t discourage ownership in Massachusetts. Warren points out that, given large-scale plantation slavery never developed in New England, chattel slavery might be seen as a vanity project of the very wealthy. One such example is merchant John Pynchon of Springfield, who benefited from the increase in West Indies trade. What the wealthy had, the less wealthy also wanted. The population of enslaved Africans in Massachusetts, including the Valley, grew over the next century as increasing numbers of the more well-to-do colonists owned at least one.
From Pynchon’s Springfield, colonial plantations spread up the Connecticut River. Northampton was founded in 1654, Hadley in 1659, Deerfield in 1670, and Northfield in 1673. Over the next century, slave ownership also spread up the Valley. In the Provincial Enumeration in 1754-55 of Negro Slaves Sixteen Years or Older, a total of 75 were counted in Hampshire County, which then meant all of Western Massachusetts. Scholar Robert Romer notes, however, that numbers for more than half the Valley settlements were lost or never tabulated. Those missing numbers included Deerfield, where he discovered at least 25 slaves were owned. Numbers are not provided for Northampton, either.
Romer was surprised to discover that a significant number of ministers in the Valley owned slaves. In account books, estate papers and Probate court records, he found evidence to list slave-holding religious leaders in at least seventeen communities. Jonathan Edwards, who led Northampton’s Congregation from 1729 to 1750, is among them. In 1731, Edwards went to Newport, Rhode Island to purchase a “Negro Girle named Venus” for eighty pounds. He bought other slaves during his Northampton ministry, owned a slave called Rose when he left for Stockbridge in 1750, and “a negro boy named Titus,” at the time of his death in 1758.
Edwards’ enslavement of others has been most thoroughly researched by scholar Kenneth Minkema. In his paper on “… Edwards’ Defense of Slavery,” he notes “that within Northampton, a small but growing number of elites typically owned one or two slaves— a female for domestic chores and a male for fieldwork—and Edwards was willing to commit a substantial part of his annual salary to establish his membership in this select group.” He mentions that these elite included prominent merchants, politicians, and militia officers, among them John Stoddard, Maj. Ebenezer Pomeroy, and Col. Timothy Wright.
Puritans in Massachusetts regarded themselves as God’s Elect, and so they had no difficulty with slavery, which had the sanction of the Law of the God of Israel. The Calvinist doctrine of predestination easily supported Puritans in a position that Blacks were a people cursed and condemned by God to serve whites. Jonathan Edwards subscribed to this thinking and defended another minister in the Valley criticized by his congregation for, among other things, owning slaves.
Lorenzo Greene’s The Negro in Colonial New England is cited by Douglass Harper for his early (1942) establishment of factual information, including population. The number of blacks in Massachusetts increased ten-fold between 1676 and 1720, from 200 to 2,000. The population then doubled from 2,600 in 1735 to 5,235 in 1764, by which time blacks, not all of whom were slaves, had become approximately 2.2 percent of the total Massachusetts population. They were generally concentrated in the industrial and coastal towns, with Boston in 1752 having the highest concentration at 10%.
The black population of colonial Western Massachusetts was slower to grow than the eastern part of the colony, as well as being in smaller numbers, but also shows a pattern of increase. According to William Piersen’s Black Yankees tabulation the Western counties’ black population grew from 74 [under]counted in 1754 to 5,983 in 1790. This also reflected an increasing percentage of Massachusetts total black population, from 3 to 12%. Though Northampton numbers weren’t included in the 1755 Enumeration of Slaves, the fact of Jonathan Edwards’ being an enslaver suggests there were others owned and uncounted in the plantation at that time. That the numbers increased in subsequent years is suggested by the findings reported by Trumbull.
These are the few sentences in James Trumbull’s History of Northampton acknowledging that the settlement practiced slavery. In his summary of the 1764 King’s Census results, Trumbull writes:
“In addition [to a population of 1,274 whites] there were ten negroes, five males and five females. Apparently they were nearly all slaves, and were distributed in the following families: Mrs. Prudence Stoddard, widow of Col. John, one female; Lieut. Caleb Strong, one male; Joseph and Jonathan Clapp, one each; Joseph Hunt one of each sex. There was one negro at Moses Kingsley’s, not a slave, another at Zadoc Danks, and Bathsheba Hull was then living near South Street bridge. [Author’s note: the arithmetic is dodgy.] “
Little is known about these few identified Africans. A Trumbull footnote later in his history adds, “…before the Revolution, Midah, a negro employed in the tannery of Caleb Strong Sr., was the principal fiddler in town.” In another passage, he describes Bathsheba Hull in 1765 as “a negress, widow of Amos Hull, and occupied a small house on the Island near South Street Bridge, formed by the Mill Trench.” The town claimed the land had been illegally squatted on and wished to evict her. Two years later, they had apparently bought her out and paid to move her to lower Pleasant Street.
Bathsheba Hull is also mentioned in Mr. and Mrs. Prince, the acclaimed history by Gretchen Holbrook Gerzina. Abijah Prince, recently freed in Northfield, lived in Northampton from 1752 to 1754. He stayed with Amos and Bathsheba Hull, suggesting a circle of acquaintance among free Black former slaves in the Valley, some of whom owned property. Gerzina writes that Bijah (short for Abijah) worked for the hatter and church deacon Ebenezer Hunt, and that Amos, a freed former slave, was the servant of Hunt’s brother. As the Hulls were starting a family, they rented a farm. There is a gap in the story, with no explanation of how or when Amos died, how the widow wound up living by the South Street bridge, or where and how many children they had. Later, Gerzina finds Bathsheba and her children living in Stockbridge.
Gerzina observes that while slavery in the North may have been less violent than that of the South, it was still slavery. Children and parents were sold away from each other, and freedom only came when a white person granted it. They, like their southern counterparts, ran away in large numbers, worked in the fields and houses of their owners, and were hired out without receiving any pay. Unlike southern slaves, however, they traveled between towns easily, could marry, learned to read, and had to attend church. In eighteenth century rural New England, the enslaved carried arms for hunting, military service, and protection. Still, Gerzina found, there were also suicides.
People had their names taken from them when they were enslaved. Africans were given names by their new owners, often biblical or mythological, and rarely with a surname except as an indication of race or status. This stripping of personal identity continued beyond their deaths, as historians have had to struggle to find the record of their lives. Abijah Prince, for example, is listed on the Northfield poll tax as Abijah Negro, and appears in records after his manumission as Abijah Freeman.
In addition to those in Northampton identified by Trumbull and Minkema, Robert Romer’s research added another seven names, for a total of at least fifteen slaveholders in Northampton. We do not know yet how many more there were or who the enslaved people were.
We also don’t know how those enslaved made the transition to being free women and men. In 1780, when the Massachusetts Constitution went into effect, slavery was still legal. Over the next several years, freedom suits brought to court by those enslaved established that slavery wasn’t compatible with the new Constitution, which declared “all men are born free and equal.” By the first federal census, in 1790, no one in Massachusetts was willing to go on record as still owning slaves. One assumes those enslaved had either been sold out of state, been freed to set up their own lives, or continued as hired-for-wages workers. This change in status is a whole other story. The data for Northampton is missing, however, so we can only guess that there were some, as the census put it, “all other [than white] free people” living here.
Historic Northampton is presently engaged in a long-term research project to identify and learn more about the lives of enslaved people in Northampton. Emma Winter Zeig is leading the project and, with interns, is systematically combing through all public and related private records. About thirty-five enslaved people have been identified so far. Searches of the slaveholding family papers for any details are underway. Emma reports that the research is most complicated by the lack of records. The researchers have been able to establish some surnames, identify familial relationships, and find more documentation of networks of relationship between people of color in Northampton. Some of their favorite finds have been the few sources that shed light on the daily lives of enslaved people: what Amos Hull was asked at catechism, for example. The project’s long-term goal is to link with similar information from other towns to create a region-wide picture.
Two stones, side by side, mark the graves of two black women in the Bridge Street Cemetery in Northampton. They read:
“SYLVA CHURCH b. 1756 d. April 12, 1822 Sacred to the Memory of Sylva Church A Coloured woman, who for many years lived in the family of N. Storrs, died 12 April, 1822, Very few possessed more good qualities than she did. She was for many years a member of Mr. Williams’ Church, and we trust lived agreeable to her profession, and is now inheriting the promises.”
“SARAH GRAY b. 1808 d. 1831 In memory of SARAH GRAY a coloured woman, By those who experienced her faithful services She died Oct. 7, 1831 Aged 23”
Northampton author Susan Stinson’s novel Spider in a Tree is a fictional account of Reverend Jonathan Edwards that includes his slaveholding. Susan has given tours of the Bridge Street Cemetery. On one such occasion she was approached by Frank Carbin, whose sister had come on a tour one year; he pointed Susan to a reference confirming that Sylva Church was enslaved in the household of Jonathan Edwards’ daughter and later his granddaughter.
“There was a slave woman, ‘Lil’, as she was called, or Sylvia Church (her true name), who was too important a character in the household of Major Dwight and his widow, not to deserve at least a brief remembrance. She was bought on Long Island, when but 9 years old, and lived to advanced years, dying April 12, 1822, being, as is supposed, at that time, 66 years old. The last 15 years of her life she spent with Mrs. Storrs, dau. of Major Dwight. She was pious, faithful, industrious and economical. She had ‘all the pride of the family’ in her heart. She ruled the children of the house and indeed the whole street. She was in fact a strong-minded woman and a ‘character’ in the most striking sense of the word. Says John Tappan, Esq., “In addition to the fascination of the parlor, there was the faithful African in the kitchen, by the name of ‘Lilly,’ who ever welcomed me and was not one whit behind her mistress in fascinating my young heart.” At more than 40 years, she was hopefully made a member of Christ’s kingdom, when she first learned to read her Bible, which before had no attractions to her…” From The History of the Descendants of John Dwight of Dedham, Mass., by Benjamin W. Dwight, pages 130-140, John Trow & Sons, Printers & Bookbinders, NY, 1874.
Bob Drinkwater would point out that these gravestones are white markers for people of color. In his book about searching for the gravestones of African Americans in Western Massachusetts, In Memory of Susan Freedom, he states that many, perhaps most, early Massachusetts residents of color now lie forgotten in unmarked graves on the periphery of common burying grounds and municipal cemeteries. Where graves were marked at all, it was often with field stones.
Drinkwater believes these two black women’s graves were once segregated at the edge of the cemetery, before white burials expanded around them. Because the stones stand side by side, he posits that the younger Sarah Gray served in the same household. Elsewhere in the Bridge Street cemetery, he noted at least a few other gravestones for African Americans, buried several decades later, among their white neighbors. One is for Samuel Blakeman, who died in 1879. Another is for Mattie, “a Negro,” who died in 1862. Far fewer graves are evident than the number of people we are coming to know once lived and worked in Northampton. Just as the lives of Northampton’s early black residents were often left out of the written record, so too were their deaths.
Brown, Richard D., and Jack Tager. Massachusetts: A Concise History. Revised edition. University of Massachusetts Press, Amherst, 2000.
Trumbull, James Russell. History of Northampton, Massachusetts: From Its Settlement in 1654. Volume I, 1898; Volume II, 1902. Northampton.
Romer, Robert H. Slavery in the Connecticut Valley of Massachusetts. Levellers Press, Florence, MA, 2009.
Warren, Wendy. New England Bound: Slavery and Colonization in Early America. W.W. Norton & Company, New York and London, 2016.
Hardesty, Jared Ross. Black Lives, Native Lands, White Worlds: A History of Slavery in New England. University of Massachusetts Press, Amherst and Boston, MA, 2019.
Carvalho, Joseph, III. Black Families in Hampden County, Massachusetts, 1650-1865. Revised second edition, 2010. Accessed on academia.edu Oct. 26, 2020.
Minkema, Kenneth P. “Jonathan Edwards’s Defense of Slavery.” Massachusetts Historical Review, Vol. 4, Race & Slavery (2002), pp. 23-59. Massachusetts Historical Society. Courtesy of Historic Northampton.
Minkema, Kenneth P. “Jonathan Edwards on Slavery and the Slave Trade.” The William and Mary Quarterly, Vol. 54, No. 4 (Oct. 1997), pp. 823-834. Omohundro Institute of Early American History. Courtesy of Historic Northampton.
Harper, Douglas. “Slavery in Massachusetts.” Slavery in the North. Retrieved Aug. 6, 2020. http://slavenorth.com/massachusetts.htm.
Greene, Lorenzo J. The Negro in Colonial New England, 1620-1776. Atheneum, New York, 1968 edition of the 1942 original.
Piersen, William D. Black Yankees: The Development of an Afro-American Subculture in Eighteenth-Century New England. University of Massachusetts Press, Amherst, 1988.
Gerzina, Gretchen Holbrook. Mr. and Mrs. Prince: How an Extraordinary Eighteenth-Century Family Moved Out of Slavery and Into Legend. HarperCollins Publishers, New York, NY, 2008. My favorite history book, not only local and very readable but illuminating the lives of enslaved blacks and how history is written.
Sweet, John Wood. Bodies Politic: Negotiating Race in the American North, 1730-1830. University of Pennsylvania Press, Philadelphia, PA, 2003.
Sharpe, Elizabeth, and Emma Winter Zeig. Historic Northampton. Email correspondence, Aug.-Sep. 2020.
Bureau of the Census. Bicentennial Edition: Historical Statistics of the United States, Colonial Times to 1970. Part 2, Chapter Z, Series Z 1-19, Estimated Population of American Colonies: 1610-1780. Colonial and Pre-Federal Statistics.
Stinson, Susan. Spider in a Tree: A Novel of the First Great Awakening. Small Beer Press, Easthampton, MA, 2013.
Stinson, Susan. Email correspondence, Sep. 2020.
Drinkwater, Bob. In Memory of Susan Freedom: Searching for Gravestones of African Americans in Western Massachusetts. Levellers Press, Amherst, MA, 2020.
The First Magnetometer
When you want to figure out the strength or direction of a magnetic field, a magnetometer is your tool of choice. Magnetometers range from the simple (you can easily make one in your kitchen) to the complex, and the more advanced devices are regular passengers on space exploration missions. The first magnetometer was created by Carl Friedrich Gauss, who is often called "the Prince of Mathematics," and who published a paper in 1833 describing a new device he called a "magnometer." His design is very similar to the simple magnetometer described below.
Because they are very sensitive, magnetometers can be used to find archaeological sites, iron deposits, shipwrecks and other things that have a magnetic signature. A network of magnetometers around the earth constantly monitors the minute effects of the solar wind on the earth's magnetic field and publishes the data as the K-index (see Resources). There are two basic types of magnetometer: scalar magnetometers measure the strength of a magnetic field, while vector magnetometers measure its direction as well.
Creating Your Own
There is a simple vector magnetometer that you can make yourself. A bar magnet, hanging from a thread, will always point north; by marking one end of it, you can spot small variations as the magnetic field changes. By adding a mirror and light, you can take fairly accurate measurements and detect the effects of magnetic storms (for full instructions, see the Suntrek link in Resources).
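To get a rough sense of the optics involved: the reflected beam turns through twice the mirror's rotation, so a spot displacement x on a wall at distance L implies a magnet deflection of about atan(x/L)/2. The helper below is an illustrative sketch; the function name and example numbers are ours, not taken from the Suntrek instructions.

```python
import math

def deflection_degrees(spot_shift_m, screen_distance_m):
    """Magnet deflection angle inferred from the light-spot shift on a distant wall."""
    # The mirror turns with the magnet, and the reflected beam rotates
    # through twice the mirror's angle, so halve the measured beam angle.
    beam_angle = math.atan2(spot_shift_m, screen_distance_m)
    return math.degrees(beam_angle / 2)

# A 5 mm spot shift on a wall 2 m away corresponds to roughly 0.07 degrees
# of magnet rotation, far finer than you could read by eye on the magnet itself.
angle = deflection_degrees(0.005, 2.0)
```

This is why the mirror-and-light arrangement matters: it amplifies tiny rotations into easily measurable spot movements.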
The Hall Effect
More complicated magnetometers, such as those used on spacecraft, use a variety of methods to detect magnetic field strength and direction. The most common are solid-state Hall Effect sensors. These sensors exploit the fact that an electrical current is affected by a magnetic field that does not run parallel to the direction of the current. When such a field is present, the electrons (or their positive counterparts, electron holes, or both) in the current gather on one side of the conductive material; when it is absent, they travel in an essentially straight line. The way a magnetic field deflects the electrons or holes can be measured and used to determine the direction of the field. Hall Effect sensors also produce a voltage that is proportional to the strength of the magnetic field, making them both vector and scalar magnetometers.
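The proportional voltage mentioned above can be estimated with the standard Hall relation. The snippet below is an illustrative back-of-the-envelope calculation, not taken from any sensor datasheet; the copper figures are textbook approximations.

```python
# Hall voltage across a thin conducting strip: V_H = I * B / (n * q * t),
# where I is the current, B the perpendicular field, n the charge-carrier
# density, q the elementary charge, and t the strip thickness.
ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def hall_voltage(current_a, field_t, carrier_density_m3, thickness_m):
    return current_a * field_t / (carrier_density_m3 * ELEMENTARY_CHARGE * thickness_m)

# Copper strip, 0.1 mm thick, carrying 1 A in Earth's ~50 microtesla field.
# Copper's carrier density is roughly 8.5e28 electrons per cubic metre.
v = hall_voltage(1.0, 50e-6, 8.5e28, 1e-4)  # only tens of picovolts
```

The tiny result for a metal strip hints at why practical Hall sensors use semiconductors, whose much lower carrier density yields far larger voltages.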
Magnetometers in Daily Life
We often encounter magnetometers in our daily lives, though we might not realize it, in the form of metal detectors. Hand-held metal detectors used by treasure hunters and hobbyists use the Hall Effect to locate metallic objects. Using a phenomenon known as phase shifting, detectors can differentiate between metals by measuring the resistance or inductance (conductivity) of the object.
Quoting is the easiest way to use sources, but it can lead to plagiarism if you’re not careful. When you quote a source,
- Always place quotation marks around the notes you copy and paste; the computer does not know you are copying a direct quote unless you tell it.
- Use a quote in your final draft when the author’s words are the best way to express the information. Some readers prefer paraphrases and summaries to quotations.
- Be wary of long quotes. Too many long quotes will make your paper more someone else’s words than your own, and that’s plagiarism.
- Never quote material that includes words you do not understand. If you do not know what a quotation means, you are plagiarizing an idea you don’t comprehend.
- Use the citation style recommended by your professor or instructor for your course’s field of study. Sources are usually identified either by parenthetical citations (information in parentheses) or footnotes. A list of references, works cited, or a bibliography may also be included in the project or paper. |
Bone marrow stromal cells
Bone marrow stromal stem cells (skeletal stem cells)
Cord blood stem cells
Embryonic germ cells
Embryonic stem cells
Embryonic stem cell line
Hematopoietic stem cell
Human embryonic stem cell (hESC)
Induced pluripotent stem cell (iPSC)
Inner cell mass (ICM)
Mesenchymal stem cells
Neural stem cell
Somatic cell nuclear transfer (SCNT)
Somatic (adult) stem cell
Umbilical cord blood stem cells
Adult stem cell - See somatic stem cell.
Blastocoel - The fluid-filled cavity inside the blastocyst, an early, preimplantation stage of the developing embryo.
Blastocyst - A preimplantation embryo consisting of a sphere made up of an outer layer of cells (the trophoblast), a fluid-filled cavity (the blastocoel), and a cluster of cells on the interior (the inner cell mass).
Bone marrow stromal stem cells (skeletal stem cells) - A multipotent subset of bone marrow stromal cells able to form bone, cartilage, stromal cells that support blood formation, fat, and fibrous tissue.
Cell-based therapies - Treatment in which stem cells are induced to differentiate into the specific cell type required to repair damaged or destroyed cells or tissues.
Cell culture - Growth of cells in vitro in an artificial medium for research.
Chromosome - A structure consisting of DNA and regulatory proteins found in the nucleus of the cell. The DNA in the nucleus is usually divided up among several chromosomes. The number of chromosomes in the nucleus varies depending on the species of the organism. Humans have 46 chromosomes.
Clone - (v) To generate identical copies of a region of a DNA molecule or to generate genetically identical copies of a cell, or organism; (n) The identical molecule, cell, or organism that results from the cloning process.
- In reference to DNA: To clone a gene, one finds the region where the gene resides on the DNA and copies that section of the DNA using laboratory techniques.
- In reference to cells grown in a tissue culture dish: a clone is a line of cells that is genetically identical to the originating cell. This cloned line is produced by cell division (mitosis) of the original cell.
- In reference to organisms: Many natural clones are produced by plants and (mostly invertebrate) animals. The term clone may also be used to refer to an animal produced by somatic cell nuclear transfer (SCNT) or parthenogenesis.
Cloning - See Clone.
Cord blood stem cells - See Umbilical cord blood stem cells.
Culture medium - The liquid that covers cells in a culture dish and contains nutrients to nourish and support the cells. Culture medium may also include growth factors added to produce desired changes in the cells.
Differentiation - The process whereby an unspecialized embryonic cell acquires the features of a specialized cell such as a heart, liver, or muscle cell. Differentiation is controlled by the interaction of a cell's genes with the physical and chemical conditions outside the cell, usually through signaling pathways involving proteins embedded in the cell surface.
DNA - Deoxyribonucleic acid, a chemical found primarily in the nucleus of cells. DNA carries the instructions or blueprint for making all the structures
and materials the body needs to function. DNA consists of both genes and non-gene DNA in between the genes.
Embryonic germ cells - Pluripotent stem cells that are derived from early germ cells (those that would become sperm and eggs). Embryonic germ cells are thought to have properties similar to embryonic stem cells.
Embryonic stem cells - Primitive (undifferentiated) cells that are derived from preimplantation-stage embryos, are capable of dividing without differentiating for a prolonged period in culture, and are known to develop into cells and tissues of the three primary germ layers.
Feeder layer - Cells used in co-culture to maintain pluripotent stem cells. For human embryonic stem cell culture, typical feeder layers include mouse embryonic fibroblasts (MEFs) or human embryonic fibroblasts that have been treated to prevent them from dividing.
Fertilization - The joining of the male gamete (sperm) and the female gamete (egg).
Gamete - An egg (in the female) or sperm (in the male) cell. See also Somatic cell.
Germ layers - After the blastocyst stage of embryonic development, the inner cell mass of the blastocyst goes through gastrulation, a period when the inner cell mass becomes organized into three distinct cell layers, called germ layers. The three layers are the ectoderm, the mesoderm, and the endoderm.
Human embryonic stem cell (hESC) - A type of pluripotent stem cell derived from early stage human embryos, up to and including the blastocyst stage. hESCs are capable of dividing without differentiating for a prolonged period in culture and are known to develop into cells and tissues of the three primary germ layers.
Long-term self-renewal - The ability of stem cells to replicate themselves by dividing into the same non-specialized cell type over long periods (many months to years) depending on the specific type of stem cell.
Meiosis - The type of cell division a diploid germ cell undergoes to produce gametes (sperm or eggs) that will carry half the normal chromosome number. This is to ensure that when fertilization occurs, the fertilized egg will carry the normal number of chromosomes rather than causing aneuploidy (an abnormal number of chromosomes).
Mesenchymal stem cells - A term that is currently used to define non-blood adult stem cells from a variety of tissues, although it is not clear that mesenchymal stem cells from different tissues are the same.
Microenvironment - The molecules and compounds such as nutrients and growth factors in the fluid surrounding a cell in an organism or in the laboratory, which play an important role in determining the characteristics of the cell.
Mitosis - The type of cell division that allows a population of cells to increase its numbers or to maintain its numbers. The number of chromosomes in each daughter cell remains the same in this type of cell division.
Neurons - Nerve cells, the principal functional units of the nervous system. A neuron consists of a cell body and its processes - an axon and one or more dendrites. Neurons transmit information to other neurons or cells by releasing neurotransmitters at synapses.
Parthenogenesis - The artificial activation of an egg in the absence of a sperm; the egg begins to divide as if it has been fertilized.
Passage - In cell culture, the process in which cells are disassociated, washed, and seeded into new culture vessels after a round of cell growth and proliferation. The number of passages a line of cultured cells has gone through is an indication of its age and expected stability.
Pluripotent - Able to give rise to cells of all three embryonic germ layers. Scientists demonstrate pluripotency by providing evidence of stable developmental potential, even after prolonged culture, to form derivatives of all three embryonic germ layers from the progeny of a single cell and to generate a teratoma after injection into an immunosuppressed mouse.
Polar body - A polar body is a structure produced when an early egg cell, or oogonium, undergoes meiosis. In the first meiosis, the oogonium divides its chromosomes evenly between the two cells but divides its cytoplasm unequally. One cell retains most of the cytoplasm, while the other gets almost none, leaving it very small. This smaller cell is called the first polar body. The first polar body usually degenerates. The ovum, or larger cell, then divides again, producing a second polar body with half the amount of chromosomes but almost no cytoplasm. The second polar body splits off and remains adjacent to the large cell, or oocyte, until it (the second polar body) degenerates. Only one large functional oocyte, or egg, is produced at the end of meiosis.
Preimplantation - With regard to an embryo, preimplantation means that the embryo has not yet implanted in the wall of the uterus. Human embryonic stem cells are derived from preimplantation-stage embryos fertilized outside a woman's body (in vitro).
Regenerative medicine - A field of medicine devoted to treatments in which stem cells are induced to differentiate into the specific cell type required to repair damaged or destroyed cell populations or tissues. (See also cell-based therapies).
Reproductive cloning - The process of using somatic cell nuclear transfer (SCNT) to produce a normal, full-grown organism (e.g., animal) genetically identical to the organism (animal) that donated the somatic cell nucleus. In mammals, this would require implanting the resulting embryo in a uterus, where it would undergo normal development to become a live independent being. The first mammal to be created by reproductive cloning was Dolly the sheep, born at the Roslin Institute in Scotland in 1996. See also Somatic cell nuclear transfer (SCNT).
Somatic cell - Any body cell other than gametes (egg or sperm); sometimes referred to as "adult" cells. See also Gamete.
Somatic cell nuclear transfer (SCNT) - A technique that combines an enucleated egg and the nucleus of a somatic cell to make an embryo. SCNT can be used for therapeutic or reproductive purposes, but the initial stage that combines an enucleated egg and a somatic cell nucleus is the same. See also therapeutic cloning and reproductive cloning.
Somatic (adult) stem cell - A relatively rare undifferentiated cell found in many organs and differentiated tissues with a limited capacity for both self-renewal (in the laboratory) and differentiation. Such cells vary in their differentiation capacity, but it is usually limited to cell types in the organ of origin. This is an active area of investigation.
Teratoma - A multi-layered benign tumor that grows from pluripotent cells injected into mice with a dysfunctional immune system. Scientists test whether they have established a human embryonic stem cell (hESC) line by injecting putative stem cells into such mice and verifying that the resulting teratomas contain cells derived from all three embryonic germ layers.
Therapeutic cloning - The process of using somatic cell nuclear transfer (SCNT) to produce cells that exactly match a patient. By combining a patient's somatic cell nucleus and an enucleated egg, a scientist may harvest embryonic stem cells from the resulting embryo that can be used to generate tissues that match a patient's body. This means the tissues created are unlikely to be rejected by the patient's immune system. See also Somatic cell nuclear transfer (SCNT).
Totipotent - The state of a cell that is capable of giving rise to all types of differentiated cells found in an organism, as well as the supporting extra-embryonic structures of the placenta. A single totipotent cell could, by division in utero, reproduce the whole organism. (See also Pluripotent and Multipotent).
Transdifferentiation - The process by which stem cells from one tissue differentiate into cells of another tissue.
Trophoblast - The outer cell layer of the blastocyst. It is responsible for implantation and develops into the extraembryonic tissues, including the placenta, and controls the exchange of oxygen and metabolites between mother and embryo.
Umbilical cord blood stem cells - Stem cells collected from the umbilical cord at birth that can produce all of the blood cells in the body. Cord blood is currently used to treat patients who have undergone chemotherapy to destroy their bone marrow due to cancer or other blood-related disorders. |
Votes for Women: The Long Struggle
Date: 21st March 2016
Speaker: Muriel Pilkington
The background to the fight for women’s suffrage includes the unexpected information that women in places such as France, Sweden, Corsica and Sierra Leone had had the right to vote during the 18th century. In 1756, in colonial Massachusetts, Lydia Taft cast her late husband’s vote. However, these rights were subsequently abolished, and we learnt that the real origins of the movement for votes for women lay in the French Revolution.
Mary Wollstonecraft, the well-known English writer, philosopher and advocate of women’s rights, observed the French Revolution and published her important work A Vindication of the Rights of Women in 1792. Her book, with its advocacy of women’s equality, was rediscovered in the 1960s during the feminist movement in the UK.
In 1881, women of the Isle of Man became the first British women to vote. Other examples include New Zealand (1893) and Australia (1902), but Great Britain lagged behind these developments.
The Representation of the People Act (1832) extended the vote for men, but since it referred specifically to ‘male persons’ those women who had the right to vote prior to 1832, due to property ownership, were disenfranchised. It has been argued that it was the inclusion of the word ‘male’, providing the first explicit statutory bar to women voting, which provided a source of resentment from which, in time, the women's suffrage movement grew.
In the first half of the nineteenth century women had other inequalities to fight against: they could not hold public office; married women had no property rights; they could not go to university or enter the professions (apart from teaching and nursing); married women’s earnings belonged to their husbands.
Significant events later in the century included Elizabeth Garrett Anderson qualifying as the first female doctor; the founding of a school for girls by Frances Buss, a pioneer of women’s education; the Matrimonial Causes Act 1857, which allowed wives to divorce their husbands; and the repeal of the Contagious Diseases Act 1864 following campaigning by Josephine Butler.
As campaigning continued apace a number of important groups emerged. In Oxfordshire the Central & South of England Society established groups in Oxford, Banbury and Woodstock. The National Union of Women’s Suffrage Societies (NUWSS) was formed in 1896. Members were known as suffragists and brought pressure on the government through lawful and peaceful methods. In contrast, the Women’s Social and Political Union (WSPU), founded in 1903, took a more militant stance under the control of Emmeline Pankhurst and her daughters. These were the suffragettes whose motto was ‘Deeds not Words’.
By 1900 many working class women had joined the fight for women’s suffrage, and 7th February 1907 was hugely significant as 3000 women gathered in London to march through the city. Organised by the NUWSS, this was the first demonstration of women from all classes. Keir Hardie, one of the founders of the Independent Labour Party, was among the leaders of the march. The Mud March, as it became known due to the wet conditions, helped make large suffrage processions a key feature of the British suffrage movement and put suffragists in the public eye. It gave the movement an aura of respectability that the militant tactics and extreme protests of the suffragettes had failed to achieve.
During 1916-1917, the House of Commons Speaker, James William Lowther, chaired a conference on electoral reform which recommended limited women's suffrage. The prime minister, Lloyd George, favoured votes for women, acknowledging their role in taking on men’s jobs during WW1. The Representation of the People Act of 1918 gave the vote to women over 30 years of age who met a property qualification. Only 58% of the adult male population had been eligible to vote before 1918; this act abolished property and other restrictions for men, and extended the vote to all men over the age of 21 years. Among the restrictions lifted was the requirement that only men who had been resident in the country for 12 months prior to a general election were entitled to vote, an element which, it was quickly realised, effectively disenfranchised troops who had served overseas during the war. Despite these reforms, inequalities between men and women still existed. It was not until the Equal Franchise Act of 1928 that women over 21 years were able to vote and women finally achieved the same voting rights as men.
Decomposition is the natural process of dead animal or plant tissue being rotted or broken down. This process is carried out by invertebrates, fungi and bacteria. The result of decomposition is that the building blocks required for life can be recycled.
Left: The body of a dead rabbit after several weeks of decomposition. Most of the flesh has been eaten by beetles, beetle larvae, fly maggots, carnivorous slugs and bacteria. The outline of the skeleton is starting to show.
All living organisms on earth will eventually die. Many plants naturally complete their life cycle and die within a year, but even longer-lived plants such as trees have a limited natural life span. Nearly all animals in nature will succumb to disease, being killed or being eaten; it is very rare for any to reach old age. If every organism that died did not decay and rot away, the earth's surface would soon be covered in a deep layer of dead bodies that would remain intact indefinitely. A similar situation would arise if animal and plant wastes never rotted away. Fortunately this does not happen, because dead organisms and animal wastes become food or a habitat for other organisms. Some dead animals will be eaten by scavenging animals such as foxes or crows. Those which are not eaten by larger animals are quickly decomposed, or broken down into their constituent chemicals, by a host of creatures including beetles and their larvae, flies, maggots and worms, as well as bacteria, moulds and fungi. Collectively these are known as decomposers. The lives of many of these organisms depend on the death of others.
During the process of decomposition, the decomposers provide food for themselves by extracting chemicals from the dead bodies or organic wastes, using these to produce energy. The decomposers then produce waste of their own. In turn, this will also decompose, eventually returning nutrients to the soil. These nutrients can then be taken up by the roots of living plants, enabling them to grow and develop, so that organic material is naturally recycled. Virtually nothing goes to waste in nature. When an animal dies and decomposes, usually only the bones remain, but even these will decompose over a much longer period of time.
Recycling on the Forest Floor
Left: "Fibres" of hyphae; the normally unseen component of
fungi which can spread through huge areas of dead leaf litter under the surface of the
forest floor. The hyphae extract substances from the dead material which are
essential to the fungi's own survival. Collectively the hyphae bundle together in
"matted carpets or bundles" known as mycelium. It is only then that the
hyphae become visible to the unaided eye when dead leaf litter is disturbed.
Many plants that die, along with the leaves that fall from
trees in the autumn, will rot down and become part of the forest floor. They are
decomposed by fungi, bacteria and many different species of invertebrate. Fungi
unseen from the surface can spread through the entire forest floor, living on the
dead leaves and twigs that have fallen from the trees above. They can extract many
of the useful substances for their own benefit, helping to rot down the dead plant
material in the process. Many of the chemicals which remain after decomposition get
dissolved in the soil and become nutrients for living plants including newly germinated
seedlings. These nutrients can be taken up by the plant's roots in the soil and are
used to help make new leaves, twigs, branches, roots, flowers and seeds.
Left: Decomposition is an important part of all life cycles. In a
forest, dead leaves that fall from deciduous trees in the autumn form a thick carpet on
the forest floor. Decomposition reduces these leaves first into a compost and then
into nutrients which return to the soil and enable new plant growth to take place.
Decomposition is an important part of all ecosystems.
It is not just on a forest floor that decomposition is important.
Death and decomposition are an essential part of all life cycles on earth. To
enable successful birth and growth of young plants and animals, older specimens must die
and decompose. This limits the competition for resources and provides a fresh source
of essential nutrients for new generations of life.
It’s funny, but not all of the scientists we talk about on this website are actually famous. Some of them, like Aristarchus, deserve to be… but they’re not.
If you’re looking for an unsung hero of science, you could do worse than Aristarchus of Samos, or Aristarchus the Mathematician as some people called him. Today, a better name might be Aristarchus, who discovered that the earth orbits the sun.
Aristarchus was born in about the year 310 BC, probably on the Greek island of Samos, the same island that Pythagoras had been born on 260 years earlier. We know very little about Aristarchus’s life, but we know enough to be astounded by his science.
We know that Aristarchus lived at about the same time as two of our other scientific heroes, Archimedes and Eratosthenes, and that he was 20 to 30 years older than them. We know that his greatest work has been lost in the mists of time. We know a little about this work because Archimedes mentions it in a work called The Sand Reckoner, of which more soon.
Lifetimes of Selected Ancient Greek Scientists and Philosophers
Copernicus says that Earth orbits the Sun
To appreciate what Aristarchus did over 2000 years ago, it’s worthwhile thinking about one of the greats of astronomy, Nicolaus Copernicus.
In 1543 Nicolaus Copernicus published his famous book: On the Revolutions of the Heavenly Spheres. He told us that Earth, and all the other planets, orbit the sun. In other words, he said that the Solar System is heliocentric.
Until Copernicus published his work, people thought we lived in a geocentric Solar System – i.e. that Earth was at the center of everything. They believed that the moon, the planets, the sun and the stars orbited the earth.
The geocentric idea was taught by the Catholic Church, and Copernicus was a member of that church. Copernicus’s book was suppressed by the church, but gradually, his theory came to be accepted.
In fact, however, Copernicus was rather late coming to the heliocentric view.
Aristarchus beat him by 18 centuries.
Archimedes tells us about Aristarchus’s Book
Sadly, the book Aristarchus wrote describing his heliocentric Solar System has been lost – the fate of many great Ancient Greek works. Fortunately, we know a little about it, because it is mentioned by other Greeks, including Archimedes, who mentions it in a letter he addressed to a King named Gelon. This letter was ‘The Sand Reckoner.’ Archimedes wrote:
“You know the universe is the name astronomers call the sphere whose radius is the straight line from the center of the earth to the center of the sun. But Aristarchus has written a book in which he says that the universe is many times bigger than we thought. He says that the stars and the sun don’t move, and that the earth revolves about the sun and that the path of the orbit is circular.”
Aristarchus must have used the concept of parallax to show that the stars are a very large distance from Earth. In doing so, he expanded the size of the universe enormously.
It would be marvelous if we could learn the details of Aristarchus’s observations, calculations, arguments, could read his notes and see his diagrams; but, unless a copy of his ancient book can be discovered in some forgotten, dusty corner of an ancient library, that is a pleasure we shall never have.
Aristarchus also believed that, in addition to orbiting the sun, Earth was spinning on its own axis, taking one day to complete one revolution.
It’s sometimes said that there was pressure for Aristarchus to be put on trial for daring to say that the earth is not at the center of the universe. It turns out this was a mistranslation of a work by the Greek historian Plutarch.
There was no persecution of Aristarchus. His idea just didn’t find many fans. Most Ancient Greeks rejected his work, and continued to believe in a geocentric Solar System.
Thankfully, Archimedes was happy to use Aristarchus’s model of the universe in The Sand Reckoner, to discuss calculations using larger numbers than the Greeks had used before.
Only one of Aristarchus’s works has survived, in which he tried to calculate the sizes of the moon and sun and tried to figure out how far they were from Earth. He already knew the sun was much larger than Earth by observing Earth’s shadow on the moon during a lunar eclipse, and he also knew that the sun was much farther away from us than the moon.
Although the optical technology of his time didn’t allow Aristarchus to know the finer details of our Solar System, his deductions were absolutely correct based on what he could actually see. What he lacked in technology, he made up for in deductive genius.
What Aristarchus got Right
Twenty-three centuries ago, Aristarchus proposed, with evidence, that the earth and the planets orbit the sun. He further deduced that the stars are much farther away than anyone else had imagined, and hence that the universe is much bigger than previously imagined. These were major advances in human ideas about the universe.
What did Copernicus know about Aristarchus’s Work?
Copernicus actually acknowledged in the draft of his own book that Aristarchus might have said the earth moved around the sun. He removed this acknowledgement before he published his work.
In Copernicus’s defense, he was probably unaware of The Sand Reckoner by Archimedes, because, after its rediscovery in the Renaissance, The Sand Reckoner only seems to have existed as a few hand-written copies until it was finally printed in 1544. By then Copernicus had published his own book and had died. What he knew of Aristarchus probably came from the following very brief words written by Aetius:
“Aristarchus counts the sun among the fixed stars; he has the earth moving around the ecliptic [orbiting the sun] and therefore by its inclinations he wants the sun to be shadowed.”
Galileo knew that Aristarchus was the First Heliocentrist
Galileo Galilei, who most certainly had read The Sand Reckoner, and understood its message, did not acknowledge Copernicus as the discoverer of the heliocentric Solar System. Instead, he described him as the ‘restorer and confirmer’ of the hypothesis.
Clearly, Galileo reserved the word ‘discoverer’ for Aristarchus of Samos.
Aristarchus lived for about 80 years. If we could have built on his insights, rather than forgetting about them for so many centuries, I wonder how much further we might have come in our understanding of the Universe.
Our Cast of Characters
Aristarchus lived in Ancient Greece. He was born in about 310 BC and died in about 230 BC.
Pythagoras lived in Ancient Greece. He was born in about 570 BC and died in about 495 BC.
Archimedes lived in Ancient Greece. He was born in about 287 BC and died in 212 BC.
Nicolaus Copernicus lived in Poland. He was born 19 February 1473 and died 24 May 1543.
Galileo Galilei lived in Italy. He was born 15 February 1564 and died 8 January 1642.
Author of this page: The Doc
Queen Anne & Shingle: 1880 To 1900
Queen Anne houses are brick with wood-shingled or stuccoed upper floors, or wood with surfaces variously sided with clapboards (horizontal wood boards) and an assortment of shingle patterns.
Houses are irregular in plan, asymmetrical in form, and have hip or multi-gabled roofs, or a combination of both. Towers, dormer windows, stained glass windows, bay windows, turrets (small towers at the corners of buildings), encircling porches, and tall chimneys with decorative brick patterns are typical.
Queen Anne houses often have windows of many different designs. Elements of Gothic Revival, Stick Style, Eastlake, and Classic architecture may be included in houses of the style. Color was an important part of the Queen Anne style. First floors may have been painted one color, with a contrasting color used for upper stories, and one or more additional colors to highlight details.
Queen Anne is the style that represents "Victorian" to many people. It is visually the liveliest of the styles of the Victorian era and was popular in Cincinnati and throughout the United States. The style originated in England in 1868. Many of the elements of the style were borrowed from an earlier period of English architecture under the reign of Queen Anne, for whom the style was named.
The Queen Anne style became fashionable in the United States in the late 1870s and reached Cincinnati in about 1880. Among neighborhoods that have large numbers of houses of the style are Northside, Avondale, Walnut Hills, and Clifton. These areas were largely developed in the 1880s and 90s because of the expansion of the electric streetcar system.
Evolving from Queen Anne was a style that exhibits many of its characteristics, the Shingle style. Shingle was a New England style that was not widely popular in Cincinnati. These later houses are also irregular in plan, and many include turrets, towers, dormer windows, bay windows, and porches. Ornament is greatly reduced and roofs are lower pitched, giving houses a horizontal orientation that contrasts with the vertical look of Queen Anne houses. The most important feature of the style, from which it gets its name, is the wood shingles that were used to cover the entire house.
By: Anjali Joshi
Homework always seems to be a strong point of contention for many parents, and there is a diverse range of opinions about the right amount. Some parents feel children should have very limited homework so they can spend time with family and on other extracurricular activities; among South Asian families, however, parents often feel their children don't receive enough. Many schools have adopted a homework policy of "10 minutes per grade," so that a second-grader would have 20 minutes of homework and a sixth-grader would have 60 minutes. This general rule doesn't always fit the diverse needs of children, as the time spent completing homework varies widely from child to child, and many parents and teachers agree that home reading should not be counted in this time.
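For readers who like to see the arithmetic spelled out, the "10 minutes per grade" guideline is simple enough to express as a one-line calculation. The sketch below is purely illustrative; the function name and the input check are my own, not part of any school policy, and the rule itself is only a rough guideline.

```python
def homework_minutes(grade: int) -> int:
    """Suggested nightly homework under the '10 minutes per grade' rule.

    Illustrative only: real policies vary, time actually needed differs
    widely between children, and the rule usually excludes home reading.
    """
    if grade < 1:
        raise ValueError("grade must be 1 or higher")
    return 10 * grade

# A second-grader gets 20 minutes; a sixth-grader gets 60.
print(homework_minutes(2), homework_minutes(6))
```

The examples in the article (20 minutes for grade 2, 60 for grade 6) fall straight out of the multiplication.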
Types of Homework
- Practice: This kind of homework is meant to reinforce learning that occurred in the classroom, practice, and help mastery of skills. This type of homework is critical in developing foundational concepts, such as reading and number sense.
- Preparation: This kind of homework initiates the learning process outside of the classroom, getting the child thinking about concepts before they are presented or discovered in class. This type of homework sparks the inquiry process in a child’s brain thereby making them absorb the knowledge built in the classroom more readily.
- Extension: This kind of homework requires students to apply skills to novel situations thereby extending their learning. Extension activities are particularly important to further solidify understanding of challenging concepts.
- Integration: This kind of homework requires students to integrate concepts and skills learned in multiple subject areas.
How Parents Can Help: a TEAM Approach
- Tools: Make sure the right tools are available (paper, pencils, calculator, dictionary, etc.)
- Environment: Provide a quiet place where homework is completed, free of distractions (such as a television)
- Attitude: Be mindful of your own attitude toward homework. Children are very perceptive. If you are frequently frustrated and annoyed with their homework, they will be too.
- Model: When your children sit down to read, take this as an opportunity to sit down and read too. When they are doing their math homework, get a head start on your taxes or banking! This shows children that homework extends to adult life and has many practical applications.
How to Address Math Homework Challenges
Oftentimes, parents are challenged by math homework and assignments simply because of significant changes in the content of the curriculum, as well as the manner in which certain concepts are taught. The best way to ensure that your children are supported is to engage in frequent communication with the teacher.
- Try to be aware of the methods by which your child is being taught mathematical concepts, and don't teach your child strategies and shortcuts that conflict with the approach the teacher is using, as this can impede the learning process rather than aid it.
- Math is often taught with concrete manipulatives in the classroom. Ask the teacher if your child can sign out these manipulatives to use at home to complete homework.
- Lastly, online resources abound that can be used to help your child practice and solidify math concepts and skills.
As parents, we are partners in education. Our involvement and support with the homework routine can help our children build positive attitudes and achieve success inside and outside the classroom.
©masalamommas and masalamommas.com, 2016-2017. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Links may be used, provided that full and clear credit is given to masalamommas.com and Masalamommas online magazine with appropriate and specific direction to the original content. |
Healthy eating habits for kids
Learn how to help develop and maintain healthy eating habits for kids.

Growing up in my household, there were three important rules: keep your room clean, don't hit your sisters and finish your veggies. Although I despised spinach, squash and zucchini as a child, these foods are now a staple in my refrigerator crisper.
As an adult, I've learned to love eating healthy foods because I was introduced to them early. In the past two decades, studies have found that childhood obesity has more than tripled. Developing healthy eating habits for kids in your home is the first step you can take in preventing childhood obesity, as well as keeping your child from becoming an obese adult.
Obesity is currently one of the leading preventable causes of death for Americans. If you're wondering what the difference between overweight and obese is, calculate body mass index (BMI). By the standard adult cutoffs, a BMI in excess of 25 is considered overweight and a BMI in excess of 30 is considered obese; for children, doctors usually interpret BMI against age- and sex-specific percentile charts rather than these fixed thresholds.
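If you want to do the BMI arithmetic yourself, the standard formula is weight in kilograms divided by the square of height in meters. The short sketch below applies that formula and the adult cutoffs of 25 and 30 mentioned in this article; the function names are my own, and note that children's BMI is normally interpreted against age- and sex-specific percentile charts, not these fixed adult thresholds.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / (height_m ** 2)

def adult_bmi_category(value: float) -> str:
    """Classify a BMI using the adult cutoffs ('in excess of' 25 and 30).

    Caution: for children, clinicians normally use age- and
    sex-specific BMI percentiles instead of these fixed cutoffs.
    """
    if value > 30:
        return "obese"
    if value > 25:
        return "overweight"
    return "not overweight"

# Example: 68 kg at 1.65 m tall gives a BMI of about 25.
print(round(bmi(68, 1.65), 1))  # -> 25.0
```

For instance, 68 kg at 1.65 m works out to 68 / 1.65² ≈ 25, right at the overweight boundary for an adult.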
Whether you're a first-time parent like me or you're looking to turn around the bad food habits of your older kids, these tips will help you establish healthy eating habits for kids of all ages.
- Start early! (but not too early)
Believe it or not, your first step in establishing healthy eating habits for kids starts in infancy. Recent pediatric research suggests that starting infants on solids too soon can make a child more likely to become obese later in life. Many doctors suggest delaying the introduction of solids until infants are at least 6 months of age. Also, introduce vegetables first, and postpone introducing sweet fruits such as apples, pears and bananas until your infant has gotten used to eating vegetables.
- Get in on the act.
If you're attempting to correct the bad eating habits your older children already have, it's important to clear the house of all unhealthy foods. This is a case where it's important to lead by example. If your kids are supposed to eat healthy, then you should too. By eliminating the unhealthy foods in your house, you can remove temptations and keep track of what your kids are eating. Also, adding a daily children's vitamin supplement to your child's diet can help ensure that they're getting the essential vitamins and minerals they need to be healthy.
- What's on the menu?
Before your next trip to the grocery store, create a weekly menu and buy the items you'll need in order to fulfill your menu. By drawing up this menu, you can prevent indecisive week nights and be sure that every meal contains all the important food groups your child needs to thrive.
- Fast food stops here.
Many American families depend on unhealthy fast food chains to supply them with breakfast, lunch and dinner several times a week. By cooking more meals at home, you can keep track of exactly how many calories your child is eating and how many servings of fruits and vegetables they are getting.
- Pack a lunch.
Although many schools are attempting to spruce up their lunch menus, many are still lacking lower fat, nutritious options. By packing your child's lunch, you can prevent them from consuming unhealthy school lunch foods. Try making a turkey breast sandwich on whole wheat with low fat cheese, tomatoes and romaine lettuce. Instead of potato chips, pack raw carrots and celery with peanut butter and raisins. Add pecans, almonds and walnuts, dried fruit and whole grain crackers for an energy-laden "dessert."
- Leave the "Clean Plate Club".
Chances are, when you were a child, you were encouraged to finish everything on your plate so that you could be the president of the "Clean Plate Club". Even if you were already stuffed to the gills, you'd finish those final few bites just to satisfy your parents. Unfortunately, all we were doing was stretching our stomachs and making it necessary for us to eat more just to feel full and satisfied. Tell your child to finish their veggies first, then allow them to eat as much of their starch and meat as they want. When they say, "I'm full", allow them to leave the table.
If you maintain a healthy diet in your household, your child has a better chance of not developing heart disease, type 2 diabetes and other obesity related illnesses. In the long run, healthy eating habits for kids are important to establish early to ensure that you're giving your child the best start possible. |
1. Sociology is the study of human behavior in society. The sociological imagination works by
watching people and wondering how society influences people's individual lives and group life.
2. The core sociological concepts
Race is a distinction between categories of people based on physical characteristics.
Ethnic groups are social categories of people who have the same culture. Stereotypes
reinforce prejudices and cause them to persist.
Sophie Adams
Social stratification is a fixed, hierarchical arrangement in society. There are groups in
this society that have different access to resources, power, and perceived social worth.
There are three forms of social stratification.
1. There is definitely an impact of various types of groups. Dyads only consist of two
people, thus making the group quite stable. However, when that dyad becomes a triad
(three people), things get more complicated.
1. The socialization process and the agents of socialization influence identity development
and social control. The way one is raised influences how they view the expectations of
society. Through culture, religion, and family, people internalize the expectations of society.
1.There are five characteristics of all cultures. These include: culture is shared, culture is learned,
culture is taken for granted, culture is symbolic, and culture varies across time and place.
1. The scientific method gives sociologists a set of steps to follow when doing research.
Sociologists use inductive reasoning to get general conclusions from simple observations. They
use deductive reasoning to derive specific hypotheses from general theories.
Richard R Johnson
Causes of separation of the United States from the British empire; political theory of the Revolution; its military history; diplomacy of the Revolution; the Revolution as a social movement; intellectual aspects; readjustment after independence; the formation of the American union; the Constitution.
This course will study the American Revolution, from its mid-eighteenth century origins to the ratification of the Federal Constitution of 1787. Its general format will be chronological, but with emphasis placed on such key themes as Anglo-colonial relations, the rhetoric and methods of revolutionary resistance, the conduct and consequences of the War of Independence, constitution making and the nature of republican government, the Revolution as a social movement, its losers as well as its winners, and the forces at work in the "critical" period of the 1780s. We will also consider how scholars and the American public have evaluated the Revolution and its legacy. Besides gaining a fuller understanding of the significance of these themes for the unfolding of American history, students will also be encouraged by the format of the class to sharpen their skills of critical thinking and expression. Class discussions and written assignments center on the readings described below. Each weekly unit of the readings is designed to illuminate some particular episode or issue in early American history, and to enable students to assess the cultural values of the past and act as historians in constructing their own documented analyses through discussion and writing. This is therefore a history course in the double sense that students can expect to learn both about the nature of the past (in this case arguably the most formative and exciting period of American history) and about how they can develop the skills of thinking and research needed for effective study of that past, and its legacy for the present.
Student learning goals
Students will obtain a knowledge of the origins, nature, legacy and significance of the American Revolution, as above
Students will gain insight into how historians have studied and evaluated the history of the Revolution, and how historical scholarship can be analysed and critiqued.
Students will have been able to develop their own skills in research and writing by evaluating primary source materials and selected historical questions both through individual projects and working on small group presentations
Through one-page papers prepared to fuel class discussion, students will learn how to write concise, analytical, and documented prose.
General method of instruction
Three ninety-minute class sections a week, two given generally to lectures by the instructor and one to discussion of assigned readings
No prerequisites, except a lively curiosity about the origins of American society. For the duration of the course, however, regular attendance--at the lectures and more especially at the weekly discussion section--is essential for success, along with a readiness to complete the assigned readings week by week, so as to contribute to class discussions and the timely completion of assignments.
Class assignments and grading
Attendance at the two lectures a week, and at the third meeting, each Thursday, given to class discussion. Two 5-7 page papers (a comparative book review and an assessment of a primary source) that can be rewritten, plus several one-page papers based on the weekly reading. Students will also be asked to do small research projects of their own, and in groups. No midterm, but a take-home final exam consisting of essay questions. Students' work will be judged according to the strength, clarity, and concision of its arguments, its capacity to employ and analyze the appropriate course materials, and the relevance of its response to its chosen topic. This is a W-course, with a consequent emphasis upon writing assignments.
Generally, grades are assigned on the basis of 25% each for the book review, documentary analysis, and take-home final exam, and the remaining 25% for the one-page papers and class performance in the weekly discussions. All assignments must be completed to get credit for the course (which also carries W-course credit).
Garlic mustard is a known aggressive plant newly introduced to Oregon and expected to become widespread if no action is taken. Introduced to the East Coast from Europe, this plant now carpets forest understories of the Northeast and Midwest and continues to spread westward across the United States. In Oregon, garlic mustard is established in the Portland area and the Columbia Gorge, and a new population was recently found in the Rogue River valley.
As one of the few invasive plants capable of dominating undisturbed forest understories, garlic mustard has the potential to alter forest communities. It can change the tree composition of the forest by suppressing hardwoods such as maples and ashes. An abundance of garlic mustard can alter the suitability of habitats for native birds, mammals, and amphibians.
Garlic mustard is a biennial that can be either self-pollinated or cross-pollinated. Each plant can produce hundreds of seeds, which germinate, after a period of dormancy, from late February or early March until May. The seeds are believed to be dispersed primarily by human activity, but are also spread by flowing water, birds, and rodents, and may even catch a ride on the fur of larger animals, such as deer.
Since garlic mustard has no known natural enemies, is self-fertile, and is difficult to eliminate, the most effective control method is to prevent its initial establishment. Burning, herbicides, or cutting are often used to control large existing populations, and biocontrol methods are being developed.
Ron Ferguson, an economist at Harvard, has made a career out of studying the achievement gap — the well-documented learning gap that exists between kids of different races and socioeconomic statuses.
But even he was surprised to discover that gap visible with "stark differences" by just age 2, meaning "kids aren't halfway to kindergarten and they're already well behind their peers."
And yet, there's a whole body of research on how caregivers can encourage brain development before a child starts any formal learning. It's another example, Ferguson says, of the disconnect between research and practice. So he set out to translate the research into five simple and free ways adults can help their little ones.
"Things that we need to do with infants and toddlers are not things that cost a lot of money," he explains. "It's really about interacting with them, being responsive to them."
He calls his list the Boston Basics, and he's on a mission to introduce it to caretakers first in Boston and then across the country.
The principles are:
- Maximize love, manage stress. Babies pick up on stress, which means moms and dads have to take care of themselves, too. It's also not possible to over-love or be too affectionate with young children. Research shows feeling safe can have a lasting influence on development.
- Talk, sing and point. "When you point at something, that helps the baby to start to associate words with objects," Ferguson explains. Some babies will point before they can even talk.
- Count, group and compare. This one is about numeracy. Babies love numbers and counting, and there's research to show they're actually born with math ability. Ferguson says caregivers can introduce their children to math vocabulary by using sentences that compare things: "Oh, look! Grandpa is tall, but grandma is short" or "There are three apples, but only two oranges."
- Explore through movement and play. "The idea is to have parents be aware that their children are learning when they play," Ferguson says.
- Read and discuss stories. It's never too early to start reading aloud — even with babies. Hearing words increases vocabulary, and relating objects to sounds starts to create connections in the brain. The Basics also put a big emphasis on discussing stories: If there's a cat in the story and a cat in your home, point that out. That's a piece lots of parents miss when just reading aloud.
So how do these five principles get into the hands — and ultimately the brains — of Boston's babies?
Ferguson and his team decided the Basics have to go where the parents are. They're partnering with hospitals to incorporate the five principles into prenatal care and pediatrician visits. They work with social services agencies, home-visiting programs, barbershops and local businesses. Ferguson even teamed up with a local church to deliver a handful of talks at the pulpit after Sunday services.
Tara Register runs a group for teen moms at the Full Life Gospel Center in Boston. She says when she learned about the Basics, she thought, "This would be the perfect place. We've got these young moms learning how to parent and trying to figure this out."
Register wishes she had known about the five principles back when she was a teen mom. Years later, she's now helping get the word out to a new generation. She says when she talks about the Basics in her group, the teenage parents are surprised to discover that so much learning happens so early. "Some of this stuff they're probably doing already and they didn't even know there was a name behind it or development behind it."
And that's true for most caregivers. A lot of this comes naturally; the key is to connect those natural instincts to what researchers know about developmental science — something all parents can learn from, Ferguson says. "I have a Ph.D. and my wife has a master's degree, but I know there are Boston Basics that we did not do."
Back in Register's class, she holds one of the babies and points to his head — and the developing brain inside. "You can't imagine how much of a sponge this is right here," she says. The teens brainstorm ways they'll incorporate the Basics into their daily routine. "I'll narrate what I'm doing as I get ready for work," one suggests. "I'll count out the number on his plaything," another offers.
As Register wraps up her lesson, she has one final thought for the group, which she repeats several times. It's essentially the thesis behind all five of the Boston Basics: "Our babies are incredible," she tells the new moms. "They are complex, they are incredible, they are smart. They can take it all in. So don't underestimate them."
Harnessing the power of bacteria
Looking for alternatives to world reliance on fossil fuels for energy, an interdisciplinary team of UW-Madison researchers is studying ways to generate electricity by feeding a species of photosynthetic bacteria a steady diet of sunshine and wastewater.
The concept of such so-called microbial fuel cells emerged nearly three decades ago when an English researcher fed carbohydrates to a bacterial culture, connected electrodes and produced tiny amounts of electricity. Although a few research groups are studying them, microbial fuel cells largely live in the realm of laboratory entertainment and high-school science experiments, says Civil and Environmental Engineering Professor Daniel Noguera. “Now, the idea is taking shape that this could become a real alternative source for energy,” he says.
Noguera, Civil and Environmental Engineering Professor Marc Anderson and Assistant Professor Trina McMahon, Bacteriology Professor Timothy Donohue, Senior Scientist Isabel Tejedor-Anderson, and graduate students Yun Kyung Cho and Rodolfo Perez, hope to develop a large-scale microbial fuel cell system for use in wastewater treatment plants. “It’s inexpensive,” says Noguera, of the nutrient-rich wastewater food source. “We treat the wastewater anyway, so you are using a lot of energy to do that.”
In nature, says McMahon, photosynthetic bacteria effectively extract energy from their food — and microbial fuel cells capitalize on that efficiency. “By having the microbes strip the electrons out of the organic waste, and turning that into electricity, then we can make a process of conversion more efficient,” she says. “And they’re very good at doing that — much better than we are with our high-tech extraction methods.”
Through machinery like that of plants, photosynthetic bacteria harvest solar energy. They also make products to power microbial fuel cells. “In many ways, this is the best of both worlds — generating electricity from a ‘free’ energy source like sunlight and removing wastes at the same time,” says Donohue. “The trick is to bring ideas from different disciplines to develop biorefineries and fuel cells that take advantage of the capabilities of photosynthetic bacteria.”
The benefit of using photosynthetic bacteria, he says, is that solar-powered microbial fuel cells can generate additional electricity when sunlight is available.
Currently, the microbes live in sealed, oxygen-free test tubes configured to resemble an electrical circuit. Known as a microbial fuel cell, this environment tricks the organisms into delivering byproducts of their wastewater dinner — in this case, extra electrons — to an anode, where they travel through a circuit toward a cathode. Protons, another byproduct, pass through an ion-exchange membrane en route to the cathode. There, the electrons and protons react with oxygen to form water.
One microbial fuel cell produces a theoretical maximum of 1.2 volts; however, like a battery, several connected fuel cells could generate enough voltage to be useful power sources. “The challenge is thinking about how to scale this up from the little toys we have in the lab to something that works in the home, on farms, or is as large as a wastewater treatment plant,” says Noguera.
For now, the researchers are combining their expertise in materials science, bacteriology and engineering to optimize fuel cell configuration.
Learning Styles and Preferences
Strategies to Strengthen These Learning Styles
Visual learners learn best from what they see: diagrams, flowcharts,
time lines, films, and demonstrations.
- Add diagrams to your notes whenever possible.
- Organize notes so that you can
clearly see main points and supporting facts, and how ideas are related.
- Use visual organizers (graphs,
charts, symbols, etc.) to help show relationships between concepts/ideas.
- Color-code notes to help you to see categories of information.
- Use visualization as a way to study/prepare for tests and
to retrieve information. (See Mnemonic Devices.)
Verbal learners gain the most learning from reading, hearing
spoken words, participating in discussions, and explaining things to others.
- Attend lectures and tutorials.
- Ask questions to hear more information.
- Read the textbook and highlight no more than 10%.
(See Annotating Text.)
- Record lectures.
- Rewrite your notes and add what you missed from the tape.
- Recite or summarize information. (See Chunking.)
- Talk about what you learn. Work in study groups.
- Review information by listening to tapes you have recorded.
Active learners need to experience knowledge through their own
actions, either by "doing" or by getting personally involved
in their learning. They prefer quick-paced instruction and instructors
who keep things moving.
- Utilize as many senses as possible while learning.
- Go to labs, exhibits, tours, etc. to experience the concepts.
- Try out example problems and questions.
- Study in a group.
- Relate the information to concrete examples as you read or
listen in lectures.
- Think about how you will apply the information being presented.
(See Cognitive Structures.)
- Pace and recite while you learn.
- Act out material or design learning games.
- Use flash cards with other people.
- Teach the material to someone else.
Reflective learners understand information best when they
have had time to reflect on it on their own (and at their own pace).
- Study in a quiet setting.
- When you are reading, stop periodically to think about what
you have read. (See Chunking.)
- Don't just memorize material; think about why it is important
and how ideas are related. (See Cognitive Structures.)
- Write short summaries of what the material means to you.
Factual learners prefer concrete, specific facts and data.
- Ask the instructor how ideas and concepts apply in practice.
- Ask for specific examples of the ideas and concepts.
- Brainstorm specific examples with classmates or by yourself.
- Think about how theories make specific connections with the
real world. (See Questioning.)
Theoretical learners are more comfortable with big-picture ideas,
symbols, and new concepts.
- If a class deals primarily with factual information, try to
think of concepts, interpretations, or theories that link the facts.
- Because you become impatient with details, take the time to
read directions and test questions before answering, and be
sure to check your work. (See Test-taking Strategies.)
- Look for systems and patterns to arrange facts in a way that
makes sense to you. (See Visual Organizers.)
- Spend time analyzing the material. (See Questions.)
Linear (Left Brain)
Linear thinkers find it easiest to learn material presented
step by step in a logical, ordered progression. They can work with
sections of material without fully understanding the whole picture.
- Choose highly structured courses and instructors.
- If you have an instructor who jumps around from topic to topic,
spend time outside of class with the instructor or a classmate
who can help you fill the gaps in your notes. (Use mapping
techniques for taking notes.)
- If class notes are random, rewrite the material according
to whatever logic helps you to understand it. (See Cornell Notes.)
- Outline the material.
Holistic (Right Brain)
Holistic thinkers progress in fits and starts. They may feel
lost and unable to solve problems, until they can see the big picture
and the relationships between ideas. They need the big picture
in order to make sense of details. They tend to be creative.
- Recognize that you are not slow or stupid.
- Before reading the chapter, preview
it by reading all the subheadings, summaries, and any margin notes.
- Instead of spending a short time on every subject every night,
try immersing yourself in just one subject at a time.
- To concentrate on one course at a time, take difficult subjects
in summer school or when you have fewer courses. (Warning: Make
sure you have enough time to study and to prepare projects and
papers. The same amount of material is covered in a shorter
time in summer and intersession classes.)
- Relate subjects to things you already know. Ask yourself how
you would apply the material. (See Questions.)
- Use maps and visual
organizers to help yourself get the big picture.
Adapted in part from a web site developed by Richard M. Felder and
Barbara A. Solomon, North Carolina State University at: www2.ncsu.edu/unigy/lockers/users/f/felder/public/ILSdir/styles.htm
Digital Imaging, Part 1 - PPI
Digital imaging in today's world has become extremely popular. Many have replaced traditional cameras with digital ones, and others are scanning (converting) older prints and film into digital equivalents.
However, as digital technologies replace older technologies, quality control becomes extremely important. One measure of quality is PPI, or "pixels per inch." Unfortunately, PPI is often misunderstood.
To understand PPI, it is important to understand what a digital image actually is. To a computer, a digital image is simply a collection of tiny colored "dots" (interpreted as small squares) called pixels, arranged in rectangular fashion. As a whole, the computer sees the image as information. It has no way of knowing what the subject of the image is, or whether the image is in focus.
To a human being, these pixels collectively form an image that has some sort of meaning. We don't see colored blocks, but rather a beautiful mountain vista or a moment during a birthday party.
It is critical to understand that a pixel by itself has no real-world or physical size. It is a piece of information, not a physical item. As a result, because a digital image is composed entirely of pixels, it also lacks tangible dimensions. This becomes a problem when we want to print such an image, because the printer has no way of knowing what physical size the pixels should be.
PPI gives a photo physical dimensions by simply allocating a specified number of pixels to each inch of the print. When PPI is changed, the pixels in the image itself never change - only the number of pixels allocated per inch changes.
For example, when we increase the PPI, we actually decrease the total print dimensions (tangible dimensions, measured in inches) of the image. Each inch will contain more pixels, and the printed size of each pixel decreases to accommodate the extra pixels per inch. The image will print smaller, but with better quality.
In contrast, if we decrease the PPI, we increase the total print dimensions of the image. Each inch will contain fewer pixels, and the printed size of each pixel increases. The image will print larger, but with lower quality.
In another example, let's say we have two digital images. These images are nearly identical. Both are of the same subject. Both are 4x6 inch images. However, one image is 150 PPI and the other is 300 PPI. The 300 PPI image will have a much larger file size (usually measured in kilobytes (KB) or megabytes (MB)) than the 150 PPI image. This is because the 300 PPI image has four times the total pixels of the 150 PPI image (twice as many in each dimension).
If both images are printed on a common printer with the same printer settings, both will print out exactly the same size (4x6 inches). However, the 300 PPI image will print much clearer and crisper than the 150 PPI image.
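Under the standard definition of PPI as a linear measure, the arithmetic behind this two-image example can be sketched in a few lines of Python; the helper name is invented for illustration:

```python
def print_size_inches(width_px, height_px, ppi):
    """Physical print dimensions (in inches) of an image at a given PPI."""
    return width_px / ppi, height_px / ppi

# A 4x6-inch print needs 1200x1800 pixels at 300 PPI,
# but only 600x900 pixels at 150 PPI.
pixels_300 = 1200 * 1800  # 2,160,000 pixels
pixels_150 = 600 * 900    #   540,000 pixels

print(print_size_inches(1200, 1800, 300))  # (4.0, 6.0)
print(print_size_inches(600, 900, 150))    # (4.0, 6.0)
print(pixels_300 // pixels_150)            # 4
```

Both images print at 4x6 inches, but the 300 PPI version carries four times as many pixels, which is why its file size is so much larger.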
Generally speaking, a 4x6 inch photo with low PPI will look grainy (or 'pixelated') when printed, whereas a 4x6 inch photo with a very high PPI will look cleaner and crisper. The more pixels you have in your image, the more flexible PPI control will be.
An important note should be made concerning PPI versus DPI. PPI ("pixels per inch") and DPI ("dots per inch") are often used interchangeably. However, DPI doesn't always refer to pixels. Keep that in mind when purchasing digital-imaging products or related accessories.
Most modern imaging devices (scanners, cameras, printers, etc.) take a lot of the guesswork out of digital imaging and photography in an attempt to maximize quality, but having a basic understanding of what PPI actually is and how it works gives you more control over your digital images.
In subsequent articles in this Digital Imaging series, PPI will be discussed in further detail, along with how it applies to scanners, printers and other digital imaging technologies.
Have comments or suggestions for a weekly Tech Tips article? Send an email to [email protected].
In How fast is bit packing?, we saw how to store non-negative integers smaller than 2^N using N bits per integer by a technique called bit packing. A careful C++ bit packing implementation is fast: e.g., over 1 billion integers per second.
However, before you pack the integers, you might need to scan them to determine the number of bits needed (N). Unfortunately, it is a relatively expensive process.
Given a positive integer x, we seek the smallest integer N such that the integer x is less than 2^N. The value N is often called the integer logarithm of x.
There are several clever techniques to compute the integer logarithm using portable C code. Yet you can do better using processor-specific instructions. The GNU GCC compiler makes this easy with a special function that counts the number of leading zeros for 32-bit integers (__builtin_clz). Even so, it is relatively slow.
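In Python, for example, the same quantity can be computed with the built-in bit_length() method, which plays the role of a 32 - __builtin_clz(x) expression in C (a sketch, not the article's benchmark code):

```python
def integer_logarithm(x):
    """Smallest N such that x < 2**N, for positive x."""
    return x.bit_length()

print(integer_logarithm(1))    # 1
print(integer_logarithm(255))  # 8
print(integer_logarithm(256))  # 9
```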
Thankfully, you can avoid computing the integer logarithm of each integer by a simple test involving a right shift:
if (x >> b) b = integer_logarithm(x); /* recompute only when x needs more than b bits */
With proper loop unrolling, this is nearly as fast as bit packing.
Update: Preston Bannister correctly points out that you can do much better. Simply compute the logical or between all integers and then compute the integer logarithm of the result. It is much, much faster.
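The idea is that the bitwise OR of all the integers needs exactly as many bits as the largest of them, so a single logarithm suffices. A Python sketch:

```python
from functools import reduce

def max_bit_width(values):
    """Bits needed to pack the largest value: OR everything, take one logarithm."""
    return reduce(lambda a, b: a | b, values, 0).bit_length()

data = [3, 17, 255, 42]
print(max_bit_width(data))                # 8
print(max(v.bit_length() for v in data))  # 8, same result
```

One bit_length() call replaces one call per element, which is why this approach is so much faster.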
To experiment with this problem, I wrote a small program which finds the maximum integer logarithm of a large array of random integers. It then packs the integers using this logarithm.
- I find that I can pack between 1 billion and 2 billion integers per second.
- I compute the maximum integer logarithm at a rate of 3 billion integers per second.
When plotting the speeds as functions of the actual maximum integer logarithm, we see that the computation of the logarithm is not sensitive to the value of the actual logarithm, except for the approach based on the __builtin_clz function which is slower when the logarithm is less than 8.
In my tests, I used the GNU GCC 4.6.2 compiler on an Intel core i7 processor. My code is freely available.
Conclusion: When packing an array of integers, finding the maximum logarithm can take anywhere from 1/4 to 1/3 of the running time. However, brute-force techniques that compute the integer logarithm of every integer are much slower.
Domestic cats are highly susceptible to plague. These animals also are known to have been sources of Y. pestis infection for humans.
Transmission from cats to humans occurs through:
- Bites or scratches.
- Direct contact with infectious exudates.
- Inhalation of infectious respiratory droplets.
Cats that are allowed to roam freely also can become infested with Y. pestis-infected rodent fleas and transport these fleas into home environments.
Cat-associated human cases were first verified in the U.S. in 1977. Since that time, 25 human cases have been associated with exposure to Y. pestis-infected cats. Of these, 7 occurred in veterinarians or their assistants, and 5 presented as primary pneumonic plague, which is a particularly dangerous form of the disease that can be transmitted from human to human through coughing and respiratory droplet spread.
Basic Math Fractions
A response to the question:
I would like suggestions about teaching fractions and division using manipulatives to an academically challenged 7th grade self-contained class.
As an introduction to the instruction, you may want to remind your students about what we are trying to find out when we divide. For example, if we were dividing 138 by four, we would be trying to find out how many groups of 4 there are in 138, or how many are in each group for 4 equal groups, depending on the context of the problem we were doing the arithmetic for. (You have 138 pairs of socks to put in boxes, and four pairs will fit into each box. How many boxes will you need? vs. You have 138 pairs of socks to store in 4 boxes. How many should you put into each box to arrange them evenly?)
When we divide with fractions we are looking for the same sort of information. 1/2 divided by 1/4 is really asking, "If you have half a pie, how many fourths of a pie do you have?" Since 2 fourths is equivalent to one half, the answer is two. It forces us to think about the fractions in relation to the whole.
How about one fourth divided by one half? That is asking how many halves there are in one fourth. If you draw a whole and shade a fourth of it, you can see that half of a half is in a fourth. So the answer is one half.
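Both pie questions can be checked with Python's fractions module; a quick sketch, not part of the original exchange:

```python
from fractions import Fraction

half = Fraction(1, 2)
fourth = Fraction(1, 4)

# "If you have half a pie, how many fourths of a pie do you have?"
print(half / fourth)   # 2

# "How many halves are there in one fourth?"
print(fourth / half)   # 1/2
```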
Here is where a pitfall enters. Some teachers have taught their students that when you divide you get a smaller answer. That is true for whole numbers, but not necessarily for fractions and decimals. When students divide one half by one fourth and get the answer "two," it does not seem reasonable to them. Having them actually cut a half into fourths (fourths of the original whole, that is, not cutting a half into four equal pieces) will help them see what is happening when fractions are divided.
-------------
Thank you for your fraction tips.
Now, I also am working with a 7th grade boy who is just this year memorizing most of his multiplication facts. He is working on simple one-digit division problems and has a lot of difficulty remembering the correct number for the quotient. He gets very frustrated. What do you suggest as the best way to proceed with him? Would it be better to have him work with a calculator or a multiplication chart? Any other suggestions?
-------------
Perhaps he really doesn't "see" what he is doing. Have you tried letting him use counters to work out these problems? For example, if the problem is 29 divided by 3, have him take 29 beans, or other materials, and divide them into three groups. Ask him how many are in each group. Then have him divide them into groups of three, and ask him how many groups he made. If you ask him to solve very simple word problems (that could be solved with division) while he uses these manipulatives, you will also be helping him figure out a context for this "skill" to be used in. Once he makes the connection that dividing is making groups from a whole set, he may have less trouble figuring out a reasonable estimate and remembering the correct answer.
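The two bean activities correspond to the two meanings of division, which Python's divmod makes concrete; a sketch:

```python
# 29 beans split into 3 equal groups: 9 in each group, 2 left over.
# 29 beans split into groups of 3: 9 groups, 2 beans left over.
groups, leftover = divmod(29, 3)
print(groups, leftover)  # 9 2
```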
On the other hand, if he is having trouble remembering the quotient because he still hasn't mastered the multiplication facts, your strategy of using the chart is a good one. You might want to have him look for patterns in the chart, and then relate what he sees to models using beans or some other manipulatives. He might begin with 24, and look for all the ways he can make equal groups. Then he could find these groups on the multiplication chart. Hope this gives you some ideas to start with.
-Gail, for the T2T service
Join a discussion of this topic in T2T.
While interacting with many students across different schools, I have noticed that most of them are really struggling with their study habits. It has been shown that your study habits and your posture while reading affect your concentration and understanding. It is also evident that the routine followed by students throughout the year is reflected in their final grades.
No two students have the same grasping power, and thus different students understand the same subject at different levels. Good study habits help students get higher grades. Different students have different learning capabilities and styles, which help them learn new subjects throughout their lives. These learning styles are broadly classified as below:
- Auditory learners: Students who understand things better by hearing them. They listen to lectures and note down the details as per their understanding.
- Visual learners: Students who learn more by seeing things. The aids used in this style of learning include pictures, images, and prototypes.
- Tactile learners: Students who like to do hands-on work to understand the details. These students like to prepare their own models to understand the concepts.
Students are advised to first understand the mode by which they benefit most and then use it for better learning. However, it is not possible for every student to categorize himself in the above categories, and hence it is the parents' responsibility to identify the strengths of their children and help them grow in the most comfortable fashion.
Apart from these learning categories, there are different modes of studying which students follow and some of those best strategies are discussed below:
- Time Management: Whether in class or at home, every student is asked to prepare a timetable. All schools have their own timetables, which help them cover the maximum number of subjects in limited time, but most of the time students fail to manage a timetable at home. Studying is not only about completing your homework; it is about how you learn different things using the available hours in a day. The tips for time management are as below:
- Prepare short timetables.
- Decide the time to devote to studying daily.
- Don’t take breaks quickly instead take quick breaks.
- Avoid late night study and better to start your day early.
- Take Notes: Whether in class or studying at home, taking notes is the most reliable way of learning. It helps in exam preparation and quick revision. Note down bullet points instead of the full story, and understand the concept so you can articulate the details in your own language. The tips for taking notes are as below:
- Jot down the bullet points
- Use the technical terms while taking notes
- Be precise while taking notes in the class and try to write the important points instead of a complete story
- Rewrite the details at home with descriptions in your own language, but don't forget to include the technical terms wherever necessary
- Mind your posture: Many students like to read books in the most comfortable way possible, sometimes sitting on lounges or sofas in front of television sets while studying. This is an incorrect way of reading, and all students should avoid it. Use a table and chair, sit straight with both hands on the table, keep your back as straight as possible, and avoid resting your chin on your palm while reading.
- Quiet study room: Try to make your study area as calm and quiet as possible. Students are advised to avoid studying in TV rooms or drawing rooms if the family expects frequent visitors. A room with proper light and ventilation always helps increase interest in studying. Light music can help concentration, but only for short durations. The other tips on the study area are as below:
- Avoid Television sets or Playstations from the study area
- The parents should visit the study area regularly and keep an eye on the items used for decorating the area
- Avoid frequent visitors in the study area
- Ask Questions: "No question is silly till it is answered." Remember this fact, and don't feel shy about asking any question related to the topic of discussion. During general discussions, students are advised to frame questions on different topics and discuss them with teachers and parents. Asking questions always helps in clearing doubts and, in turn, increases learning. Questions and queries can also be discussed in group study; group study, within certain limits, is a good option for learning and understanding the details.
- Revise Regularly: Revision is the best way to keep yourself updated with your learning throughout the year. Once you take down your notes and prepare your study materials, don't put them in your cupboard; revise them regularly. Make a habit of revising different subjects weekly, depending on your capability and bandwidth.
- Plan-Do-Check-Act (PDCA): The PDCA cycle is mostly used in the software industry, but students can adapt it to enhance their study habits. The tips on using the PDCA cycle are as below:
- Plan: make your objectives and schedules clear and plan accordingly. Plan for a week and try to study as per the schedule.
- Do: Once you have scheduled your week try to study as per your planning. Avoid the distractions as much as possible. Avoid long breaks and unwanted discussions. Manage your other activities as per your study schedule and fill the breaks with some physical activities or with your hobbies.
- Check: Once you are done with your weekly plan, evaluate the results. Think about the gaps and distractions during your weekly schedule and plan the next week keeping in mind the details of the last week.
- Act: Compare the results. Evaluate the differences between the planned schedule and the schedule you actually followed. Fill the gaps, if any, and stick to the plan if you are on the right track. Remember to plan the week as per your strengths, and don't overestimate or underestimate yourself.
- Maintain your health: Great minds reside in healthy bodies. Your learning capability depends on how healthy you are. A healthy body is not just a physically fit one; a person is healthy when his immunity is strong. The tips to maintain your health are:
- Eat good food, preferably home-made. Avoid junk foods as much as possible.
- Take sound sleep, preferably 8 hours (10pm to 6am)
- Avoid late night sleep or late night study
- Exercise regularly; mild exercise such as brisk walking, jogging, or yoga helps increase concentration. Growing students should not do heavy exercise, especially without guidance
So far we have covered different techniques that can help students develop good study habits. Follow them and see the difference.
The LIGO detector has now seen at least two black hole mergers. The second merger it spotted was about what we would expect given a binary system of two massive stars. Both explode, leaving black holes behind that are just a bit more massive than the Sun; these later go on to merge.
But the first merger detected by LIGO was something rather unusual given that both black holes were around 30 times the Sun's mass. So far, we have not observed anything that could produce black holes in that mass range. Now, a new modeling study suggests that mergers with these sorts of masses might be common—but only if stars can collapse directly into a black hole without exploding first. This situation would require some of the Universe's most luminous stars to simply be winking out of existence.
The black holes involved in these mergers almost certainly began their existence as binary star systems. So in the new study, the authors performed a massive number of simulations of these systems using a modeling package called StarTrack. The simulations took into account the different amount of heavy elements present at different times in the Universe's existence—there are 32 different levels of heavy elements, and the team ran 20 million simulations at each of them. The simulations also took into account various models of the collapse of massive stars, as well as whether the process generated an asymmetrical force that could kick the resulting black hole into an energetic orbit.
With the simulations run, the authors could sift through the results and look for systems that produced the sorts of heavy black holes that LIGO detected merging. They could then play back the simulation and examine the process that produced the black holes in the first place.
The models indicate that the systems that produce LIGO-like mergers started out as giant stars with very few heavier elements. These giant stars have only 10 percent of the metal levels found in the Sun, but they're somewhere between 40 and 100 times more massive. These sorts of stars were much more common in the early Universe, and 75 percent of the simulations indicate that the binary system formed within the first two billion years of the Universe's existence.
In the simulations, helium and heavier elements form quickly at the core of these stars. The core then ejects its outer layer of hydrogen. This action creates a pair of Wolf-Rayet stars, extremely bright and compact objects.
At this point, one of the two stars does something unusual: it collapses directly into a black hole without exploding in a supernova. While there has been a lot of theoretical work that indicates it should be possible, we've never actually observed a star collapsing out of existence.
Assuming it happens, the resulting black hole should be in the right mass range to produce a LIGO-like merger. But the merger doesn't happen immediately (indeed, it may take 10 billion years after the birth of the stars). Because of the lack of explosion, the black hole would continue to orbit close to its companion. Over time, it would help draw out the outer layers of the companion star, creating a situation where the black hole's orbit would be inside the envelope of the star.
After the star contracts, the result would look like what we call X-ray binaries: an X-ray source orbiting a massive, luminous star. The authors point out that we've observed two systems that look like the modeled results (IC10 X-1 and NGC 300 X-1). Given a bit more time, however, the second star also undergoes a direct collapse, creating a pair of orbiting black holes, each about 30 times as massive as the Sun. These black holes then take about 5 billion years to spiral into each other and merge.
Similar binary systems are still forming, so the simulations also indicated that there was a 25-percent chance that an accelerated version of this process might have started within the last two billion years.
But really, the most notable thing about this process is the fact that it really only works through the direct collapse of stars. "A striking ramification of this is the prediction that hot and luminous Wolf–Rayet progenitors of massive black holes should disappear from the sky as a result of direct collapse to a black hole (that is, with no supernova explosion)," the authors write. Since these are the most luminous stars in the sky, that event should be easy to spot; the authors note that observational campaigns intended to do just that are already underway.
Until we observe this process happening, these simulations should be viewed as pretty tentative. They provide a possible explanation for one of the puzzling aspects of the LIGO observation, but the main appeal is that most of the other explanations are even less physically plausible. |
The Great Depression was a global phenomenon: every economy linked to international financial and commodity markets suffered. The aim of this book is not merely to show that China could not escape the consequences of drastic declines in financial flows and trade but also to offer a new perspective for understanding modern Chinese history. The Great Depression was a watershed in modern China. China was the only country on the silver standard in an international monetary system dominated by the gold standard.
Fluctuations in international silver prices undermined China’s monetary system and destabilized its economy. In response to severe deflation, the state shifted its position toward the market from laissez-faire to committed intervention. Establishing a new monetary system, with a different foreign-exchange standard, required deliberate government management; ultimately the process of economic recovery and monetary change politicized the entire Chinese economy. By analyzing the impact of the slump and the process of recovery, this book examines the transformation of state–market relations in light of the linkages between the Chinese and the world economy. |
The Consequences of the Halting Problem
So what does it mean that there is no program that can determine whether any given program will halt on a given input? Perhaps surprisingly, quite a bit. Some of the consequences are striking: for instance, it's not possible to write a program that can determine whether any given program will output its own binary code (i.e., the problem of detecting viruses is theoretically undecidable).
NOTE: For readers just joining us, you should be familiar with the halting problem to follow the arguments in this discussion.
How can this be? Well, remember that if we claim to be able to determine whether a particular condition holds, we must be able to do so for every possible program. Now, what if we embed whatever program is in question as the first part of a new program that we construct? For instance, suppose we had a program that could tell whether a given program outputs "Hello world"; we'll call it DETECT-HELLO. DETECT-HELLO takes one argument: a program.
Now, let's build a program that tests whether some program we're interested in will halt on a given input: first, we'll run the program in question on that input, and then our new program will output "Hello world". Of course, if the program we're testing never halts, then the program we built will never output "Hello world".
So, to determine if the program in question halts on the input we've given it, all we need to do is run DETECT-HELLO on the program that we have constructed. In effect, DETECT-HELLO uses the output of the program we constructed as a flag for whether or not the program being tested will actually halt.
This approach is called a reduction because it shows how solving the halting problem can be reduced to solving another type of problem: if DETECT-HELLO existed, we could use it to solve the halting problem, so DETECT-HELLO cannot exist. Using reductions, you can show that it is impossible to write programs to do all sorts of apparently simple things. In many cases, the difficulty lies in one direction of a yes/no question: a program can confirm that "Hello world" is eventually output simply by simulating until it appears, but no program can always determine that it never will be.
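As a sketch (in Python; this code is not from the original article), the construction looks like this. The `detect_hello` parameter stands for the hypothetical decider that the argument proves cannot exist; the wrapper-building step, however, is perfectly real:

```python
# A sketch of the reduction. detect_hello is the hypothetical decider
# that answers "does this program ever print Hello world?" -- the whole
# point of the argument is that no such decider can exist.

def build_wrapper(program_source: str, program_input: str) -> str:
    """Return the source of a new program that first runs
    `program_source` on `program_input`, then prints "Hello world".
    The new program prints "Hello world" if and only if the wrapped
    program halts on that input."""
    return (
        f"program_input = {program_input!r}\n"
        + program_source + "\n"
        + 'print("Hello world")\n'
    )

def would_halt(program_source: str, program_input: str, detect_hello) -> bool:
    # If detect_hello existed, this one-liner would solve the halting
    # problem: the wrapper prints "Hello world" exactly when the wrapped
    # program halts.
    return detect_hello(build_wrapper(program_source, program_input))
```

Since `would_halt` would turn any working `detect_hello` into a halting-problem solver, and the halting problem is unsolvable, no correct `detect_hello` can exist.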
Of course, in some cases, we can certainly prove that a given program will not halt, or that a given program cannot output "Hello world". But that's not really the point of all of this theory -- the point is that no general method, no algorithm, can be devised that will work in all cases. It's a bit like the difference between knowing the times tables and being able to multiply any two numbers -- in one case, you can answer questions about a few different inputs (5 times 6 or 7 times 8) but you won't be able to multiply everything (123 times 542). |
According to a well-known theory in quantum physics, a particle’s behavior changes depending on whether there is an observer or not. It basically suggests that reality is a kind of illusion and exists only when we are looking at it. Numerous quantum experiments were conducted in the past and showed that this indeed might be the case.
Now, physicists at the Australian National University have found further evidence for the illusory nature of reality. They recreated John Wheeler’s delayed-choice experiment and confirmed that reality doesn’t exist until it is measured, at least on the atomic scale.
Some particles, such as photons or electrons, can behave both as particles and as waves. This raises the question of what exactly makes a photon or an electron act as either a particle or a wave. That is what Wheeler’s experiment asks: at what point does an object ‘decide’?
The results of the Australian scientists’ experiment, which were published in the journal Nature Physics, show that this choice is determined by the way the object is measured, which is in accordance with what quantum theory predicts.
“It proves that measurement is everything. At the quantum level, reality does not exist if you are not looking at it,” said lead researcher Dr. Andrew Truscott in a press release.
The original version of John Wheeler’s experiment, proposed in 1978, involved light beams bounced between mirrors. At the time, however, the available technology made it difficult to implement the experiment and obtain conclusive results. It has now become possible to recreate it using helium atoms scattered by laser light.
Dr. Truscott’s team put a hundred helium atoms into a state of matter called a Bose-Einstein condensate, then ejected atoms until only one remained.
|
Blair Webber & Molly Pennington
Goals of reform
- Establishment of more asylums
- Improvement of existing asylums
- Re-education of guards to treat patients humanely
- Abolition of criminal punishment for patients: placement in safe asylums rather than prisons
Reasons for reform
- People suffering from mental disabilities and illnesses were treated as criminals
- Patients were often incapable of understanding their mistakes and should have been treated differently from someone in control of their mind and body
- A general lack of compassion for those who could not help themselves
- Wardens and guards were not trained to handle patients humanely.
"... I come as the advocate of helpless, forgotten, insane men and women; of beings sunk to a condition from which the unconcerned world would start with real horror"
Dorothea Dix was the spokesperson and leader of the reforms on Asylums.
When visiting jails, Dix witnessed a woman in Burlington forcibly tied to a bed, allowed to rise only occasionally, typically every other day, to spare the keepers of the prison any trouble. She commonly witnessed the chaining and caging of the insane, and in Bridgewater she saw three insane people confined to a room they could not leave.
- Improved existing facilities in Rhode Island and New York and established more hospitals in many other states.
- Persuaded the Massachusetts legislature by writing a strongly worded letter concerning the mistreatment of the mentally ill in prisons.
- Creation of some new hospitals
- Reforms of existing ones
- Change in sentencing: the mentally ill were no longer prosecuted as criminals
- Moderate movement: this was a situation that any human being could see needed to be fixed; no party or group opposed the fair treatment of mentally ill people.
Memorial to the Legislature of Massachusetts, Dorothea Dix. 1843
Prentice Hall Classics (Text book)
Nation of Nations: A Narrative History of the American Republic (Text book) |
Drought impact in Ethiopia: mitigation through children's education
Nearly 6 million school children’s education could be affected by Ethiopia’s worst drought in 50 years. The root cause of school dropout is food and water scarcity; if food and water are restored to schools, children will return. Twenty NGOs are ready to scale up and support the Government of Ethiopia’s education-in-emergency response but, to date, dedicated funding from Ethiopia’s Humanitarian Response Fund has not been allocated to NGO education partners.
The key impacts of Ethiopia’s worst drought – food and water scarcity, devastation of livelihoods, forced migration and increased child malnutrition – are having significant knock-on effects for children’s education, just at a time when Ethiopia has made significant strides in improving children’s access to a quality and inclusive education.
Across the drought-affected areas, children are attending school less regularly as a consequence of the drought, if not dropping out altogether. While children are out of school for a myriad of reasons, two common factors, a chronic lack of food and water, are pushing children away from their classrooms and placing their protection and development at risk.
In times of emergencies, particularly during drought, schools can provide a platform for an integrated emergency response for children. When children are in school, they can access food, safe water, sanitation, nutrition and psychosocial support while at the same time be protected and continue to learn and develop.
In the long term, if children continue to access an education during times of drought, particularly one that incorporates disaster preparedness and adaption, their chances of reaching their potential and their capacity to cope and adapt when faced with future droughts increases. This will have positive impacts for their communities and Ethiopia’s overall social and economic prospects.
Yet the power of schools to sustain and protect Ethiopia’s children during the current drought crisis, and their role in mitigating the impact of cyclical droughts on children’s long-term development, remain to be realised. |
Reading ability is tested throughout school years and was commonly recorded in terms of reading age, although it is more usual now to express competence in reading in terms of readability levels. The Rose Report (2006) identifies five competencies that beginner readers need in their ‘toolkit’ if they are to be able to read.
Schools in England have traditionally used a variety of approaches and reading schemes (including, among others, Dick and Jane, Janet and John, the Initial Teaching Alphabet (ITA), Pirates, and the Oxford Reading Tree) to reinforce word recognition in familiar and sequential texts. More recently these were succeeded by a ‘real books’ approach, in which children could choose their own books from classified sets, with vocabulary varying from simple to complex and from a small number of words to a greater range. The disadvantage of this approach was that children had to learn new vocabulary with each chosen book, which could place some children at a distinct disadvantage if book-reading was not a common activity at home.
With the onset of the National Literacy Strategy in 1997, a more systematic literacy hour was adopted in primary schools, introducing a phonics approach to the learning of reading, together with individual and shared reading, the use of a Big Book for all children to look at, and diverse activities to reinforce phoneme and word recognition.
The Rose Report suggests that ‘there is much convincing evidence to show from the practice observed that, as generally understood, “synthetic” phonics is the form of systematic phonic work that offers the vast majority of beginners the best route to becoming skilled readers’ (para. 47). Under current government policy, the use of synthetic phonics is the preferred approach to the teaching of reading in Key Stage 1.
Department for Education and Skills Independent Review of the Teaching of Early Reading (DfES, 2006).
http://www.standards.dfes.gov.uk/rosereview/ Full summary of the Rose Report. |
Fuel Gas Dehumidifiers / Driers
A fuel gas dehumidifier or gas drier is an ancillary system to a gas engine that is used to remove moisture from the fuel gas for the cogeneration system (desiccation). Removing water from the fuel gas helps to optimise the combustion process in the engine, prevents condensation and helps protect the engine from acidic condensate.
The drying of the gas is achieved by cooling the raw incoming gas to a temperature below the dew point temperature in two heat exchangers. This causes the moisture to be removed as condensate prior to reheating the gas as it leaves the dehumidifier. The added benefit of this system is that some sulphur is removed from the gas in the condensate, helping improve the overall gas quality.
- Protects engine
- Removes moisture from fuel gas
- Reduces sulphur in fuel gas
- Reduces engine maintenance requirements
- Improves oil life
Factors to consider
Fuel gas dehumidifiers are specified based upon the following key factors:
- Gas type
- Gas flow rate
- Gas methane concentration
- Raw gas temperature
- Gas contaminants
- Permissible pressure loss
Different gas dehumidifiers may be supplied depending upon which type of fuel gas is being dried (e.g. biogas, landfill gas or another gas). The gas flow rate and methane concentration determine how much gas the dehumidifier must be able to process, while the raw gas temperature at the inlet determines how large the cooling capacity of the dehumidifier must be. Contaminants in the fuel gas may produce undesirable effects for the unit and should be checked to ensure they are within acceptable limits. Finally, permissible pressure loss must be considered: gas holders usually operate at low pressures, there is a further pressure loss across the dehumidifier, and it is undesirable to suck gas through pipes under negative pressure.
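As a rough illustration of how flow rate and temperatures drive sizing, the sketch below (not from Clarke Energy or GE Jenbacher documentation, and using idealised assumptions: gas saturated with water vapour at both inlet and outlet, ideal-gas behaviour, near-atmospheric pressure) estimates the condensate removed when cooling gas to a target dew point, using the Magnus approximation for the saturation vapour pressure of water:

```python
import math

def saturation_vapour_pressure_kpa(t_c: float) -> float:
    # Magnus approximation for water over liquid, valid roughly 0-60 degC.
    return 0.61094 * math.exp(17.625 * t_c / (t_c + 243.04))

def condensate_rate_kg_per_h(flow_nm3_h: float, t_in_c: float,
                             t_out_c: float,
                             pressure_kpa: float = 101.325) -> float:
    """Rough condensate removal rate when cooling saturated gas from
    t_in_c to t_out_c. Assumes saturation at both temperatures and
    ideal-gas mixing; for first-pass sizing only."""
    M_WATER = 18.015    # g/mol
    MOLAR_VOL = 22.414  # L/mol at normal conditions

    def water_g_per_nm3(t_c: float) -> float:
        p_w = saturation_vapour_pressure_kpa(t_c)
        # mol water per mol dry gas, converted to grams per normal m3
        return (p_w / (pressure_kpa - p_w)) * (M_WATER / MOLAR_VOL) * 1000

    removed_g_per_nm3 = water_g_per_nm3(t_in_c) - water_g_per_nm3(t_out_c)
    return flow_nm3_h * removed_g_per_nm3 / 1000
```

For example, cooling 100 Nm³/h of saturated gas from 35 °C to a 5 °C dew point removes roughly 4 kg of water per hour under these assumptions; real unit sizing would also account for gas composition, actual pressure and the reheating stage.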
Fuel Gas Quality
There are specific requirements for fuel quality for GE Jenbacher gas engines, laid out in the GE Jenbacher technical instruction document (please contact your local Clarke Energy office to receive a copy). Moisture levels in the gas must be kept to a minimum in order to protect the engine and to optimise combustion. High moisture is a particular characteristic of biologically derived gases and coal gas.
Clarke Energy are able to supply high-quality gas dehumidifiers within a turn-key scope of supply, or on a free-issue basis with the generator. We do not supply dehumidifiers outside the scope of a GE Jenbacher gas engine or combined heat and power installation.
Please contact your local Clarke Energy office to find out more information. |