Study: Gut bacteria tied to weight gain / Thin mice got fat when given bugs from the digestive tracts of obese mice
Published 4:00 am, Thursday, December 21, 2006
The guts of obese people are teeming with a distinctive mix of bacteria that seems to make them prone to gaining weight, a startling discovery that could lead to new ways to fight the obesity epidemic, researchers are reporting today.
Obese people have more gut microbes that are especially efficient at extracting calories from food, the researchers said, and the proportion of these super-digesting organisms declines as they lose weight.
When the scientists transplanted gut bugs from obese mice into lean mice, the thin animals started getting fat, providing more support for the theory that the bacteria that populate the gut play an important role in regulating weight.
"There appears to be a link between obesity and the type of bugs in your gut," said Jeffrey Gordon of Washington University School of Medicine in St. Louis, who led the series of experiments being published in today's issue of the journal Nature.
Gordon and his colleagues stressed that more work is needed to explore the findings. But if the findings are confirmed and better understood, they could lead to insights into one of the world's biggest health problems, they said.
"In the future, we could potentially manipulate the structure and function of these microbial societies as a new approach toward preventing and treating obesity," Gordon said.
The findings produced enthusiasm and caution from other researchers.
"This is very exciting," said Barbara Corkey at Boston University. "We don't know why the obesity epidemic is happening. People say it's because of gluttony and sloth. I think there must be something else. It's exciting to see some work being done on alternative explanations."
Others suspect that if gut microbes do play a role, it's probably relatively minor.
"This is extremely interesting," said Hans-Rudolf Berthoud of the Pennington Biomedical Research Center in Baton Rouge, La. "But lifestyle and the environment are still the major factors in the obesity epidemic."
The researchers themselves cautioned against trying to manipulate "gut flora" with antibiotics or microbe-containing "probiotic" pills sold in health food stores.
Scientists have long known that the human body is crawling with germs, primarily bacteria, which cover the skin and inhabit every orifice. By some estimates, only 1 out of every 10 cells in the human body is actually human. These organisms perform a host of functions, especially in the gut, where they help digest food.
To explore the role of the organisms in weight regulation, Gordon's team first compared the gut flora of 12 obese people to that of lean subjects. The obese tended to have a significantly greater proportion of one of the two main types of bacteria found in the gut, known as Firmicutes, than of the other, known as Bacteroidetes.
Next, the researchers spent a year meticulously measuring the gut flora of the obese volunteers as they tried to lose weight by eating low-calorie diets that restricted either their fat or carbohydrates. As they lost weight, the proportion of Firmicutes fell and the proportion of Bacteroidetes rose, the researchers found.
When the researchers conducted detailed molecular analyses of the two types of bacteria in the laboratory, they discovered the Firmicutes were much better at extracting calories from food.
And when they transferred gut flora from obese mice to sterile mice devoid of gut flora, the recipient animals tended to gain weight, confirming that the pattern was associated with weight gain.
"For the first time, we see that there is a correlation between the microbial gut ecology and the obese state," Gordon said.
The researchers acknowledged that the difference in the number of calories extracted by the microbes is relatively small. But over time even a small differential could be significant, they said.
Many questions remain, however. It's unclear what determines the makeup of a person's gut flora -- it might be the microbes they pick up from their mothers; it might be their exposure to antibiotics. It's also unclear how fat tissue and gut flora might communicate, and whether the change in gut bacteria causes or is a result of the weight loss.
Despite those and other questions, scientists said, the findings are sure to inspire more investigation.
Ignoring
Sometimes ignoring minor misbehavior works: starved of attention, the behavior is abandoned. Be careful, though, that ignoring doesn't backfire and allow the behavior to grow into a larger problem.
Active Listening and “I” Messages
Respond to feelings as well as words. Repeat what the child says and offer nonjudgmental statements from a personal perspective.
Positive Reinforcement
Inner satisfaction can be a result of positive reinforcement. Attention, hugs, smiles, etc. can be motivators. Avoid tokens or gold stars. Discipline, not punishment, may be a form of negative reinforcement. Positive reinforcement develops strong self-esteem and a willingness to continue the desired behavior and look for more opportunities to do well.
Redirecting the Activity
Discover what it is that the children really want to do and find an acceptable way of helping them to this goal. Throwing rocks can be redirected into throwing balls, stomping can be redirected into marching and dancing.
Offering Choices
Be prepared for the child’s response and its consequences. Help the child to fully understand the details of the choice. Make the choice reasonable but give the child the chance to feel that they are an active participant in the choice.
Setting Limits
Children must know their boundaries in the school setting so that they feel secure in their safety, physically, socially and emotionally.
Active Problem Solving
Offer open-ended questions to start a dialogue for possible solutions. Avoid blame. Think through the solutions. Intervene only as necessary; let the children do it on their own.
Often appropriate with younger children. Must be quick!
Natural and Logical Consequences
Taking responsibility for actions results in consequences, be they good or bad. Learning about consequences lets the adult remain neutral and the actions speak for themselves. Logical consequences are dependent on adult follow-through based on a decision.
Time Out
Be careful not to impose a sense of rejection. Use a time out as a cooling off period so that the child can calm down and respond in appropriate ways to a situation.
By: Dianna Dammir
A charabanc or "char-à-banc" (often pronounced "sharra-bang" in colloquial British English) is a type of horse-drawn vehicle or early motor coach, usually open-topped, common in Britain during the early part of the 20th century. It has "benched seats arranged in rows, looking forward, commonly used for large parties, whether as public conveyances or for excursions." It was especially popular for sight-seeing or "works outings" to the country or the seaside, organised by businesses once a year. The name derives from the French char à bancs ("carriage with wooden benches"), the vehicle having originated in France in the early 19th century.
Although the vehicle has not been common on the roads since the 1920s, a few signs survive from the era; a notable example at Wookey Hole in Somerset warns that the road to the neighbouring village of Easton is unsuitable for charabancs. The word 'charabanc' was in common usage until the middle of the 20th century but was deleted as obsolete from the pocket edition of the Collins Dictionary in 2011. The word survives in jocular use, especially in Northern England, to refer to works outings by coach.
In Australia a similar modern type of bus or motorcoach, with a lateral door for each row of seats, survived up to the 1970s and was referred to as a side-loader bus; all or most of them, however, were not open-topped. One such bus, based in Echuca, Victoria, has been restored and is used at the Port of Echuca on some public holidays and special events.
Buses with a similar arrangement of doors and seats are common equipment for the anti-riot squads of police forces in many countries, the reason being that the crew can get off the vehicle quickly.
Introduced in the 1840s as a French sporting vehicle, the char à bancs was popular at race meetings and for hunting or shooting parties. It could be pulled by a four-in-hand team of horses or a pair in pole gear. It had two or more rows of crosswise bench seats, plus a slightly lower rear seat for a groom, and most also had a slatted trunk for luggage. Initially used by the wealthy, they were later enlarged with more seats for school or works excursions and tourist transport, as a cheaper version of the tourist coach. The first charabanc in Britain was presented to Queen Victoria by Louis Philippe of France and is preserved in the Royal Mews.
Before the First World War, motor charabancs were used mainly for day trips, as they were not comfortable enough for longer journeys, and were largely replaced by motor buses in the 1920s.
The charabanc of the 1920s tended to last only a few years. It was normal at the time for the body to be built separately to the motor chassis and some were fitted in summer only; a second goods body would be fitted in its place in winter to keep the vehicle occupied.
Charabancs were normally open top, with a large canvas folding hood stowed at the rear in case of rain, much like a convertible motor car. If rain started, this had to be pulled into position, a very heavy task, and it was considered honourable for the male members of the touring party to assist in getting it into position. The side windows would be of mica (a thin layer of quartz-like stone).
The charabanc offered little or no protection to the passengers in the event of an overturning accident, they had a high centre of gravity when loaded (and particularly if overloaded), and they often traversed the steep and winding roads leading to the coastal villages popular with tourists. These factors led to fatal accidents which contributed to their early demise.
In Northern England
Factory day outings (annual works trips) in the 19th and early 20th centuries were quite common for workers, especially for those from the northern weaving mill towns of Lancashire and Yorkshire during the wakes weeks. The 1940s and 1950s were relatively hard times, as national recovery was slow after the Second World War; rationing was still evident and annual holidays had not really become established for poorer workers such as weavers and spinners, so a day's outing to the seaside was a rare treat and all that some workers with large families could afford. "Charabanc trips" were usually only for adults, again due to finance. Occasionally the mill owner would help to pay for these outings, but this was not always the case.
The charabancs, or coaches, were pretty basic vehicles: noisy, uncomfortable and often poorly upholstered, with low-backed seats, and used mainly for short journeys to the nearest resort town or the races. Some working men's clubs also organised days out and these trips were often subsidised by the clubs themselves from membership subscriptions that had been paid throughout the year. A few pence a week would be paid to a club or mill trip organiser and marked down in a notebook. This would be paid out to the saver on the day of the trip as spending money on the day. This day out would often be the highlight of the year for some workers and the only chance to get away from the smog and grime of the busy mill towns.
Later, in the late 1960s and 1970s, as the mills prospered and things improved financially, the annual "wakes week" took over and a one-week mass exodus from northern mill towns during the summer months took precedence over the charabanc trips, and a full week's holiday at a holiday camp or in a seaside boarding house for the full family became the norm, instead of a single day out.
The charabanc is notably mentioned in Dylan Thomas's short story "A Story", also known as "The Outing". In this piece the young Thomas unintentionally finds himself on the annual men's charabanc outing to Porthcawl. Within the story the charabanc is referred to as a 'chara' in colloquial Welsh English.
A char-a-banc also figures prominently in Rudyard Kipling's short story "The Village that Voted the Earth Was Flat"
Char-a-bancs are mentioned in Dorothy Edwards' book The Witches and the Grinnygog in the chapter entitled "Mrs. Umphrey's Ghost Story". In it, Mrs. Umphrey tries to reassure the ghost of Margaret that the char-a-bancs are not the chariots of devils.
"Peaches", a single by the Stranglers, makes reference to a charabanc, with vocalist Hugh Cornwell explaining to the listener how he will be stuck on a beach "the whole summer" after missing a charabanc.
In Agatha Christie's "The Dead Harlequin", from The Mysterious Mr Quin series, the young artist Frank Bristow reacts angrily to the older Colonel Monkton's dismissive (and presumably snobbish) attitude towards charabancs and their use in tourism. They are also mentioned in the story "Double Sin" when the motor coach Poirot and Hastings are traveling on stops for lunch at Monkhampton. "...in a big courtyard, about twenty char-a-bancs were parked--char-a-bancs which had come from all over the country."
Charabancs appeared several times in John Le Carre's The Little Drummer Girl.
Too much online presence can have a negative impact, says study
A study published in the Journal of Consumer Research found that increased internet usage is leading people to overexpose themselves in public. With the avalanche of blogging sites, video-sharing platforms, and mobile phone chat apps, people are now sharing more and more information about themselves.
The study noted that most people tend to lose their inhibitions when using such digital platforms, which let them easily promote their self-image and share their personal thoughts and opinions online. Russell W. Belk (York University) writes, “Sharing itself is not new, but consumers now have unlimited opportunities to share their thoughts, opinions, and photos, or otherwise promote themselves and their self-image online. Digital devices help us share more, and more broadly, than ever before.”
Blogging platforms generally encourage people to write about their innermost thoughts and opinions, and video platform giant YouTube’s slogan is “Broadcast Yourself.” Social media sites like Facebook ask “What’s on your mind?” Then there are online book clubs such as Goodreads, where users can rate books, and other forums and websites such as Amazon, Yelp, or IMDb, where users can rate movies, restaurants, and much else. Most mobile apps nowadays advertise sharing as a necessary feature to gain popularity with the masses. The reason is that since most people are not as comfortable talking about themselves face-to-face, they are more open to the idea of sharing information digitally.
However, while these sharing platforms may encourage people to share and represent themselves online, the effect is not always positive, because thousands of people are able to access someone’s private information. This can have a negative impact in the future when it comes to job applications, promotions, and relationships.
The author of the study is of the opinion that increased exposure online has some risks too. “Due to an online dis-inhibition effect and a tendency to confess to far more shortcomings and errors than they would divulge face-to-face, consumers seem to disclose more and may wind up ‘over-sharing’ through digital media to their eventual regret,” said Belk.
1.1 (speed) velocidad; (rhythm) ritmo masculino
- rate of flow — ritmo de flujo masculino
- rate of climb — velocidad de ascensión / de subida
- their vocabulary increases at a rate of five words a day — su vocabulario aumenta a razón de cinco palabras por día
- I'm reading at a rate of 100 pages a day — estoy leyendo a un ritmo de 100 páginas por día
- the runners set off at a tremendous rate — los corredores salieron a una velocidad vertiginosa
- at this rate or at the rate we're going, it'll take weeks — a este paso / al ritmo que vamos, nos va a llevar semanas
- To calculate your maximum heart rate, subtract your age from 220.
- It measures the rate at which small disturbances explode exponentially in time.
- The gates take a relatively long time to close, so if the person before you moves at a normal rate, you should be able to go in with him/her.
- The speed of silicon-based processors is limited by the rate at which electrons move round circuits.
- It is harder to attack a convoy, however, if it is moving at a high rate of speed.
- Near the sun you would increase speed at the rate of 600 mph each second, but you would feel no force acting upon you.
- He added that the streets were not packed with people and the march did not move at a constant rate.
- But we are really moving at an incredible rate to get medicines to the hospitals.
- Furthermore, the epicycle does not move at a uniform rate with respect to the centre of the deferent or the Earth.
- Oh who am I kidding, the thought of riding wasn't the only thing that was causing my heart rate to speed up.
- Flooding significantly enhanced the rate of photosynthesis at all light levels in both populations.
- One of the principal parameters is the clock speed, the processing rate of the main processor.
- But police say it was traveling at a high rate of speed when the accident happened.
- Time is what measures the rate at which everything else changes.
- Everyone has at some point noticed how people talk at drastically varying rates of speed.
- As the officer was about to go after the cars, three more vehicles rounded the curve at a similar rate of speed.
- But their career may not move at the same rate or in the same direction as they first intended.
- As I headed back to my car, a white van passed me at an extraordinary rate of speed.
- They try to judge their speed with its rate of descent, and mistakes happen.
- Because of the moderate rate of speed, the bicyclist also wants and needs many miles of trails.
1.2 (level, ratio)
- birth rate — índice de natalidad masculino
- suicide rate — porcentaje de suicidios
- literacy rate — nivel de alfabetización
- rate of inflation — tasa de inflación
- rate of interest — tipo de interés
- rate of exchange — tipo de cambio
- our campaign has had a high rate of success — nuestra campaña ha tenido mucho éxito
- the drop-out rate in schools — la tasa de deserción escolar
- the failure rate in this exam is too high — hay un porcentaje demasiado alto de suspensos en este examen
1.3 (price, charge)
- postal rates — tarifas postales femenino
- peak/standard rate — tarifa alta/normal
- [ S ]private tuition, reasonable rates — clases particulares, precios módicos
- the work is paid at a rate of $20 per hour — el trabajo se paga a (razón de) 20 dólares por hora
- it is paid at an hourly rate of … — la hora se paga a …
2 (formerly in UK: local tax) contribución femenino; contribución municipal femenino; contribución inmobiliaria femenino
- water rates — cuota que se paga por el servicio de agua corriente
- Local government did tax directly; its revenue came from rates collected on land.
- Local government gained its revenue from rates, a tax on land.
- We council tax payers pay rates to Central Government, which later gives money to the council to pay for such expenses.
- Businesses often question what they get in return for paying local authority rates.
- Remember, it is our money, directly as taxes and rates or indirectly as rent, that pays for council services.
1.1 (rank, consider)
- I rate her work very highly — tengo una excelente opinión de su trabajo
- how would you rate the book? — ¿qué opinión te merece el libro?
- to rate sb/sth as sth
- I rate her as the best woman tennis player — yo la considero la mejor tenista
- how do you rate the film on a scale of 1 to 10? — ¿qué puntaje le darías a la película en una escala del 1 al 10?
- rated speed — velocidad nominal
- rated power — potencia nominal
1.2 British informal (consider good)
- I don't rate her chances — no creo que tenga muchas posibilidades
2 (deserve) merecer
- I don't think this essay rates an A — no creo que este trabajo merezca una A
- his death rated barely a line in the paper — el periódico apenas dedicó una línea a la noticia de su muerte
- it didn't rate a mention — no les pareció digno de mención
- He barely rates a mention, naturally, and when he is mentioned he is sneered at.
- Nine's ratings problems and management changes barely rated a mention around the market.
- By the benchmark of the Rwandan civil war, it would barely rate a mention.
1 (be classed)
- to rate as sth — estar considerado como algo
- he rates as one of the world's top swimmers — está considerado como uno de los mejores nadadores del mundo
- Elvis Presley came second, and Unchained Melody, by various artists, also rated highly.
- Mr Ahern said that Lissadell House is considered of national importance and is so rated in the national inventory of architectural heritage.
- How the schools rated was a key consideration for Greg Turner when he began his full-time MBA at Manchester Business School last year.
- Environmental quality rated considerably ahead of CEO preference - frequently alluded to as a key location factor for high tech companies.
- So how do election counts rate in terms of viewer involvement?
- A vegetable doesn't have to be high on all counts to be worth growing, especially if it rates better than the cultivar you have been putting in for years.
- Younis Khan, another young talent rated very highly in his country did his bit at one end.
- Neither of us seems to be very sure just how safe blogs are as statements of personal opinion, whether they rate as a public diary or as a written statement of fact.
2 (measure up)
- to rate with sb
- Florida doesn't rate with me — para mí Florida no vale gran cosa
How much museumgoers know about art makes little difference in how they engage with exhibits, according to a study by a German cultural scholar who used electronics to measure which items caught visitors’ attention and how they were emotionally affected. The scholar, Martin Tröndle, also found that solitary visitors typically spent more time looking at art and that they experienced more emotions.
Mr. Tröndle and his team of researchers outfitted 576 volunteers drawn from adult museum visitors with a glove equipped with GPS function to track their movement through the galleries of Kunstmuseum St. Gallen in Switzerland for two months beginning in June 2009.
The gloves contained sensors that could measure physical evidence of emotional reactions, like heartbeat rates and sweat on their palms. When the volunteers left the galleries, they were asked follow-up questions about where they had spent the most time, about particular works they had gravitated toward and about the feelings these works evoked.
Among Mr. Tröndle’s more surprising conclusions was that there appeared to be little difference in engagement between visitors with a proficient knowledge of art and “people who are engineers and dentists,” he said, adding that artists, critics and museum directors often walk into the middle of an exhibition space, scan it and then maybe look at one work before continuing on, while visitors with moderate curiosity and interest tend to move diligently from work to work and read text panels.
“We could almost say that knowledge is making you ignorant,” he said.
The Kunstmuseum St. Gallen is a medium-size institution whose collection includes a range of paintings and sculptures dating from the Middle Ages to the present. Its manageable size and variety of artwork proved ideal for Mr. Tröndle and his team of some 20 researchers from diverse fields like psychiatry, art sociology, cultural studies and fine arts. Participating visitors were assigned a number and were asked basic questions before entering the galleries, “about their profession, their education, if they recognized certain artists, styles and artworks, and whether or not they worked in the art industry,” said Mr. Tröndle.
When Mr. Tröndle first approached museum administrators about the study, he said he encountered considerable resistance. “My visitors are not white mice,” Mr. Tröndle said one museum director told him. Another, he said, scoffed, “Museums are the last mystical place in society,” adding that he would never allow his to be turned into a scientific laboratory.
Mr. Tröndle eventually found an ally in Roland Wäspe, the director of the Kunstmuseum St. Gallen, who attributed his initial interest in the project to his youthful background in physics and the fact the project, known as eMotion, was supported by the Swiss National Science Foundation. “I could not refuse,” he said.
At the core of Mr. Tröndle’s study was a fascination with museum settings in general and a curiosity about how particular arrangements of art objects affected human behavior, he said, speaking from his office at the Zeppelin University, in southern Germany, where he serves as a professor of arts management and art research. His study was conducted over two months, and during the intervening years of processing the data, he said, he and his team established for the first time that “there is a very strong correlation between aesthetic experience and bodily functions.”
Mr. Tröndle defined the “art-affected state” as a sense of immersion in an artwork, or of feeling addressed by it. “These moments of art experience are fleeting and subtle,” he said, adding, “Whoever communicates with an artwork cannot converse with those in their company simultaneously.”
That visitors tended to feel more stimulated by sculptures and installations that impeded their progress through the galleries was also noteworthy. “People want to trip over the art,” he said.
Mr. Tröndle’s research has generated considerable excitement in Germany. During the opening of the prestigious Documenta art festival in Kassel, in June, for example, Die Zeit magazine published a feature with diagrams, presenting the various visitor types and their habits.
It has piqued the interest of museum administrators and arts scholars, and Mr. Tröndle was invited to present his findings at cultural conferences in Barcelona, Taipei and Vienna over the summer. This month he’s on a speaking tour at American universities, like the University of Chicago, the Massachusetts Institute of Technology and New York University. And Oct. 29, he is scheduled to give a lecture, “Experiencing Exhibitions — Empirical Findings,” at the Smithsonian Institution in Washington. (His paper “A Museum of the 21st Century” is scheduled to be published in December in the journal Museum Management and Curatorship, he said.)
Still, although American museum administrators have expressed interest in Mr. Tröndle’s research, initial reactions to his study have been guarded.
“This technology is so new and so young,” said Paul C. Ha, director of the List Visual Arts Center at M.I.T. “We don’t know what we have yet. And, as we all know, data can be interpreted in any way.”
Bonnie Pitman, distinguished scholar in residence at the School of Arts and Humanities, University of Texas, Dallas, and co-author of the 2010 book “Ignite the Power of Art: Advancing Visitor Engagement in Museums,” said: “I’m not sure that just because you have more data, that gives you a better understanding of the very complicated set of issues involved in experiencing works of art.”
Ms. Pitman spent seven years studying visitor responses to art during her tenure as deputy director, and then director, of the Dallas Museum of Art, and is considered a pre-eminent scholar on the subject. Referring to Mr. Tröndle’s conviction that an elevated heart rate signals a more profound art experience, she said: “Those transcendent moments when you’re just completely awash in the color and beauty of a great Pissarro or Sisley or Monet — those moments aren’t necessarily going to raise your heart rate. They’re going to slow you down.”
Ms. Pitman offered an alternative view of Mr. Tröndle’s suggestion that visitors with more knowledge of art had a less profound appreciation of what was on exhibit. “As viewers become more experienced, their databank builds up, so they don’t need to spend as much time going from work to work and reading wall labels,” she said.
And at the suggestion that visitors to museums should check their friends at the door, she all but balked. “It doesn’t necessarily surprise me that a person participating in this study enjoys viewing art on their own. But the reality is certainly that the experience of looking at art is often a highly social one, so I think the accommodation of that in any study is really critical.”
Back at the Kunstmuseum St. Gallen, Mr. Wäspe has interpreted Mr. Tröndle’s findings more literally.
Given all of the recent attention on blockbuster exhibitions at vast museums, “you might assume that our future is not very rosy,” said Mr. Wäspe, referring to his smaller museum. He added that Mr. Tröndle’s research suggested that “we now have an advantage, because we see that, for an optimal art experience, museums have to be small, they have to be more empty, and they have to be, in the most positive sense, a place of contemplation.”
Of Mr. Tröndle’s suggestion that the more social one’s visit, the less one can remember of it, Mr. Wäspe said, “This means never go with your best friend through an exhibit, because you don’t do them any favors.”
The mystery of a yeti finger taken from Nepal half a century ago has been solved with the help of scientists at Edinburgh Zoo.
The mummified remains have been held in the Royal College of Surgeons museum in London since the 1950s.
A DNA sample analysed by the zoo’s genetic expert Dr Rob Ogden has finally revealed the finger’s true origins.
Following DNA tests, it was found to be human bone.
Source: BBC News
The long-lost relic from the Pangboche Buddhist monastery in the Himalayas had a dramatic history*. For a full story, check out this narrative at the Daily Mail. The story was always intriguing, but now it has turned out to be another dead end: the finger was not “unknown” but human, adding another disappointment for Yeti enthusiasts following the embarrassing spectacle of the Russian Yeti “evidence”.
What happens now?
The hairy wildman stories are decades old. Yet, over and over again, the photos, footprints, even physical pieces such as the Yeti bone, alleged scalp, hair and alleged Bigfoot traces have not led us to the creature. This is disturbing. We ought to be progressing towards more knowledge of these creatures but that is not happening. Why? Are these creatures made of folklore and hope rather than flesh and blood? So far, that’s the more reasonable conclusion. The “concrete” evidence just crumbles away…
* There are conflicting stories about how Peter Byrne acquired the finger. Did he steal it or negotiate for it? For more on this, see this link. Regardless, the monks knew a good deal when they saw one. They capitalized on the fact that their monastery was famous thanks to their Yeti relics and were able to raise money for it by coming to America to show the samples. | <urn:uuid:16637f72-edf1-4974-944a-9fb534a254f5> | CC-MAIN-2016-18 | http://doubtfulnews.com/2011/12/yeti-finger-bone-mystery-solved-its-human-not-monster/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860125897.68/warc/CC-MAIN-20160428161525-00209-ip-10-239-7-51.ec2.internal.warc.gz | en | 0.966466 | 378 | 2.8125 | 3 |
Flower pots, also known as planters and containers, add interest to a garden landscape. Adding height, planting space and personality, the garden flower pot allows the gardener to further implement the style of the property into the landscaping. Concrete flower pots come in a variety of patterns and styles, from the classic faux bois planters to the straight lines of modern planters. Resin planters simulate the look of concrete, giving the same appearance with a variety of different advantages and disadvantages.
Both concrete and resin flower pots fulfill the same basic function. They allow for the planting of plants, trees and shrubs in a wide variety of locations, regardless of the soil. For example, planters may be situated along concrete driveways, inside entranceways and on patios or decks. In general, concrete flower pots are larger in size, while resin planters are available in a wide range of sizes.
The basics of container gardening apply to both concrete and resin flower pots. The pot should feature drainage holes in the bottom for proper drainage. Plants in a flower pot require more frequent watering, and frequent watering leaches nutrients from the planting medium, requiring extra fertilization.
Fill all flower pots with appropriate and sterile planting medium. Potting soils can be purchased commercially or mixed at home using a recipe. The final location of the flower pot indicates what types of plants should be planted in the pot. For example, a large flower pot in full sun could feature sun-loving annuals, while pots filled with shade-loving ferns should be located in a shady location.
Advantages of Concrete
Concrete forms a solid flower pot, able to support a large shrub or tree, even in windy conditions. Ideal for exposed locations, concrete weathers well and will hold up to any accidental bumps from playing children or lawn mowers. Concrete insulates the potting material, helping protect it from sudden temperature changes. The ability to absorb and radiate heat often results in root protection from early or late frosts. Given time, many concrete planters develop a patina or even support the growth of moss, giving an aged, worn appearance to the garden.
Disadvantages of Concrete
Concrete is an extremely heavy material. In many cases, large concrete flower pots filled with soil cannot be moved. Consider the total weight of the concrete, soil and plants before locating a concrete planter on a deck or balcony. Concrete flower pots feature a porous surface, which may need sealing in order to slow down the evaporation of water. The lack of drainage holes in a concrete planter may create problems, as they will be hard, if not impossible, to add without damaging the planter. Concrete also requires a thorough curing process; otherwise, it can raise the alkalinity of the soil.
Advantages of Resin
Resin flower pots provide the look of concrete without the weight. High-end resin planters are indistinguishable from the original concrete based on appearances alone. Extremely lightweight, resin simulates the look of natural materials well. The molding process allows for patterns, textures and finishes usually found on expensive planters. Ideal for porches and balconies, resin provides a durable planter featuring little moisture loss.
Disadvantages of Resin
Resin flower pots may not hold a large shrub or tree upright in windy locations. Resin flower pots may not be strong enough to hold the growing root systems of some plants, resulting in a split pot. As a non-porous surface, resin flower pots must have adequate drainage holes.
What is Electrical Grid?
The electrical grid is the electrical power system network composed of the generating plant, the transmission lines, the substations, the transformers, the distribution lines and the consumer.
The electrical grid is divided into three main components:
- GENERATION – There are two types of generation – centralized and decentralized. Centralized generation refers to large-scale generation far from consumption. This includes coal, nuclear, natural gas, hydro, wind farms and large solar arrays. The grid connects centralized power to consumers. Decentralized generation occurs close to consumption, for example rooftop solar.
- TRANSMISSION and DISTRIBUTION – Transmission includes the transformers, substations and power lines that transport electricity from where it is generated to points of consumption. Transmitting electricity at high voltage minimizes resistive losses over long distances, so at the point of generation substations contain transformers that step up the voltage so electricity can be transmitted efficiently. Transmission is achieved via power lines, either overhead or underground. At points of consumption, another substation steps the voltage back down for end use.
- CONSUMPTION – There are various types of consumers: industrial, commercial and residential. Each of these consumers has different needs, but in general electricity delivers important energy services such as light and power for appliances.
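The high-voltage rationale above follows directly from the I²R loss law: for a fixed power delivery, raising the voltage lowers the current, and resistive loss falls with the square of the current. A minimal sketch (the 10 MW load, the 5-ohm line and the two voltage levels are invented figures for illustration, not data from any real grid):

```python
def line_loss_fraction(power_w, voltage_v, resistance_ohm):
    """Fraction of delivered power lost as heat in the line (P_loss = I^2 * R)."""
    current_a = power_w / voltage_v            # I = P / V (power factor ignored)
    loss_w = current_a ** 2 * resistance_ohm   # Joule heating in the conductor
    return loss_w / power_w

# The same 10 MW load carried over a line with 5 ohms of total resistance:
low_v = line_loss_fraction(10e6, 11_000, 5)    # distribution-level voltage
high_v = line_loss_fraction(10e6, 275_000, 5)  # transmission-level voltage
# Stepping the voltage up by 25x cuts the loss fraction by 625x (25 squared).
```

Because the loss fraction scales with 1/V², a transformer step-up buys a large efficiency gain, which is why substations step up before long-distance transmission and step down again near the consumer.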
Around the beginning of the 20th century, there were over 4,000 individual electric utilities, each operating in isolation. These local utilities operated low-voltage power plants that served local customers through short distribution lines.
As the demand for electricity grew, particularly in the post-World War II era, electric utilities found that it was more efficient to interconnect their transmission systems. In this way, they could share the benefits of building larger and jointly-owned generators to serve their combined electricity demand at the lowest possible cost, while avoiding duplicative power plants. Interconnection also reduced the amount of extra capacity that each utility had to hold to ensure reliable service. With growing demand and the accompanying need for new power plants came an ever-increasing need for higher voltage interconnections to transport the additional power longer distances.
The electrical grid is one of the most complex, and in many places most outdated, pieces of infrastructure in the world. Research is currently being done on how to optimize its performance. The most interesting example is the recently developed ‘smart grid’. The smart grid is simply the electrical grid enhanced by information technology, which turns it into an intelligent network.
It’s impossible to talk about Earth Day without talking about technology. After all, scientific progress was the force behind the industrial revolution that triggered the environmental concerns we now face. The consumer tech market only exacerbates this problem, with its reliance on plastics and toxic heavy metals.
Looking back at the 20th century, however, one might be comforted by numerous examples of innovation triumphing over pollution. This Earth Day, we thought it appropriate to discuss some of the most harmful pollutants produced by the consumer technology industry, and to illustrate the various solutions that are either already available, or may become available in the near future.
The Bad News
When Monsanto created its all-plastic house at Disney’s Tomorrowland in 1957, the chemical corporation saw the home as a vision of the not-too-distant future: the 1980s. While decades of public and private investment in plastics would follow, the plastic home would not. What these early adopters failed to acknowledge was the immeasurable harm that plastics cause to the environment.
Obviously, plastic is a major component in most consumer tech products. It does not biodegrade easily, yet when it finally does break down, it releases a host of toxic chemicals that can impact personal health and ecosystems at large. The most worrisome byproduct, bisphenol-A or BPA, comes from polycarbonate plastics, which are frequently used in mobile phone and laptop casings (not to mention DVDs, water bottles, car parts, and much more).
Many governments have begun banning or regulating the production of BPA-containing plastics, but all plastics pose environmental problems, and worldwide demand for plastic remains high. It’s unlikely that plastic manufacturing will slow down in the near future. And plastic recycling rates are low, hovering around 8 percent overall in 2010, according to the EPA.
The Good News
The widespread adoption of mobile devices in recent years has driven up demand for polycarbonate plastics. In response to the toxic nature of this material, manufacturers have tried to develop more sustainable phones. In 2009, Samsung released the Reclaim, made from entirely biodegradable materials. More recently, they launched the Galaxy Exhilarate, which is made from 80 percent post-consumer waste.
Researchers have discovered several kinds of fungi that are able to safely break down BPA-containing plastics with the help of ultraviolet light. And some producers have invested in alternative materials, like liquid wood (derived from wood pulp), milk protein, polyester fabrics, and non-petroleum plastics made from corn, soy, or wheat.
The Holy Grail of electronics sustainability would be a device that self-destructs, dissolves, or “dies” on its own. It sounds fanciful, but the technology is very real. Scientists have already developed integrated circuit boards that can disintegrate in water. The field is called transient electronics, and is being fueled by both private and public research.
Polypropylene is a common polymer with minimal environmental hazard. It’s degradable when exposed to heat and UV radiation, so its usage in home appliances, aircraft, furniture and clothing is not widely opposed. It’s even promoted as a viable alternative to polyvinyl chloride (PVC), which is highly toxic.
The Bad News
Mercury is a liquid metal that exists naturally in the environment but is incredibly poisonous. Even so, it has numerous industrial and commercial applications, and is a byproduct of coal-fired power plants. Through runoff, mercury finds its way into water supplies, and as it passes up the food chain, the concentration increases, leaving people especially vulnerable to its harmful effects.
Among consumer goods, compact fluorescent light bulbs (CFLs) are the most well known carriers of mercury, but it also exists in any product with a fluorescent-backlit LCD panel, including TVs, computers, and other smaller electronics. Though it’s used in very small quantities in these products, mercury’s extremely toxic nature makes it difficult to dispose of these products in a safe manner.
The Good News
LEDs are a viable and arguably superior alternative to fluorescent lights. LED-backlit panels are fast becoming the dominant type of display. Since they contain no mercury, they’re safer than standard CCFL-lit LCD panels. But what most consumers will really care about is that LED displays just look better than conventional LCDs, and they’re thinner, too.
The Bad News
Phosphates are super-effective in dishwasher detergents, but they’ve proven disastrous to the environment. When they leach into bodies of water, phosphates work as nutrients for algae. In high enough concentrations, they can cause algal blooms, which starve fish and other marine life of oxygen.
The Good News
Aware of these dangers, manufacturers began removing phosphates from their detergents in the first decade of the 2000s, opting instead for enzyme-based solutions. While consumers have complained about the inferior cleaning power of these new detergents, the benefit to the environment is unquestionable. A total of 17 states have now banned phosphate detergents.
The Bad News
It takes energy to build and use gadgets and appliances, and on the whole, energy use contributes to greenhouse-gas emissions and in turn, climate change.
Unless you’ve been living under a rock for the past 15 years, you know there is no clear-cut solution to climate change. The problem is that environmental sustainability is a bit of a whack-a-mole scenario, where every solution—be it an alternative material or a new production process—seems to leave a new problem in its wake.
Take appliances, for instance: Make them as efficient as you’d like—it’s not going to offset the carbon footprint created by manufacturing and shipping it. So then invest in cleaner fuels and more sustainable logistics—but those don’t amount to much if governments don’t address the larger problems of fossil fuel consumption, energy production, and climate change.
The Good News...Maybe
Appliances become more efficient every year. That’s great. But these small gains don't offset the constant increase in energy usage as developing nations adopt a western lifestyle.
So some scientists are now looking at geo-engineering as a solution. None of these ideas have been implemented yet (and it’s unclear whether they ever will be), but here’s a quick look at some of the more promising possibilities.
Heat Deflection / Solar Shading: Studies have shown that blocking just eight percent of the sun’s radiation would counteract the total warming effect of carbon dioxide pollution. There are a few ways of doing this, ranging from practical to absurd. One idea is to launch trillions of tiny space mirrors into orbit. That sounds…expensive, to say the least. But take a look at how the earth naturally cools itself, and you’ll find a few more ideas: Sulfur particles, which are released naturally by volcanic activity, can block sunlight. Another option is to create bright, highly reflective cloud layers by spraying the upper atmosphere with sea salts. But in either case, it’s hard to predict the unintended consequences.
Sequestration: According to the Department of Energy, oceans will eventually absorb 80 to 90 percent of atmospheric carbon dioxide. So why not expedite the process? Well, it's easier said than done. Aside from the limited scientific understanding of the repercussions of essentially dumping our waste into the ocean, there’s the risk of accidentally unleashing vaults of oceanic methane, which is a much worse greenhouse gas than CO2.
Ocean Fertilization: Scattering large amounts of iron into the world’s seas would generate phytoplankton blooms. The plankton, which are photosynthetic, would consume CO2, then fall to the ocean floor and sequester the carbon dioxide. However, there’s a strong likelihood that such a process would poison entire ecosystems—and theoretical tests have mostly failed, anyway.
While there aren’t any appliances or gadgets that are truly environmentally friendly, new technology is on the road to lessening the impact. Happy Earth Day! | <urn:uuid:c9a2e6ac-80cb-4449-800b-d61a5fb23f07> | CC-MAIN-2017-30 | http://dishwashers.reviewed.com/features/pollutant-solutions-for-gadgets-and-appliances | s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549448198.69/warc/CC-MAIN-20170728103510-20170728123510-00156.warc.gz | en | 0.9443 | 1,703 | 3.328125 | 3 |
SANTIAGO – The recent calving of a large iceberg from a southern Chilean glacier threatens local ship navigation and could result in flooding for coastal communities, experts said.
An iceberg measuring some 350 by 380 meters broke from the Grey glacier in far southern Chile in late November. The size of the break surprised local scientists who monitor the glacier.
“Events like this are part of a short-term irreversible tendency” due to rising global temperatures, said Raul Cordero, a climate change expert at the Universidad de Santiago.
The iceberg now seems like a large chunk of ice, “but it will become a threat” since it will move out to sea and break up into smaller pieces, said Ricardo Jana, a glaciologist at the Chilean National Antarctic Institute.
Given its size, the smaller icebergs likely to break off can create problems for area navigation, Jana said.
The icebergs will also contribute to a rise in the sea level, “putting coastal communities at risk for possible flooding,” Jana said.
The Grey glacier is located at the Torres del Paine National Park, some 3,200 kilometers (2,000 miles) south of the capital Santiago.
Over the past 30 years the glacier — now measuring some 270 square kilometers — has lost about two square kilometers of ice.
The glacier is part of the Southern Patagonia Ice Fields, the third largest land-based ice field after Antarctica and Greenland. The Ice Fields straddle southern Chile and Argentina.–MercoPress | <urn:uuid:91c685da-ac80-41e2-843b-f18df03ea3f5> | CC-MAIN-2019-13 | https://santiagotimes.cl/2017/12/10/calved-glacier-on-the-loose-in-south-chile-threatening-local-ship-navigation/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202781.33/warc/CC-MAIN-20190323080959-20190323102959-00378.warc.gz | en | 0.918479 | 315 | 3.28125 | 3 |
Snake Island - Laotie Mountain
The Snake Island–Laotie Mountain biosphere reserve is located in the west of the Dalian Lushunkou District and covers a total area of 9 072 hectares. Among its primary targets for protection are the Gloydius shedaoensis pit viper (formerly classified as Agkistrodon shedaoensis), migratory birds and ecological environments.
Declaration date: 2013
Administrative authorities: Liaoning Snake Island-Laotie Mountain National Nature Reserve Authority
Surface area: 9 072 ha
Core area: 3 565 ha
Buffer zone: 1 947 ha
Transition area: 3 560 ha
Laotie Mountain: longitude 121°04′53″E to 121°15′19″E; latitude 38°43′02″N to 38°57′16″N
Snake Island: longitude 120°58′00″E to 120°59′15″E; latitude 38°56′28″N to 38°57′41″N
Snake Island is located in the northwest part of the Bohai Sea. It covers an area of 73 hectares and is host to about 20 000 venomous pit vipers, Gloydius shedaoensis, a species endemic to the island. The Gloydius shedaoensis feed on small migratory birds and their two peak periods of activity are synchronous with the migrations. Snake Island forms part of an independent island ecological system called the ‘Gloydius Shedaoensis-Migratory Birds Environment’.
In 2004, Gloydius shedaoensis was listed on the China Red Data Book of Endangered Animals and categorized as ‘critically endangered’.
The Laotie Mountain is located at the southern tip of the Liaodong Peninsula and faces the Yellow Sea to the west and the Bohai Sea to the east. It is an important ‘node’ for bird migration with over 10 million birds migrating through the area.
There are four types of vegetation: forest, brush, shrub-meadow and meadow. The existing species record includes 703 species of plants, 217 species of amphibians, 10 species of reptiles and 307 species of birds.
The biosphere reserve has more than 6 000 inhabitants including permanent residents and a temporary population, mainly distributed throughout the transition zone. Most of the population is Han (98.2 percent) with a minority of Manchu, Mongolian, Hui and Korean (1.8 percent). Local residents mainly plant cherry and apple trees, raise poultry, and breed sheep, mink, raccoon and fox. Only a small percentage of residents cultivate wheat, corn and other crops.
Last update: August 2013 | <urn:uuid:48eb6a4c-644a-4f9c-a7c1-a19c34c26cde> | CC-MAIN-2014-35 | http://www.unesco.org/new/en/natural-sciences/environment/ecological-sciences/biosphere-reserves/asia-and-the-pacific/china/snake-island-laotie-mountain/ | s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500834883.60/warc/CC-MAIN-20140820021354-00377-ip-10-180-136-8.ec2.internal.warc.gz | en | 0.882706 | 581 | 2.671875 | 3 |
MOUNTAIN GALLERY VIDEO
Created on June 25, 2023
Speaker: Melany Martins
The environment can be defined as a set of physical, chemical and biological factors that allow life in its most diverse forms. All people have the right to a balanced environment, so its preservation is essential.
It is essential that we protect the environment, because our lives depend on it:
- Do not waste water.
- Save energy.
- Do not buy unnecessary products.
- Separate the trash.
- Do not throw garbage on the streets.
- Walk more.
- Repurpose what you can.
- Do not buy wild animals.
How to preserve the environment?
The environment is composed of four different spheres: the atmosphere, lithosphere, hydrosphere and biosphere.
Preserving the environment is fundamental, after all, it is where the natural resources necessary for our survival are, such as water, food and raw materials. Without these resources, all forms of life on the planet could end.
What is your environmental contribution to a better world?
2. SAVE RESOURCES
3. OPT FOR PUBLIC TRANSPORT
What can happen if we don't protect our planet? Excessive waste production, contamination of ocean and river waters, air pollution, the greenhouse effect and climate change are just a few examples of the consequences of ongoing environmental degradation.
Some of the main environmental problems currently include: greenhouse effect, deforestation, water scarcity, pollution of seas and oceans, air pollution and soil degradation.
What are the environmental factors that influence the life of living beings? They are:
- Biotic factors: producers (plants and algae), consumers (herbivores and carnivores) and decomposers (fungi and bacteria).
- Abiotic factors: water, light (light energy), heat (thermal energy) and nutrients (chemical substances).
With this work, it is possible to help people, in a participatory and creative way, to understand how much we are active subjects of our actions and that we have a fundamental role in caring for our planet.
Smart meters will increasingly replace conventional gas and electricity meters as national grids become more flexible, efficient and adaptable to renewable energy technologies such as wind and solar. They offer a wealth of intelligent functions, including the ability to inform consumers how much energy they are using via a display installed in their home. They can also communicate directly with energy suppliers, thereby eliminating the need for staff to visit homes to read the meter. They do this by sending out a signal, rather like a mobile-phone signal, which delivers the meter reading straight to the energy supplier. It works the other way round as well, enabling the energy supplier to send information to the display in consumers’ homes.
The UK government’s Department of Energy and Climate Change (DECC) is currently involved in a smart meter rollout program. The government will in future require energy companies to install smart meters for customers and is currently establishing rules and regulations to ensure that they do so in a way that is to the benefit of the consumer, including rules around data access and privacy, security, technical standards for smart metering equipment and specific requirements for vulnerable consumers such as elderly people and the disabled. The aim is to make smart meters a standard fitting in UK homes by 2020, although there will not be a legal requirement for householders to have one installed. Most UK householders will have a smart meter installed by their energy company at some point between 2016 and 2020.
What are the real benefits of smart meters for consumers? Six important ones are visibility, savings, accuracy, cleaner energy, the ability to set targets for green energy installation, and the encouragement of consumption at different times of day through time-of-use tariffs.
Visibility matters because with current conventional meters it is very easy for householders to use more energy than they actually need. Smart meters, by contrast, enable consumers to see exactly how much energy they are using and when they are using it. A smart meter can also hold historical information about past energy use, so that householders can compare their present level of energy consumption with past usage. This, in turn, enables householders to save money by reducing their household energy bills. British Gas has found that many of its customers are pleased with the user-friendliness of smart meters and the ability to find out quickly how much energy is actually being used and how to manage energy use more efficiently.
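The present-versus-past comparison that an in-home display makes possible amounts to simple arithmetic on stored readings. A sketch of the idea (the monthly kWh figures are invented; real displays and suppliers will differ):

```python
def usage_change_percent(current_kwh, history_kwh):
    """Percent change of this period's usage versus the historical average."""
    average = sum(history_kwh) / len(history_kwh)
    return (current_kwh - average) / average * 100

# This month's reading compared against the previous four months:
change = usage_change_percent(310, [350, 340, 365, 355])
# A negative result means the household is using less than its recent average.
```

In practice a display would pair a figure like this with cost estimates, but the underlying comparison is just this kind of delta against stored history.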
These features enable smart meters to help consumers save money on electricity bills. At present, householders receive estimated bills after the energy has been used, which makes it very difficult to compare costs with usage. In some cases, energy suppliers will provide online tools for consumers who install smart meters in their homes or businesses. These online tools are aimed at helping consumers manage their energy usage more efficiently. British Gas is already doing this with its Business Energy Insight tool for business customers. First Utility began to offer its customers smart meters back in 2010, although initially the company experienced a number of customer service problems. However, this enabled other companies to see how the technology performed. Very soon afterward, British Gas got in on the game too, and other companies such as Ovo Energy followed. The smart meter won’t, of course, save energy by itself, but if consumers learn to read the information it provides on the in-home display unit, the hope is that it will start to encourage a whole new pattern of energy consumption behavior.
Smart meters send accurate information through to the power company, eliminating the need to make estimates. They are rigorously tested even before they leave the factory, so there is no doubt about this. However, energy suppliers will also offer to test smart meters in the home when required to ensure the meter is doing what it should and providing accurate information. Additionally, some countries are requiring energy suppliers to provide test results in order to prove their equipment is supplying the right information, as well as requiring them to abide by particular standards such as those set by the American National Standards Institute. So far, tests have shown that the smart meters already deployed are just as accurate as analog meters and in some cases even more so.
Cleaner, greener energy
According to Sacha Deshmukh from Smart Energy GB writing on BusinessGreen, government figures released by DECC have shown that the two million smart meters deployed in the UK so far are having a positive effect on the way people think about energy. Smart Energy GB’s Smart Energy Outlook research found that 84 percent of customers are pleased with the way the meters are operating, with energy efficiency being cited as one of the major themes of the report. The research also found that 79 percent of customers were encouraged to take steps towards reducing their energy use after having a smart meter fitted. Alongside energy efficiency, smart metering allows energy suppliers to collect data information about energy use which in turn allows them to analyze overall supply and demand issues. This, in turn, allows them to conduct much more efficient billing operations when it comes to electricity provided by intermittent renewable energy technologies and distributed energy resources. As conventional fossil fuel plants become more expensive to operate with increased penetration of the market by renewables, this will become increasingly important for utilities, particularly with regard to cost. Furthermore, with positive regulatory environments set by governments, smart metering will actually encourage renewable energy deployment in new markets, such as energy storage, demand reduction or powering car batteries and household appliances.
Green energy targets
Then there is the setting of targets for renewable energy deployment. Smart meter data will enable governments and other organizations to create initiatives aimed at limiting the environmental impacts of energy use.
“The smart meter market is expected to prosper, owing to the recent impetus from renewable energy and smart grid implementation,” said Frost & Sullivan research analyst Neha Vikash, speaking to mobile communications company Vodafone. “Smart meters are required for integration of renewable energy. Europe is focused on meeting the 20-20-20 targets, which is a necessary driver for the increase in renewable energy and the third energy directive targets 80 percent smart meter penetration in the residential sector by 2020.”
In the UK, more than 30 percent of the country’s electricity supply could come from renewable sources by 2020. However, renewable energy is dependent on weather patterns and that means renewables are intermittent. For this reason, smart meters are required to finely balance energy supply in a market that by its very nature has to provide stable supplies. It means much greater control over how energy is provided. That is a radical change in the way most countries in the world supply energy to consumers and smart metering occupies an essential place at the heart of this transition.
An essential part of this transition will be the increasing use of time-of-use tariffs. These differ from the bills customers have received thus far in that they split daytime and evening energy consumption into different time periods, each with its own price block. This enables energy companies to offer lower 'off-peak' prices, encouraging customers to adjust their consumption so that they use energy when it is cheaper. This is a matter of personal planning. For example, why switch the washing machine on just when people are coming home from work and making cups of tea, switching on lights, using microwave ovens and so on? Under a time-of-use tariff it may be more economical to wait for an off-peak period, with the benefit that consumers will be charged less on their bills as a result.
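The peak/off-peak arithmetic behind such tariffs can be sketched in a few lines. All numbers below (the 4pm-8pm peak window and the three rates) are invented for illustration, not any supplier's actual tariff.

```python
# Hypothetical time-of-use tariff: every figure here is an assumption
# made up for illustration.
PEAK_HOURS = range(16, 20)   # 4pm-8pm, when most households switch things on
PEAK_RATE = 0.30             # price per kWh during the peak window
OFF_PEAK_RATE = 0.10         # price per kWh at all other times
FLAT_RATE = 0.18             # a comparable traditional flat tariff

def time_of_use_cost(usage_by_hour):
    """usage_by_hour maps hour of day (0-23) to kWh used in that hour."""
    return sum(
        kwh * (PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE)
        for hour, kwh in usage_by_hour.items()
    )

def flat_cost(usage_by_hour):
    """Cost of the same usage under a single flat rate, for comparison."""
    return sum(usage_by_hour.values()) * FLAT_RATE

# The washing-machine example from the text: 2 kWh at 6pm versus 10pm.
print(time_of_use_cost({18: 2.0}))  # 0.6  (charged at the peak rate)
print(time_of_use_cost({22: 2.0}))  # 0.2  (off-peak, a third of the cost)
```

Shifting the same 2 kWh out of the peak window cuts its cost sharply, which is exactly the behaviour such a tariff is designed to encourage.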
There are indeed some people who question the implementation of smart meters and smart technology, just as they question the deployment of renewable energy. However, increasingly, most people across the world are recognizing that the way the world uses energy has to change. Smart metering is just one way to do that.
- an amplifier for increasing the power of a signal.
Origin of power amplifier
First recorded in 1915–20
Dictionary.com Unabridged Based on the Random House Unabridged Dictionary, © Random House, Inc. 2018
- electronics an amplifier that is usually the final amplification stage in a device and is designed to give the required power output
Collins English Dictionary - Complete & Unabridged 2012 Digital Edition © William Collins Sons & Co. Ltd. 1979, 1986 © HarperCollins Publishers 1998, 2000, 2003, 2005, 2006, 2007, 2009, 2012 | <urn:uuid:fa91aeb3-f478-4c98-a740-79ce4645f5b9> | CC-MAIN-2018-47 | https://www.dictionary.com/browse/power-amplifier | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746205.96/warc/CC-MAIN-20181122101520-20181122123520-00275.warc.gz | en | 0.804768 | 119 | 3.15625 | 3 |
To arrange for an interview with a researcher, please contact the Communications and External Relations staff member identified at the end of each tip. For more information on ORNL and its research and development activities, please refer to one of our Media Contacts. If you have a general media-related question or comment, you can send it to [email protected].
ENERGY -- Spent fuel pellets . . .
Oak Ridge National Laboratory researchers have made the first mixed-oxide pellets from recycled spent nuclear fuel in a process that doesn't produce a separate plutonium stream. The work is the result of the Coupled End-to-End Demonstration Project for the Department of Energy's Global Nuclear Energy Partnership. Program Manager Jeff Binder of the Nuclear Science and Technology Division said conventional reprocessing methods pull plutonium separately from the spent-fuel mix of actinides (uranium, neptunium and plutonium) and are a proliferation concern. Binder said the ORNL technique, called modified direct denitration, converts a uranium-neptunium-plutonium nitric acid solution to a solid-oxide form. Traditionally, actinides taken out of a nitric acid solution are in a glassy structure that has to be processed with steps such as milling and grinding. The solid-oxide powder from the modified direct denitration process can go right to pellet form. The process is the first separation of spent nuclear fuel where plutonium isn't pulled out by itself and the product material is taken directly to making a pellet. The work is supported by the Department of Energy's Office of Nuclear Energy. [Contact: Bill Cabage, (865) 574-4399; [email protected]]
MATERIALS -- Simplifying complexity . . .
Tiny changes at the nanometer scale can have a colossal effect on the properties of a material, and for the first time researchers may have a method to see and even predict those changes. For example, by applying a magnetic field to certain single-crystal materials, researchers measure an enormous seemingly disproportionate change in the magnetoresistance. "That doesn't sound very interesting until you remember that your computer hard drive relies on giant magnetoresistance," said Zac Ward, lead author of a paper published in Physical Review Letters and a member of the Materials Science and Technology Division. By applying the concept of complexity to the study of materials at the nanoscale, scientists hope to be able to see the interrelations between base components and tune the materials to create previously unseen properties. "If we are able to unravel exactly how everything at the atomic level interacts we should be able to better engineer devices from materials that are based on complexity," Ward said. Co-authors are Jian Shen, Shuhua Liang, Kenji Fuchigami, Lifeng Yin, Elbio Dagotto and Ward Plummer. The research was funded by the Department of Energy and the National Science Foundation. [Contact: Ron Walli, (865) 576-0226; [email protected]]
PREPAREDNESS -- Battling terrorists . . .
People living in small towns and big cities alike will be a lot safer from the risk of improvised explosive devices because of an ongoing effort being coordinated by Oak Ridge National Laboratory for the Department of Homeland Security. While there are numerous partners and facets of the project, the goal is to integrate existing commercial and government software to maximize the ability to respond to a threat and to prevent bombings in the first place. When completed, the DHS Office for Bombing Prevention will have a portable procedure, dubbed TRIPwire Field Tool, to conduct bomb squad assessments, perform site assistance visits and develop multi-jurisdictional security plans for incident response. The tool uses a geospatial framework to show physical relationships between the planning site, security partners, potential event and response. [Contact: Ron Walli, (865) 576-0226; [email protected]]
MATERIALS -- Under the microscope . . .
A new generation of electron microscope at Oak Ridge National Laboratory is helping scientists examine materials for fuel-efficient cars, superconductors, solar cells and other applications. The lab's latest instrument, the Hitachi HF-3300 transmission electron microscope, is the first of its kind in the nation and can determine the microstructure and chemical makeup of materials down to the atomic level. "These microscopes have become a vital new testing ground, accelerating advanced materials research," said Jane Howe of ORNL's Materials Analysis User Center. "By looking at structure on an atomic level, we can predict whether a material has the required properties to perform well in tomorrow's high-demand applications." Funding for the microscope was provided by the Department of Energy's Energy Efficiency and Renewable Energy and Electricity Delivery and Energy Reliability programs, and DOE's Office of Basic Energy Sciences. The instrument is shared by the High Temperature Materials Laboratory, the Center for Nanophase Materials Sciences, and the Shared Research Equipment user programs at ORNL. [Contact: Sarah Wright, (865) 574-6631; [email protected]]
Contact: Ron Walli
DOE/Oak Ridge National Laboratory | <urn:uuid:c10f8b2c-ef85-470a-895e-ae08816f57d4> | CC-MAIN-2015-48 | http://www.bio-medicine.org/biology-news-1/Story-tips-from-the-Department-of-Energys-Oak-Ridge-National-Laboratory--July-2008-3916-1/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398449160.83/warc/CC-MAIN-20151124205409-00267-ip-10-71-132-137.ec2.internal.warc.gz | en | 0.909925 | 1,077 | 2.515625 | 3 |
Dementia is now the number one cause of death in the UK, but new information released by scientists show that reducing the risk of developing the condition could be as simple as one, two, three…
In a plan drawn up by some of the world's top doctors, seven steps to staving off dementia and cardiovascular disease have been identified – including managing blood pressure and exercising regularly.
The 'Life's Simple Seven' programme by the American Heart Association – which was drawn up following a review of 182 scientific studies – is designed to improve health for all "by educating the public on how best to live" through making simple, effective lifestyle changes. The seven steps are:
- Manage blood pressure
- Control cholesterol
- Reduce blood sugar
- Get active
- Eat better
- Lose weight
- Stop smoking
According to the plan, those following the seven steps lessen the risk of cognitive decline as they age while simultaneously reducing the risk of life-threatening conditions such as a heart attack or stroke. This suggests that heart health and dementia are more closely linked than one might think, as vascular neurologist Dr Philip Gorelick, who worked on the plan, explains:
"Over time the arteries carrying blood to the brain may narrow or become aged, which can lead to dementia. The good news is that managing risk factors – and managing them early on – can keep those arteries strong and make a world of difference for our long-term brain health."
Dr Gorelick, of Mercy Health Hauenstein Neurosciences in the US, adds:
"Research convincingly demonstrates the same risk factors that cause atherosclerosis (a hardening of the arteries, often leading to blood clots) are also major contributors to late-life cognitive impairment like Alzheimer's disease. By following [the plan] not only can we prevent heart attack and stroke, but we may also be able to prevent cognitive impairment."
A healthy mind
Generally speaking, a healthy brain is defined as one that can 'pay attention, receive and recognise information from our senses, learn and remember, communicate, solve problems and make decisions, support mobility and regulate emotions'. To keep your brain in tip-top condition throughout your life, you need to start taking care of yourself at an early age. Dr Gorelick adds:
"Studies are ongoing to learn how heart-healthy strategies can impact brain health even in early age… [But] the outlook is promising. Over time we have learned the same risk factors for stroke are also risk factors for Alzheimer's disease."
Commenting on the above recommendations, Dr Laura Phipps of Alzheimer's Research UK told The Express:
"We know that many things people can do to promote physical health can also have a positive impact on the brain. It is never too early or too late to adopt a healthy lifestyle."
The review was published in the journal Stroke. | <urn:uuid:3382e681-4d94-465c-87d8-d56d89b2f321> | CC-MAIN-2019-04 | https://www.netdoctor.co.uk/healthy-living/wellbeing/news/a28836/healthy-heart-brain-steps/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657510.42/warc/CC-MAIN-20190116134421-20190116160421-00025.warc.gz | en | 0.948596 | 592 | 2.90625 | 3 |
Microeconomics (13th Edition)
When asked to describe this text, most Lipsey readers use the same word: precise. The authors do not gloss over subjects when presenting economic ideas; rather, they offer a patient explanation of the concept and back it up with the latest research and data. Taken separately, neither theory nor data alone can give readers a true understanding of the idea, but when combined these elements give students a complete view of economics in the real world.
What is Economics?: Economic Issues and Concepts; How Economists Work. An Introduction to Demand and Supply: Demand, Supply, and Price; Elasticity; Markets in Action. Consumers and Products: Consumer Behavior; Producers in the Short Run; Producers in the Long Run. Market Structure and Efficiency: Competitive Markets; Monopoly, Cartels, and Price Discrimination; Imperfect Competition and Strategic Behavior; Economic Efficiency and Public Policy. Factor Markets: How Factor Markets Work; Labor Markets; Interest Rates and the Capital Market. Government in the Market Economy: Market Failures and Government Intervention; The Economics of Environmental Protection; Taxation and Public Expenditure. The United States in the Global Economy: The Gains from International Trade; Trade Policy.
For all readers interested in microeconomics.
Specifications of Microeconomics (13th Edition)
Author: Richard G. Lipsey, Christopher T.S. Ragan, Paul Storer
Number of pages: 576
The Governor’s Eugenics Compensation Task Force advocates that North Carolina provide reparations to surviving victims of the state’s past sterilization program. The program, which spanned from 1929 to 1974 — most popular during the 1930s — subjected 7,600 residents to forced sterilization, of whom analysts estimate 1,500 to 2,000 are still alive today.
Former Gov. Mike Easley apologized to the sterilization victims in 2002, but no compensation was ever agreed upon. Although about half a dozen states have issued public apologies for their own sterilization programs, North Carolina is the first to mull over a concrete reparation plan.
Photo: Planned Parenthood founder Margaret Sanger | <urn:uuid:1ec6456c-6c42-4ece-88cb-af5eeb80fa73> | CC-MAIN-2014-23 | http://www.jbs.org/give-now/n-c-seeks-reparations-for-victims-of-infamous-sterilization-program | s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997902579.5/warc/CC-MAIN-20140722025822-00206-ip-10-33-131-23.ec2.internal.warc.gz | en | 0.928035 | 149 | 2.953125 | 3 |
In Willpower, Baumeister and Tierney convincingly describe another addendum: willpower depends on glucose as an energy source.
HFCS contains at most 55 percent fructose and in some forms only 43 percent; almost all the rest is glucose.
After swimming, foods high in glucose may help aid in recovery.
Add the chocolate, glucose, and remaining butter, whisk until smooth and set aside.
Dieting, as the glucose breakthrough reveals, provides an especially tricky test of willpower.
It behaves the same as glucose with all the ordinary tests, and can be distinguished only by polarization.
This was directly established for glucose, lævulose, galactose, and arabinose.
This compound may be prepared from glucose (C6H12O6), a sugar easily obtained from starch.
Why must the starchy foods be changed in the body into sugar, or glucose?
The presence of grape sugar or glucose indicates the disease known as diabetes.
1840, from French glucose (1838), said to have been coined by French professor Eugène Melchior Péligot (1811-1890) from Greek gleukos "must, sweet wine," related to glykys "sweet, delightful, dear," from *glku-, dissimilated in Greek from PIE *dlk-u- "sweet" (cf. Latin dulcis). It first was obtained from grape sugar.
glucose glu·cose (glōō'kōs')
A monosaccharide sugar in the blood that serves as the major energy source of the body; it occurs in most plant and animal tissue. Also called blood sugar.
A monosaccharide sugar found in plant and animal tissues. Glucose is a product of photosynthesis, mostly incorporated into the disaccharide sugar sucrose rather than circulating free in the plant. Glucose is essential for energy production in animal cells. It is transported by blood and lymph to all the cells of the body, where it is metabolized to form carbon dioxide and water along with ATP, the main source of chemical energy for cellular processes. Glucose molecules can also be linked into chains to form the polysaccharides cellulose, glycogen, and starch. Chemical formula: C6H12O6. See more at cellular respiration, Krebs cycle, photosynthesis. | <urn:uuid:86e9a402-5b74-4e8a-81b3-5ac8178e0a0d> | CC-MAIN-2015-32 | http://dictionary.reference.com/browse/glucose | s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988930.94/warc/CC-MAIN-20150728002308-00163-ip-10-236-191-2.ec2.internal.warc.gz | en | 0.934973 | 499 | 3.015625 | 3 |
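The energy production described above is aerobic respiration; as a rough sketch, the overall balanced reaction is (the ATP yield per glucose molecule varies by pathway and source, with roughly 30-32 often cited):

$$\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \longrightarrow 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;(+\ \text{energy captured as ATP})$$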
Wednesday, December 10, marks the 60th anniversary of the adoption of the United Nations' Universal Declaration of Human Rights. Spearheaded by former U.S. first lady and U.N. delegate Eleanor Roosevelt, the UDHR guaranteed the political and civic rights of all people, including the right to freedom from torture, slavery, poverty, homelessness and other forms of oppression.
Most people assume today that the guarantee of human rights is an essential feature of all civilized societies. But the UDHR was a product of a unique historical moment, says Larry Cox, director of the human rights group Amnesty International USA.
"It came out of the horrors of World War II, the Great Depression and, of course, the indescribable horrors of the Holocaust that made the world realize that something had to be said about basic human rights," Cox says. "It was no longer a question of individual states doing whatever they want to for their citizens, because the way that governments treat their citizens affects the whole world and especially the peace and security of the whole world."
Cox adds that while the fledgling U.N. General Assembly ultimately passed the UDHR by a 48-0 vote, a huge diplomatic effort was required to get disparate nations to agree on exactly what "human rights" are or should be. Communist countries proffered one view, while capitalist and Islamic countries had their own perspectives.
"And what they did was say basically 'What are the things that every human being needs to have to live a fully human life?' And that led them to recognize not only freedom from fear - that is civil and political rights - things like freedom of speech, the right not to be tortured, the right to have a fair trial, but also economic and social rights - the basic needs that need to be met in order to live a life of dignity and freedom. So it includes a right to an adequate income; it includes the right to housing. And it sees all of those rights as interdependent."
The Universal Declaration of Human Rights is not a treaty. However, because its purpose is to define the terms "fundamental freedoms" and "human rights" embedded in the U.N. Charter, all U.N. member nations are bound by it. Cox says the UDHR has acquired the force of international law and has bolstered human rights movements.
"We've seen dictatorships fall in Latin America. We've seen the end of the Soviet Union and other dictatorships. We've seen the end of [racial] apartheid," Cox says. "You can look at countries like Argentina and Chile, where the Mothers of the Disappeared cited the declaration when they went out to protest what was happening and mobilized the governments. And eventually, of course, those dictatorships fell. You can look at the Philippines, where people also cited the declaration in fighting the dictatorship of Marcos. None of those things happened overnight. But… all of this is, in a sense, the legacy of what governments promised in 1948."
Those promises have not always been kept, of course. Peggy Hicks, global advocacy director of Human Rights Watch, says that developing and developed nations alike have been accused of violating provisions of the UDHR, in particular Article Five, which forbids torture.
"And there are, in today's era, those who would say there are circumstances where that prohibition should not bar activities that would help more people than would be tortured by the breach of it," Hicks says. "But the commonality in the Universal Declaration is to reject that argument and to say that, 'No, you as a government and as a subscriber to the Universal Declaration have made a commitment that, regardless of the arguments for national security or other exigencies, recognize that the damage that would be done [by torture] far outweighs any advantage that could be obtained."
According to Article 25 of the UDHR, "Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food…" That is why it is considered a human rights violation for any government to prevent the distribution of humanitarian aid to any part of its population or to allow economic discrimination against any population group.
"And what we've seen is that in societies where human rights are not respected, they are not able to pursue problems of development in terms of housing and education, health, the right to food, effectively," Hicks says. "A good common-day example of that is in Zimbabwe, where the humanitarian crisis and the human rights crisis are inextricably interlinked. And we look at today's economic crisis and wonder, 'What will be the impact of that on human rights?'"
Cox acknowledges that there is a long way to go before human rights are universally respected. But he believes that since its passage in 1948, the Universal Declaration of Human Rights has been much more than mere words expressing an abstract ideal.
"And we have seen over the decades what Martin Luther King called 'the human rights revolutions,' that is to say people who did whatever they had to do - write letters, petition, go to the streets - to say to the governments, 'You made these promises. We insist now that these rights be respected and fulfilled.'"
Indeed, adds Cox, action is what makes the Universal Declaration of Human Rights a "living" document, not something just to be remembered or invoked in ceremonies, but something to be fought for, celebrated and fulfilled every single day. | <urn:uuid:cb93b570-57bc-41c4-81e2-2ea83889a7f0> | CC-MAIN-2017-09 | http://www.voanews.com/a/a-13-2008-12-09-voa49/403157.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00312-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.966079 | 1,108 | 3.46875 | 3 |
A reminder of our slavery past
Karwan Fatah-Black of Leiden University, an expert in the area of Dutch colonial history, wrote the text for a slavery remembrance memorial that was unveiled on 1 July in Hoofddorp. The official abolition of slavery was proclaimed 155 years ago on 1 July 1863. On Sunday 1 July 2018 the national remembrance of the Netherlands’ slavery past took place. As yet, however, the national Slavery Remembrance Day is still not an official national remembrance day.
No official expression of regret
Kajsa Ollongren, the Minister of the Interior, offered an expression of regret for the slavery past on behalf of the government. She spoke of profound regret, shame and remorse. Ahmed Aboutaleb, the mayor of Rotterdam, and Jan Hamming, the mayor of Zaanstad, have previously asked for an official apology from the Dutch government, but no such apology has been issued. Fatah-Black: ‘Every year, many people hope that the government will apologise. In my opinion, offering an apology is so important that it should come from the head of state and not a minister. It’s something that people are waiting for, and at some point it will surely happen.’
Karwan: ‘1 July is starting to become part of our collective memory and is included much more in our education. It’s becoming part of the national identity and our self-image. Other memorials are also planned, for communities that wish to join with each other in remembrance of slavery.
‘I was recently at the unveiling of a slavery memorial in Hoofddorp. This makes it the fourth city with a memorial. In Hoofddorp too there are people who are descendants of the enslaved. The municipality of Haarlemmermeer doesn’t have a direct link with the slavery past, but the executive councillor spoke about how, when a lot of money was being put into building dykes to drain the land of Haarlemmermeer, the abolition of slavery was considerably delayed for financial reasons. This says a lot about the attitude in those times.’
Slavery education on a structural basis
‘We’re still not aware enough of how the colonial past continues to have effects in the present. It would be good to talk about this more in Dutch education. It would have to be done on a structural basis, because then you can also explain the different facets of this history. It’s important to understand why there are still Caribbean parts of the Kingdom and that people from Suriname aren’t newcomers, but have been part of Dutch history for a very long time. This can give a clearer understanding of why the Netherlands is as it is today, and how the past still has effects in the present.’
Memorial in Hoofddorp
‘The memorials and remembrances that are now organised in many cities are an important aspect of the growing awareness of the colonial roots of the present day, so I felt highly honoured when Haarlemmermeer municipal council asked me to help with developing their memorial: both the text on the memorial itself, and the accompanying text on a stone next to the memorial. The memorial itself bears a quotation from its initiator, Elaine Veldema: “What happened then is not yet over” (Wat gebeurd is, is nog niet voorbij).
‘The council didn’t want it to be an exclusively Suriname-Netherlands memorial, so for the memorial itself I chose to include several abolition dates: first the slave trade, then on Sint Maarten in 1848, coinciding with the French abolition, then in the Dutch East Indies, and finally in 1863 in Suriname and on the Caribbean islands.
‘For the accompanying text it was a challenge to give a nuanced explanation in just a few words, which does justice to the extreme violence, the historical context and the present-day relevance. The text is:
“Descendants of the enslaved are to be found in all parts of the Netherlands. During colonial times, when international law and human rights did not exist, people were dragged violently into Dutch history. On plantations, in households and on ships, what followed was often a short and traumatic life.
Since the abolition of slavery was proclaimed worldwide, everyone in the world is born free. No-one is now permitted to own another. The abolition of slavery is one of the fundamental principles of human rights. But this history has not ended. Even today, there are still people who live in modern slavery.”’
New life for freed slaves
‘In my own research, you see that people who came out of slavery at that time found it extremely important to obtain a piece of land. Wherever it might be: part of a former plantation or in the city. Because slavery was really an uprooting, having a piece of land meant that as free people they could start to become attached to a new place. Owning land and being able to pass it on to subsequent generations was incredibly important for those people. This has resulted in many problems, and land ownership in Suriname is hopelessly fragmented. People from Suriname often complain about the inheritance problems with dividing estates and the family arguments that these cause. In fact, all those land ownership leftovers are the link between themselves and the beginning of their ancestors’ emancipation. My book on this, Eigendomsstrijd (Ownership Conflict), will be published in September.’ | <urn:uuid:6cd630c5-9699-46bc-8eed-35535c8df483> | CC-MAIN-2019-47 | https://www.universiteitleiden.nl/en/news/2018/07/a-reminder-of-our-slavery-past | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670036.23/warc/CC-MAIN-20191119070311-20191119094311-00489.warc.gz | en | 0.967157 | 1,155 | 3.125 | 3 |
1. What is the cause of obesity in the United States?
To complete this project our group used a Google Document to gather our information into one place. We used online resources and Anne Belk Library. To coordinate meetings we sent texts and emails to each other.
3. Introduction: What is Obesity?
Obesity refers to an increase in body fat; "overweight" is an increase in body weight relative to some standard and has become a surrogate for "obesity" both clinically and epidemiologically (Ripper).
Obesity is an "abnormal or excessive fat accumulation that presents a risk to health" but this is not very specific (WHO, Obesity and Overweight).
BMI is the most common way to measure overweight and obesity, but it does not take into account body composition (Rossen) (Stern) (Cawley).
“'Overweight' is defined as having more body weight than is considered normal or healthy for one's age or build. The term 'obese' is used for very overweight people who have a high percentage of body fat.” (Stern)
“Overweight and obesity are defined as abnormal or excessive fat accumulation that presents a risk to health. Population measure of obesity is the body mass index (BMI), a person's weight (in kilograms) divided by the square of his or her height (in meters). A person with a BMI of 30 or more is generally considered obese. A person with a BMI equal to or more than 25 is considered overweight.” (WHO, Obesity)
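The WHO definition above can be turned into a quick calculation. The sketch below uses the quoted cutoffs; the example weight and height are invented, and, as noted above, BMI ignores body composition.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

def classify(bmi_value):
    """Apply the WHO cutoffs quoted above: BMI >= 25 overweight, >= 30 obese."""
    if bmi_value >= 30:
        return "obese"
    if bmi_value >= 25:
        return "overweight"
    return "not overweight"

# Invented example: a person weighing 95 kg and standing 1.75 m tall.
value = bmi(95, 1.75)
print(round(value, 1), classify(value))  # 31.0 obese
```

The limitation flagged above still applies: two people with the same BMI can have very different body compositions.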
Flaws in nutritional education, resources, and healthcare:
Doctors are not educated enough in nutrition (Chen, 2010)
Asking for a doctor's advice isn't always the best idea; consulting a dietician or nutritionist would be more beneficial (Chen, 2010)
American healthcare is based on treatment and not prevention (Fromke, 2012)
Doctors are paid based on the number of patients seen, operations and surgeries, not on the success of treatment or prevention; quantity over quality (Fromke, 2012)
The Food Pyramid has major flaws and isn't based in research (Willett, 2001)
“The thing to keep in mind about the USDA Pyramid is that it comes from the [U.S.] Department of Agriculture, the agency responsible for promoting American agriculture, not from agencies established to monitor and protect our health, like the Department of Health and Human Services, or the National Institutes of Health, or the Institute of Medicine.” (Willett, 2001)
Food labels are based on the faulty Food Pyramid system
6. Causes (cont’d)
“Obesity usually results from interaction of certain gene polymorphisms with
environment. Moreover, only a small number of cases of obesity (5%) result from
mutations in specific genes (monogenic obesity), causing in some cases Mendelian
syndromes with a very low incidence in the population. One hundred and thirty genes
related to obesity have been reported, some of which are involved in coding of
peptide transmitting hunger and satiety signals, while others are involved in adipocyte
growth and differentiation processes, and still others are involved in regulation of
energy expenditure. In addition, obesity is a chronic inflammatory state. In this
regard, altered expression of genes related to insulin metabolism and adipose tissue
inflammation is a basic process which may explain the etiology of obesity" (González Jiménez, 2011)
“Many studies have shown an overall socio-economic gradient in obesity in modern
industrialized societies. Rates tend to decrease progressively with increasing socioeconomic status.” (Perez, 2013)
7. Causes (cont'd)
Bad habits in childhood
Feeding Infants and Toddlers Study (FITS) in North America: “higher than generally
recommended energy, protein, and saturated fat intakes. The majority of infants are
bottle fed at some point in their first year of life, and their weaning diet often includes
low intakes of fruits and vegetables, with high starchy, rather than green or yellow,
vegetables. Early introduction of solids, use of cow's milk prior to 1 year of age, and
high juice intake in the first 2 years - all less desirable diet practices - are improving,
but are still prevalent. More preschoolers are likely to get sweets or sweetened
beverages than a serving of fruit or a vegetable on a given day" (Saavedra, 2013)
“These food intake patterns mimic the adult American diet and are associated with an
increased risk of obesity in childhood and later life." (Saavedra, 2013)
“Obesity prevention needs to include specific targets in terms of breastfeeding and
adequate formula feeding, as well as appropriate introduction of weaning foods with
goals of changing the inadequate patterns documented in the FITS. These
interventions will also require addressing parent and caregiver behaviors, including
attending to hunger satiety cues (responsive feeding), and shaping early food
preferences. This needs to be done starting at birth, in the first months of life. Early
intervention offers a unique and potentially efficacious opportunity to shape the future
dietary patterns of the next generation." (Saavedra, 2013)
8. Causes (cont'd)
"A "food desert" is defined as a populated area with deficient access to the most well-stocked outlets, the large stores or supermarkets that usually provide abundant, good
quality, low-priced food choices" (Hubley, 2011)
"food deserts—low-income communities without ready access to healthy and
affordable food—by developing and equipping grocery stores, small retailers, corner
stores, and farmers markets with fresh and healthy food" ("USDA defines food," 2010)
“Low access to supermarkets in the United States has been linked with poor quality
diets” (Hubley, 2011)
“predominately lower income neighborhoods and communities” (Hubley, 2011)
Cheapest food lacks quality and nutrition
Sedentary issues, binge eating, food
“Portion distortion”: “Food portions in America's
restaurants have doubled or tripled over the last 20
years, a key factor that is contributing to a
potentially devastating increase in obesity among
children and adults.” ("Portion distortion" 2013)
American portion size can feed 2 or 3 people
Huge in comparison to most other countries
Respiratory functions: obesity alters the relationship between the lungs, chest wall, and diaphragm (Ray).
Coronary heart disease: a condition in which plaque builds up inside the arteries that supply oxygen-rich blood to the heart, narrowing or blocking the coronary arteries and reducing blood flow to the heart muscle, which can lead to chest pain and heart attack (NHI).
Heart failure: a serious condition in which the heart can't pump enough blood to meet the body's needs.
When did obesity emerge in the US?
What caused obesity to emerge in the US?
Doubled in children
Tripled in adolescents
60 million adults were considered obese, which is about 30% of the population
Only 25% of Americans eat the recommended 5 servings of fruit and vegetables per day
More than 50% of Americans do not get the recommended amount of physical activity
Obesity is dramatically on the rise in low- and middle-income
countries, particularly in urban settings.
In 2008, 35% of adults were overweight and 11% were obese.
More people are killed by being overweight than by being underweight.
Obesity is preventable. (WHO, http://www.who.int/topics/obesity/en/)
12. Key Issues
Health (see Effects)
Processed foods contain many low quality ingredients
Grocery store setup
Focus on children: TV, internet and movie adverts, collectibles (toys in cereal
boxes, fast food meals)
Identifying key buying
13. Key Stakeholders
Doctors/ plastic surgeons
Health insurance companies
Weight loss companies, gyms, trainers
and what we learned
Obesity is caused by a number of things that
vary from eating large portions to the way
food is marketed. Our group concluded that
the sources of obesity are preventable, and
that the cost of letting obesity continue to
cause harm outweighs the cost of prevention.
Cawley, J. (2011). The Oxford Handbook of the Social Science of Obesity. Oxford University Press, Inc.: New York, NY.
CDC. (2010). Facts About Obesity in the United States. Retrieved from:
Chen, P. (2010, September 16). Teaching doctors about nutrition and diet. The New York Times. Retrieved from
Fromke, S. (Director) (2012). Escape fire: The fight to rescue american healthcare [Web]. Retrieved from http://www.escapefiremovie.com
González Jiménez, E. (2011). Genes and obesity: A cause and effect relationship. Endocrinología y Nutrición, 58(9), 492–496. Retrieved from
Hubley, T. (2011). Assessing the proximity of healthy food options and food deserts in a rural area in maine. Applied Geography , 31(4), 1224–
1231. Retrieved from http://dx.doi.org/10.1016/j.apgeog.2010.09.00
NHI. (2012, July 13). What Are the Health Risks of Overweight and Obesity? Retrieved From:
Pérez Rodrigo, C. (2013). Current mapping of obesity. Nutricion Hospitalaria, 28(Supl. 5), 21-31. doi:10.3305/nh.2013.28.sup5.6869
Portion distortion and serving size. (2013, February 13). Retrieved from http://www.nhlbi.nih.gov/health/public/heart/obesity/wecan/eatright/distortion.htm
Ray C. S., Sue D. Y., Bray G., Hansen J. E., Wasserman K. (1983). The American Review of Respiratory Disease.
Rippe, J. M., Angelopoulos, T. J. (2012, May). Obesity: Prevention and Treatment. Retrieved from:
Rossen, L. M., Rossen, E. A. (2012). Obesity 101. Springer Publishing Company, LLC: New York, NY.
Saavedra, J. M., Deming, D., Dattilo, A., & Reidy, K. (2013). Lessons from the Feeding Infants and Toddlers Study in North America: What
Children Eat, and Implications for Obesity Prevention. Annals of Nutrition & Metabolism, 62(Suppl. 3), 27-36. doi:10.1159/000351538
Stern, J. S., Kazaks, A. (2009). Obesity. ABC-CLIO, LLC: Santa Barbara, CA.
USDA defines food deserts. (2010). Retrieved from http://americannutritionassociation.org/newsletter/usda-defines-food-deserts
WHO. (2013). Obesity. Retrieved from: http://www.who.int/topics/obesity/en/
WHO. (2013). Obesity and overweight. Retrieved from: http://www.who.int/mediacentre/factsheets/fs311/en/
Willett, W. (2001). Eat, drink, and be healthy: The Harvard Medical School guide to healthy eating. New York, NY: Simon & Schuster, Inc.
Retrieved from http://www.health.harvard.edu/newsweek/Eat_Drink_and_Be_Healthy.htm | <urn:uuid:7b051805-ff1f-4e2c-bb45-82e0409d6d2a> | CC-MAIN-2015-27 | http://www.slideshare.net/lindypaul/obesity-case-studyhtm | s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096738.25/warc/CC-MAIN-20150627031816-00188-ip-10-179-60-89.ec2.internal.warc.gz | en | 0.866796 | 2,545 | 3.53125 | 4 |
Do you avoid conflict? Does conflict mean there is something wrong? What if you become a master at handling conflict?
Conflict is uncomfortable, and we often avoid it. Do we have the perception that conflict is bad? What if we could teach our children to handle conflict when they are young so that it is easier for them as adults?
I believe conflict is a natural part of life. There will always be people with different opinions and ways of being, and knowing how to handle that is a very important life skill. Knowing how to stand true to yourself while honouring another person's point of view in conflict can actually strengthen relationships. It allows people to show up authentically, and it often leaves them feeling happier and more fulfilled even when they can't choose the outcome. We just need to be taught how.
Learn how to:
- Stand true to yourself in the face of conflict
- Have those tough conversations and come out feeling satisfied
- Teach your children to have the courage to speak up when they are young in small situations
- Show your children how to practise these skills in small situations
When we teach our children how to handle conflicts when they are young, the consequences and pain can be far smaller. This has a huge impact on their confidence too.
To see a list of the upcoming workshops being presented, please click here. | <urn:uuid:a197afbc-de84-4304-b87d-b0b80215fc3a> | CC-MAIN-2022-33 | https://www.shiftingperspective.co.za/handling-conflict | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572220.19/warc/CC-MAIN-20220816030218-20220816060218-00283.warc.gz | en | 0.954379 | 278 | 3.34375 | 3 |
Nothing awakens the feeling of spring in a garden more than cheerful bird song. As snow and ice begins to melt, birds once again have a reason to sing, with the harsh winter over and difficult foraging a thing of the past. Most bird lovers begin slowly but surely to dismantle the birdhouses in their gardens – but wait! Even in the warmer months, we can be of vital assistance in helping our feathered friends find food.
Wild Birds: Feed and Nesting Boxes
© Ingo Bartussek / stock.adobe.com
What is meant by year-round feeding?
In the past, birds were only offered food and nesting boxes when the weather grew cold and frost was a certainty, but this thinking needs to be modernised. Conservationists and bird experts are in agreement that winter feeding should start in October and continue all the way through until May. This is to allow birds the time to find a suitable feeding place and to settle in properly before the cold arrives. You should also begin hanging nesting boxes fairly early on, ensuring that wild birds can find protection against unpleasant winter weather. Even in the warm months, you can support the birds in your garden by helping them to find food, which can be particularly vital during breeding season for keeping young well fed.
Which food is suitable?
In autumn and winter, you should provide birds with food that is rich in fat and energy. Fat balls are an ideal choice, containing energy sources such as sunflower seeds. May to September is the breeding season for birds, so choose a special summer food during this time. This can include simple garden bird food containing berries and animal protein from dried insects. There are also plenty of supplementary foods with cereal flakes and minerals to help with rearing young. All of this helps to compensate for any food shortages the wild birds may come across and increases chances of young birds surviving. You could offer this food inside an aviary, which also offers shelter to the wild birds.
Is there anything to look out for when buying bird feed?
Many birdseed mixes contain ragweed seeds, a North American plant that is becoming ever more present in Europe. However, many people are allergic to the ragweed pollen, which is known as the “asthma plant” and can often result in skin reactions. Make sure that any food you buy is labelled as “ragweed controlled”.
In the zooplus shop you will find a great range of bird feeding and nesting products to support wild birds all year round! | <urn:uuid:cc8f19e4-3c37-4ce9-9511-276e80a754a2> | CC-MAIN-2020-40 | https://www.zooplus.co.uk/magazine/bird/wild-birds/wild-birds-feed-nesting-boxes | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400202686.56/warc/CC-MAIN-20200922000730-20200922030730-00152.warc.gz | en | 0.968781 | 514 | 3.3125 | 3 |
THE TRACK and trace system is just one of the ways the Government plans to pull us out of lockdown.
The theory goes that you can stop Covid-19 in its tracks, if you seek out and isolate those infected asap.
The Government has been widely criticised for abandoning track and trace on March 12, when it shifted tactics from containing the coronavirus to delaying the peak.
They argue it has worked in other countries like Singapore and South Korea, helping to keep infection and death rates down.
Now, Boris Johnson has pledged to have a "world class" system in place by June 1, to help ease lockdown measures further.
There are two key strands to the operation: human tracers and the NHS app.
So far, 25,000 contact tracers have been hired to manually track down people who have been near others who have contracted Covid-19.
How does track and trace work?
Contact tracing works by stopping the spread of coronavirus through a community.
The idea is that by isolating an infected person you can stop the chain of transmission.
Experts have explained the premise using persons A, B, C, D and E as an example.
Professor Keith Neal, from Nottingham University, has 30 years experience of contact tracing and infectious diseases and says such programmes are in place to disrupt chains of transmission.
“We're already doing that in part by people working at home, people are actually socially distancing," he told Radio 4's Today Programme.
"If anybody is infected, they are meeting less people.
“Once we find somebody who has been infected the whole point of the contact tracing is identifying person A who has the infection, who will have passed it on to person B.
“Then B will have passed it on to C, C to D and so on.
“The idea is we identify B from A to stop C, D and E becoming infected and it all adds up to reducing the rate of infections across the country.”
Professor Neal added that if most people have followed the rules and have successfully been social distancing, then the tracing shouldn’t be “particularly difficult”.
He said person B is the key, and that by identifying them quickly - and asking that they self-isolate for 14 days - you can bring the virus under control.
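Professor Neal's A-to-E example can be sketched as a tiny graph exercise: isolating A's direct contact (B) before B passes the infection on removes the rest of the chain. This is an illustrative toy, not any real tracing system:

```python
# Contact graph: who would pass the infection to whom (Prof Neal's A-E chain).
contacts = {"A": ["B"], "B": ["C"], "C": ["D"], "D": ["E"]}

def downstream(person, graph):
    """Everyone who would eventually be infected via `person`."""
    reached, stack = set(), list(graph.get(person, []))
    while stack:
        p = stack.pop()
        if p not in reached:
            reached.add(p)
            stack.extend(graph.get(p, []))
    return reached

# Without tracing, A's infection eventually reaches B, C, D and E.
print(sorted(downstream("A", contacts)))   # ['B', 'C', 'D', 'E']

# Tracing identifies B from A; isolating B cuts the chain there.
isolated = {p: ([] if p == "B" else c) for p, c in contacts.items()}
print(sorted(downstream("A", isolated)))   # ['B']
```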
High risk contacts are the priority
So far in the UK more than 36,000 people have died from the coronavirus, while more than 250,000 Brits have tested positive for Covid-19.
When the Government abandoned contact tracing in March, they quickly moved to introduce measures including social distancing to help delay the peak of the epidemic, and stop the spread.
Social distancing and hand washing remain the best tools in fighting the virus, scientists agree.
Prof Neal said social distancing can also help speed up contact tracing.
He said given the lockdown, most Brits are very aware of who they are mixing with - and in most cases it is just their household.
So, in most cases person B will be quickly identified as a family member, or house mate.
The NHS app, which is unlikely to be ready for June 1, is only really needed when people don't know who they've been in contact with.
Prof Neal said: "If you have gone to a supermarket that's where the app comes in.
"It will help to identify who you've been in contact with, when you don't actually know their name or phone number.
“[The two strands of contact tracing] complement each other."
Prof Neal said track and trace can be done without the app.
Downing Street had originally planned to hire 18,000 tracers, but the opposition criticised the figure, saying it would be too small a team.
The app that will accompany the tracing is not yet ready and questions have been raised as to how the tracers can successfully do their job if the app is not yet up and running.
Downing Street said any contact tracing that requires clinical expertise will be carried out by staff with specific training if they have never worked in the healthcare sector.
"We are ensuring everybody gets all the relevant training they need before they start work."
They added that the aim to boost tracing to 200,000 a day by the end of the month "coincide" with the Prime Minister's promise of "having an effective track and trace system in place by June 1".
As part of the rollout, 20,000 households will be recruited and routinely tested over 12 months.
Ultimately, 300,000 participants will be involved in the study.
"High accuracy" antibody tests will be used to understand how immunity could work in those recovering from the disease.
Those taking part will be swab-tested and asked questions by a health worker during a home visit.
The tests will be repeated every week for five weeks, and then monthly for a year.
How does the app work?
On May 8 the first app was launched on the Isle of Wight with a second app on its way.
The pilot on the Isle of Wight is still ongoing, Number 10 said today.
The app monitors when users come into contact with people who may have Covid-19.
It is able to identify people that the person using the app may not know personally, such as a bus driver, postal worker or supermarket staff.
Professor Neal added: "We can do contact tracing even without the app because that's the matter of finding the most high risk contacts - it's close and prolonged contact and you tend to and should only have close and prolonged contact with people you actually know".
The app will work by using Bluetooth to log when another user’s smartphone has been in close proximity.
If a person develops Covid-19 symptoms, they can report their symptoms to the app and immediately organise a test.
The tech automatically sends out an anonymous alert to other users they may have infected, urging them to self-isolate if necessary – thus stopping further spread.
They will then have the ability to book a coronavirus test.
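The proximity-logging and alerting idea described above can be illustrated with a toy sketch. The real NHS app's Bluetooth protocol, risk thresholds and anonymisation scheme are far more involved; every name and token below is invented:

```python
from collections import defaultdict

# proximity_log: anonymous token -> set of anonymous tokens whose
# phones were logged nearby via Bluetooth.
proximity_log = defaultdict(set)

def record_contact(token_a, token_b):
    """Log that two phones were in close proximity."""
    proximity_log[token_a].add(token_b)
    proximity_log[token_b].add(token_a)

def report_symptoms(user_token):
    """User reports symptoms: return the anonymous tokens to alert."""
    return sorted(proximity_log[user_token])

record_contact("token-17", "token-42")   # e.g. a bus journey
record_contact("token-17", "token-99")   # e.g. a supermarket queue

print(report_symptoms("token-17"))   # ['token-42', 'token-99']
```

The point of the sketch is that neither phone ever needs the other user's name or number, only the anonymous token logged at the time of contact.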
Health Secretary Matt Hancock had previously said the Isle of Wight was chosen for the trial of the app due to its population size.
On the app's initial launch, residents complained of issues such as receiving multiple alerts and difficulties downloading it.
Number 10 said they were still working on ironing out the issues and a spokesperson for the Prime Minister said: "The intention is to roll the app out in the coming weeks but I've also said there is certainly no requirement to have the app in order to have an effective trace and system, which the PM spoke about, in place by June 1."
A career is more than a job, and requires careful planning. Careers require specific skills, interests, experience, training and education. You can’t wake up one morning and be a doctor or a librarian without training and education. Research and career planning are the keys to preparing for your desired career. Whether you are a high school student or a mid-career professional who wants a career change, writing a career plan will help you achieve your goals.
Draft a career plan document, starting with the title at the top of “Career Plan for (your name)." The document can be as simple or detailed as you like, but should include main sections for a specific career goal; the requirements to achieve the career goal; a list of your current skills, interests and abilities; and a realistic plan to achieve the career requirements.
Choose a career. Read about careers in the areas you are most interested in, and the industries that employ people in those careers. For instance, if you love math and science and want to work in those areas, read about careers and industries that require strong math and science skills, knowledge and aptitudes, such engineering. Choose a field in which you have interests and passion and are willing to work to achieve the requirements for the career.
Set a career goal, such as “become a chemist” or “become a paralegal”. Write “Career Goal” as the first section under your career plan title, and write in your career goal statement after it. For example: Career Goal: Become a Licensed Registered Nurse. Elaborate on your career goal if you have more specific ideas, such as “My career goal is to become a paralegal and work for a large law firm in downtown Chicago."
Find out what’s required of people in the career you have chosen by researching. Start with the Bureau of Labor Statistics and O*NET OnLine, which provide information on job duties and educational and training requirements for hundreds of jobs. Look for current books about the career you want. High school students can find a lot of career information in their counseling offices or college planning centers. Find out what knowledge, skills, abilities, work activities and training and education are required. List these in a section titled “Career Requirements’ under your career goal section on the career plan document.
Create a section on the career plan under career requirements and title it “Self Assessment” or “Current Career Assets." List all of your current skills, abilities, interests, training and education. Use resumes, school transcripts and work records to help you list your current job skills and level of education. Personality tests can help you understand your personality type. Use this list to compare to the requirements list to identify the steps you need to take to prepare for your chosen career.
Create a section on the career plan under current career assets and title it “Next Steps” or “Career Bridge." List all the things you must do to make your career happen, such as “earn an associate degree in paralegal studies” and “seek internships at local law firms.”
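The sections built up in the steps above amount to a simple document template. As a sketch, they can be assembled programmatically; the headings follow the article, while the sample entries are placeholders:

```python
# Each section from the article, with example (placeholder) entries.
sections = [
    ("Career Goal", ["Become a licensed registered nurse."]),
    ("Career Requirements", ["Earn a nursing degree", "Pass the licensing exam"]),
    ("Current Career Assets", ["High school diploma", "Volunteer hospital experience"]),
    ("Next Steps", ["Apply to nursing programs", "Seek internships at local clinics"]),
]

def render_plan(name, sections):
    """Assemble the career plan document as plain text."""
    lines = [f"Career Plan for {name}", ""]
    for heading, items in sections:
        lines.append(heading + ":")
        lines.extend(f"  - {item}" for item in items)
        lines.append("")
    return "\n".join(lines)

print(render_plan("Jane Doe", sections))
```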
- Writing a mission or vision statement at the top of your career plan helps to focus your ideas.
- Use your career plan as a working document. Regularly review it, checking off things that are completed, adding expected completion dates, or revising as necessary.
- Share your career plan with friends and family so they know your goals and can offer encouragement, advice and support.
Health & Safety
3D printing fumes, or ultrafine particles (UFPs), carry an occupational hazard designation from several health and safety governing boards, which deem that these fumes pose potential health risks to the respiratory system. Most 3D printing processes use a wide variety of thermoplastics and chemically derived materials. When these materials are heated and/or fused together, they emit UFP fumes (3D printing fumes) that are microscopic to the human eye, measuring in the sub-micron range (about 1/10,000 of a millimeter). In a study by NIOSH, the National Institute for Occupational Safety and Health, 3D printing with PLA filament at a low temperature was found to generate over 20 billion particles per minute, with ABS feedstock capable of releasing over 200 billion in the same scenario. These nanoparticles are very small and can easily enter the body via the respiratory, cardiovascular, and/or nervous system, where they can be extremely harmful to bodily function.
ABS is a synthetic compounded thermoplastic that is widely used for heavy-duty plastics such as LEGO bricks, automobile bumpers, and casings for electronics. Because of its sensitivity to changes in temperature and environment, it is highly recommended to use a 3D printer enclosure, which allows the ABS to cool down slowly after printing. Otherwise, if cooled too quickly, ABS can crack along layer lines, as well as curl and warp. In general, ABS can withstand more heat, pressure, and stress than PLA, which makes it an ideal material for wear-and-tear applications.
[Figure: recommended time-weighted-average (TWA) exposure limits for acrylonitrile, butadiene, and styrene.]
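A TWA limit averages exposure over a standard 8-hour shift rather than capping the instantaneous concentration. A minimal sketch of the usual OSHA-style calculation, TWA = Σ(concentration × time) / 8, with made-up numbers:

```python
def twa_8h(exposures):
    """8-hour time-weighted average exposure.

    `exposures` is a list of (concentration_ppm, hours) pairs;
    unlisted hours of the shift count as zero exposure.
    """
    return sum(conc * hours for conc, hours in exposures) / 8.0

# e.g. 2 ppm for 3 hours of active printing, 0.5 ppm for 5 hours after:
print(twa_8h([(2.0, 3), (0.5, 5)]))   # 1.0625 ppm
```

The result is then compared against the published TWA limit for the substance in question.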
The building blocks of solid chemicals, polymer chains, become loose and disorganized when heated, a property that allows the polymer to flow through your 3D printer while releasing chemical ingredients and UFPs (ultrafine particles) into the air. Some filaments are made up of more than one chemical; for example, ABS filament is composed of acrylonitrile, butadiene, and styrene.
Multiple research experiments have found that ABS, when heated to temperatures ranging from 210C to 800C, without flames, produces 20+ chemical by-products, including: | <urn:uuid:0e51ef01-7cc2-4091-9cf0-4c0f6995e78b> | CC-MAIN-2021-17 | https://kentro.biz/air-purifier/3d-printing-fumes-hazards-extraction/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038878326.67/warc/CC-MAIN-20210419045820-20210419075820-00124.warc.gz | en | 0.920221 | 486 | 3.578125 | 4 |
How do acoustic walls work?
Sound pollution is one of the most significant disturbances in our urban landscape. All sorts of factors contribute to this: ringing phones, electronic equipment, and open-plan offices.
Acoustic walls are made from acoustic panels and foam, which make them ideal sound barriers. They do a great job of reducing noise amplification.
Acoustic walls are made up of acoustic panels, consisting of a core made of sound-insulating material and an outer covering of porous fabric. This makes them perfectly designed to trap sound waves and control reverberations in enclosed areas. The porous outer layer allows sound to penetrate to the core, which then absorbs the noise.
Sound barriers like acoustic walls are excellent at reducing the amount of echo present within a room. This is especially important in larger places, where noise and echos have a more significant effect.
Acoustic walls help reduce noise reverberation, which lowers the overall sound level. This helps improve focus and creates calmer surroundings, allowing you to maximise business efficiency and output.
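The reverberation reduction described here is commonly quantified with Sabine's equation, RT60 ≈ 0.161·V / Σ(S·α): adding absorptive panel area shortens the reverberation time. A sketch with invented room numbers (absorption coefficients vary by product and frequency):

```python
def rt60_sabine(volume_m3, surfaces):
    """Sabine reverberation time in seconds.

    `surfaces` is a list of (area_m2, absorption_coefficient) pairs.
    """
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

room = 200.0                              # m^3, a small open-plan office
bare = [(220.0, 0.05)]                    # hard walls/ceiling/floor only
panelled = [(190.0, 0.05), (30.0, 0.9)]   # 30 m^2 replaced by acoustic panels

print(round(rt60_sabine(room, bare), 2))       # 2.93 s
print(round(rt60_sabine(room, panelled), 2))   # 0.88 s
```

Even a modest panel area with a high absorption coefficient cuts the reverberation time substantially, which is why rooms feel noticeably quieter after treatment.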
High noise levels create an increased risk of distractions, which leads to lowered productivity. Using sound barriers to improve acoustics makes it easier for staff to concentrate, make and receive phone calls, and hold conversations without distracting other colleagues.
That is why acoustic walls are an excellent modern soundproofing option.
Are you having confidential meetings? Discussing important projects? Or just looking for privacy in your home or office space? Acoustic walls prevent voices from leaking outside or across rooms. They help muffle sound, allowing employees to hold private conversations without being overheard.
Acoustic walls also help block noise from travelling outside houses, affording you more privacy in your home and allowing you to avoid disturbing your neighbours.
High levels of background noise have been shown to increase stress by making it difficult for people to hear and concentrate. By using sound barriers to reduce noise levels, you create a more welcoming environment. Sound pollution can also damage cognitive functioning, listening capacity, and cause headaches and stress.
With acoustic walls, you avoid health issues related to over-exposure to excessive sound.
Improve workplace safety
In workplaces with noisy equipment or machinery, acoustic walls can help absorb unwanted noise and make other sounds clearer. This allows staff to hear and understand each other, helping them to communicate efficiently to avoid accidents.
They’re also great for meeting requirements set by noise control protocols. This is because they allow you to cordon off areas in factories, industrial sites, or locations with ongoing building work, reducing noise levels.
Improve sound quality
By absorbing unwanted sound like noise and echos, acoustic walls help to make other sounds clearer, improving the quality of sound in a location.
This makes them excellent sound barriers for recording studios, home theatres, conference rooms and meeting desks.
Looking for noise control solutions?
At Duraflex, we offer transparent acoustic panels with excellent noise reduction properties. Made right here in NZ, our sound barriers are available in custom sizes and are easy to install.
To speak to a member of our team and get a quote, contact us at 0800 111 783 | <urn:uuid:282bd52f-cb80-4a0e-9940-057237154f4d> | CC-MAIN-2021-31 | https://duraflex.co.nz/category/uncategorized/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155529.97/warc/CC-MAIN-20210805095314-20210805125314-00184.warc.gz | en | 0.920823 | 641 | 3.625 | 4 |
Chondromalacia patella, also called chondromalacia of the patella, condition in which the cartilage on the undersurface of the kneecap (patella) becomes softened or damaged. Classically, the term refers to pathologic findings at the time of surgery. It is one of several conditions that may be referred to as runner’s knee and is sometimes described as patellofemoral pain syndrome (pain around and behind the kneecap), though some experts consider the two conditions to be distinct. Chondromalacia patella is, generally, an overuse injury found in athletes with extrinsic anatomical abnormalities of the lower extremity. It can also be caused by an acute injury to the knee, such as in patellar dislocation or a direct blow to the knee. In the older population it is usually associated with osteoarthritis in the patellofemoral joint.
The knee joint consists of three bones: the femur (thighbone), the tibia (the larger bone of the lower leg), and the patella. The bottom of the patella and the ends of the femur and tibia are covered with cartilage. The cartilage allows the bones to glide smoothly over each other. The knee joint is often considered to have three compartments, areas formed by the joining of the femur and tibia (in two places: the medial [inner] and lateral [outer] compartments) and the joining of the kneecap and the femur (the patellofemoral compartment). The hinge action of the knee is controlled by the quadriceps mechanism, made up of two tendons that hold the patella in place and cause the knee to straighten and bend. The quadriceps tendon extends from the quadriceps muscle and attaches to the patella, and the patellar tendon (which technically is a ligament) attaches the patella to the tibia. The medial and lateral extensions of that tendon form the medial and lateral retinaculum of the patella.
Causes and symptoms
Chondromalacia patella can be considered an advanced form of patellofemoral pain syndrome, which is associated with abnormal tracking of the patella over the femoral groove at the lower end of the femur. Over time the cartilage on the joint surfaces of the two bones begins to soften and break down. The cartilage is often described as being fissured, fibrillated, or blistered. Conditions that can contribute to abnormal tracking are femoral anteversion (inward twisting of the thighbone), external tibial torsion (inward twisting of the tibia), genu varum (bowlegs) or genu valgum (knock-knees), foot pronation, patella alta (a kneecap positioned higher than average), increased Q angle (the angle measuring the relation of the femur and patella to the patella and tibia), and imbalance of the quadriceps muscles. A traumatic injury to the knee, such as a direct blow to the kneecap or recurrent subluxation (partial dislocation) of the patella, can also cause chondromalacia patella.
The symptoms of chondromalacia patella often come on gradually. Patients often complain of pain on the front of the knee that worsens after prolonged sitting, such as a long car drive or sitting in a theatre. That constellation of symptoms may be referred to as the “theatre sign.” Other symptoms that patients will complain of are a grinding sensation, pain with walking up or down stairs, or pain when standing up from a sitting position. Standing after a prolonged period of sitting may result in stiffness as well as pain. It is not uncommon for patients to present with bilateral knee pain. Finally, with prolonged walking or activity, some patients may complain of knee swelling.
The symptoms of chondromalacia patella can resemble those of other knee problems. Arthroscopy is needed to make a definitive diagnosis, although useful clues may be obtained from the history and physical exam as well as from imaging studies.
On examination, patients will usually have pain with compression and rocking of the patella. They may also be tender on the undersurface of the patella and over the medial and lateral retinaculum. Patellar tracking abnormalities can also be observed while having the patient flex and extend the knee. If the examiner places a hand over the kneecap during flexion and extension, oftentimes grinding, or crepitus, can be felt.
X-rays looking particularly at the patellofemoral joint can show radiologic signs of arthritis that can suggest chondromalacia patella. For example, the presence of joint space narrowing or osteophyte formation on the undersurface of the patella could be indicative of chondromalacia patella. Magnetic resonance imaging (MRI) can show signs of fraying and cracking of the cartilage on the undersurface of the patella. Once the chondromalacia reaches grade III to grade IV, an MRI scan can reliably diagnose chondromalacia patella about nine-tenths of the time.
The progression of the condition can be graded once the diagnosis is made. Grade I is present if there is swelling and softening of the cartilage. Grade II will have fissuring as well as softened areas. At grade III the fissuring extends just short of the subchondral bone (the bone beneath the cartilage), and at grade IV the cartilage is destroyed down to the subchondral bone.
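The four-grade scale just described is a simple ordinal classification. As a purely illustrative sketch (not a diagnostic tool; the function name and exact wording below are our own), it can be encoded as a lookup table:

```python
# Illustrative sketch only -- not a diagnostic tool. The findings follow
# the four-grade scale described in the text above.

CHONDROMALACIA_GRADES = {
    1: "softening and swelling of the cartilage",
    2: "softened areas with fissuring",
    3: "fissuring extending nearly to the subchondral bone",
    4: "cartilage destroyed down to the subchondral bone",
}

def describe_grade(grade: int) -> str:
    """Return the cartilage findings associated with a grade (1-4)."""
    if grade not in CHONDROMALACIA_GRADES:
        raise ValueError("grade must be an integer from 1 to 4")
    return f"Grade {grade}: {CHONDROMALACIA_GRADES[grade]}"

print(describe_grade(3))
```

A lookup like this simply mirrors the ordinal structure of the scale; actual grading is done arthroscopically or by MRI, as the text notes.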
The approach to the management of chondromalacia patella almost always begins with nonsurgical treatment. Surgery is reserved for those patients who continue to have symptoms despite maximal nonoperative management.
The conservative approach to chondromalacia patella focuses on physical therapy and activity modification. Simple measures such as icing, using nonsteroidal anti-inflammatory drugs (NSAIDs), and reducing or modifying the activity that aggravates the symptoms can be instituted early in treatment. Patients typically also benefit from physical therapy that focuses on strengthening and balancing the quadriceps muscle. Often in patients with chondromalacia patella the vastus medialis oblique (VMO), one of the muscles that keep the patella on track, is underdeveloped and needs to be strengthened. In addition, stretching of the quadriceps, hamstrings, and iliotibial band can be helpful. Other approaches in physical therapy include patellar taping and patellofemoral joint mobilizations, or patellofemoral glides (movements of the kneecap in different directions by the therapist).
Bracing is often used by physicians for this disorder. The most common brace is a patellar knee sleeve with passive patellar restraints, with or without a patellar cutout; such sleeves have not been shown to reduce symptoms. Another type of brace has rigid patellar restraints and has been shown to be beneficial only if the patient is not compliant with physical therapy. For those with anatomic abnormalities, such as flat feet, orthotics can be considered.
Other nonsurgical options that can be instituted are injection therapies, such as the injection of corticosteroids. In addition, viscosupplementation is often used in the management of patellofemoral pain syndrome and chondromalacia patella when physical therapy is not sufficient. Viscosupplementation entails the injection of lubricants or hyaluronic acid into the joint.
Conservative management options are usually successful in improving symptoms, but surgery may be indicated if a significant amount of pain or dysfunction remains. Arthroscopic surgery generally involves the surgeon smoothing out the irregular surface of the patellar cartilage. Any loose pieces or debris in the joint are then washed out. Some surgeons also then perform microdrilling or microfracture on the undersurface of the patella, which creates clotting and scarring that result in a smoother surface on the bone. For those with excess lateral tilt or pressure, release of the lateral retinaculum is often performed. Distal patellar realignment procedures are sometimes done if there are patellar tracking abnormalities.
Most patients who are compliant with conservative treatment do well as long as the chondromalacia is not too advanced. For those for whom conservative treatment is insufficient or who are noncompliant, surgery is successful in approximately 60 to 90 percent of cases. | <urn:uuid:57b22486-afee-4a0b-8a48-45fd76226b60> | CC-MAIN-2017-04 | https://www.britannica.com/science/chondromalacia-patella | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00562-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930189 | 1,828 | 3.5625 | 4 |
Do you ever catch yourself snacking on the couch while watching TV, only to realize after the two-hour marathon of (insert your favorite show) that you’ve devoured the entire bag of Doritos, and then some? Often when we’re eating while doing another activity, such as watching TV, we’re not being mindful of what we’re putting into our bodies. We start off with good intentions of having just a small, snack-size portion of a food, but that leads to mindless eating, and before you know it, the whole bag is gone. We recommend enjoying what you’re eating by being mindful of it. Really be aware of your senses while eating – pay attention to the color, smell, taste, and texture of the food or drink. What emotions are occurring while you eat a particular food? We invite you to practice being present, in the moment, when you’re eating and to be mindful of the connections you associate with foods.
Do you eat because you’re bored or actually hungry? You might crave a snack or something small to tide you over to your next meal, but is that what you get? Listen to your body. If you aren’t hungry but need something to keep your mouth busy, try a hard piece of candy or sugarless gum. Fruits and vegetables make great snacks.
The U.S. Department of Agriculture recommends four to six meals a day: three main meals plus snacks in between. A snack should be between 100 and 200 calories. Pay attention to the energy density of the foods you are eating. Avoid foods that pack lots of calories into small portions and replace them with an equal volume of foods with fewer calories (nutrient-dense choices). Here’s a good guide to follow:
· High Density (eat less of these): These are foods with 4-9 calories per gram of weight. Examples: crackers, cookies, and high-fat foods like butter and bacon.
· Medium Density (proceed with caution): Foods with medium energy density have 1.5 to 4 calories per gram of weight. Foods that fit here include hard-boiled eggs, legumes, dried fruits, bagels, jelly, whole-grain bread, and part-skim mozzarella cheese.
· Low Density (go for it!): These foods have 1.5 calories per gram or less. Examples: tomatoes, cantaloupe, broth-based soups, fat-free cottage cheese, plain fat-free yogurt, strawberries, broccoli, and lean meats like turkey or chicken breast. Most fresh fruits and vegetables fall into this category.
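The calorie-per-gram thresholds in the guide above amount to a simple classification rule. As an illustration only (the function name and the handling of exact boundary values are our own assumptions), it could be sketched like this:

```python
def energy_density(calories: float, grams: float) -> str:
    """Classify a food's energy density using the thresholds from the guide.

    Low: 1.5 calories per gram or less; medium: 1.5 to 4; high: above 4.
    (How to treat the exact boundary values is our own assumption.)
    """
    if grams <= 0:
        raise ValueError("grams must be positive")
    cal_per_gram = calories / grams
    if cal_per_gram <= 1.5:
        return "low"      # go for it!
    elif cal_per_gram <= 4:
        return "medium"   # proceed with caution
    return "high"         # eat less of these

# About 100 g of strawberries (~32 calories) vs. 28 g of crackers (~130 calories)
print(energy_density(32, 100))    # -> low
print(energy_density(130, 28))    # -> high
```

The point of the rule is simply that volume matters: for the same calories, low-density foods let you eat a much larger portion.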
Here’s the challenge you’ve been waiting for – Get to know yourself!
By: Tatiana Burton | <urn:uuid:6b27ebf5-e72b-49d4-9246-b8e3a4ca6a59> | CC-MAIN-2017-26 | http://feverlacquer.blogspot.com/2012/11/mindful-eating.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320679.64/warc/CC-MAIN-20170626050425-20170626070425-00088.warc.gz | en | 0.936815 | 575 | 2.984375 | 3 |
Giving birth to a stillborn child means that the fetus died in the womb before delivery; there is no sign of life in the child that is born. It is entirely different from a miscarriage and from giving birth to a living child.
Stillbirth is very different from miscarriage, which occurs in the early stages of pregnancy. At what week is stillbirth most common?
Stillbirth is the delivery of a fetus that has died at 20 to 28 weeks of gestation or later. But what is the reason? Why does this happen? Often the cause remains unknown; when it can be identified, it varies widely.
What is stillbirth?
Hearing that your unborn child has died is devastating, and it is not enough to say how sad such news is. If the unborn child dies at 20 weeks of gestation or later, it is called a stillbirth. Note that if the fetus dies before 20 weeks, it is called a miscarriage.
In the United States, approximately 1 in 160 pregnancies ends in stillbirth. Most of the time, the baby dies in the womb before labor begins; in a small number of cases, the baby dies during labor and delivery.
What causes stillbirth?
What causes a baby to be stillborn? In medical science, the exact cause of stillbirth often remains unknown. Even so, physicians have identified a number of contributing factors, which are discussed below. A child can die in the womb for various reasons.
A child may die in the womb because of the mother's poor health, uncontrolled diabetes, high blood pressure, and other conditions. In many cases, however, the cause is never identified, which means the same kind of problem can recur in a later pregnancy. Let's look at the factors that can cause a child to die in the womb.
1. Mother’s weak health
One of the causes of stillbirth is the mother's poor health. A mother's health is the key to giving birth to a healthy child. If the mother's health is weak, the child does not get proper nutrition and, as a result, may die in the womb.
2. Uncontrolled diabetes
Uncontrolled maternal diabetes is a significant cause of fetal death. Many mothers do not know that they have diabetes. As a result, the diabetes remains uncontrolled, and the child may die in the womb.
3. If fetal growth is impaired
If fetal growth is restricted, the child may die in the womb. Because of weakness and a lack of proper nutrition, the fetus fails to grow and dies.
4. High blood pressure of the mother
High blood pressure is one of the leading causes of death in unborn children. Some mothers do not receive enough medication for severe hypertension, and even with medication the pressure may not come under control; as a result, the child can die in the womb.
5. Mother being over 35 years of age
When a woman is more than 35 years of age, her fertility is relatively low, and if the mother is over 35, the child may in some cases die in the womb.
6. If you take drugs during pregnancy
Taking drugs during pregnancy, such as alcohol, nicotine, or any other harmful substance, can affect the baby and even kill the baby in the womb.
7. Sleeping on your back during pregnancy
If the mother sleeps on her back after 28 weeks of gestation, the baby is more likely to die in the womb.
8. Thyroid problems
If the mother has thyroid problems, the child can die in the womb. Elevated thyroid hormone interferes with fetal growth, and as a result the fetus may die.
9. Physical injury
Injuries to the mother's body, particularly in or around the abdomen, during pregnancy can affect the unborn child and can kill the child in the womb.
10. Congenital problems
Fetal death can also be caused by anatomical defects, including chromosomal and genetic abnormalities. Sometimes a fetus dies because of one or more congenital disabilities.
Maternal, fetal, and placental infections are among the leading causes of fetal death, mainly when they occur within 28 weeks of conception. Fifth disease, cytomegalovirus, listeriosis, and syphilis are some of the infections that can cause fetal death.
12. Accidents involving the umbilical cord
An accident involving the umbilical cord can also kill the fetus, but this is very rare. When the cord develops a knot or is not well connected to the placenta, the baby may not get enough oxygen.
Umbilical cord problems are also seen in many healthy children, however, so the chance of the baby dying from a cord problem alone is very low.
Other factors can also cause the fetus to die, such as complications in which the baby does not get enough oxygen during delivery, or trauma to the mother.
Also, if the baby’s pulse is ruptured in the womb, the mother’s blood group is positive, but if the father’s blood is negative, then the child in the womb is positive, but the child can often die in the womb. causes of stillbirth in india
How do you know if your baby has died during pregnancy?
If the baby's movements in the abdomen decrease, you have to suspect that there is a problem. It is usually not obvious on the first day.
You may notice that a child that used to move ten times a day suddenly moves only six times, the next day three times, and then stops completely. That is when you have to understand that something has happened to the child.
Usually, if you go to the doctor and get treatment immediately on the first day that movement decreases, it is possible to save the child.
How do you check whether the baby has died?
A pregnant woman may suddenly notice that her unborn baby is not moving at all; if she then seeks the help of a doctor, the doctor may inform her that her unborn child has died.
Also, if a pregnant woman never realizes that her unborn baby has died or stopped moving, she may find out from the doctor during a routine checkup.
The doctor listens to the baby’s heartbeat with a hand-held ultrasound device called a Doppler. If the ultrasound shows that the baby has no pulse, he or she may recommend an ultrasound.
This will allow the doctor to confirm that the baby's heart has stopped beating and that the baby has died. Sometimes the ultrasound also reveals precisely why the fetus died.
The doctor can also examine blood from the baby's body to look for the possible cause of death. An amniocentesis test can also be done to find out whether the baby's death was due to a chromosomal problem.
How is a stillborn baby delivered?
Can a dead baby be delivered normally? For health reasons it may be necessary to deliver a stillborn child without delay, but if there is no health problem, it is possible to wait a little and see whether contractions start on their own.
During this waiting period, however, the doctor will watch you closely for infection in the womb or for blood clots.
Most women, when they find out that their unborn child has died, wish to give birth through induced labor, whatever the delivery procedure or anesthesia used. They do not want to wait while carrying a dead child in the womb.
If the woman’s cervix is not enlarged for childbirth, the doctor may prescribe medication through her genitals to begin the process. She is then given an iv infusion of the hormone Oxytocin (Pitocin) to start contracting the uterus. Most women can give birth in this way in the normal process.
Dilation and evacuation (D&E)
If you are in the second trimester of pregnancy and have access to an experienced doctor, the dead baby can be removed from the womb through the dilation and evacuation (D&E) procedure.
During the procedure, the woman may be given general anesthesia, or IV sedation with local anesthesia, while the doctor dilates her cervix and removes the dead baby.
The following points should be considered when choosing between the two types of delivery described above:
D&E is the right decision if one wants to finish the delivery process quickly. Also, if an experienced doctor performs this procedure, the chances of complications are very low. In fact, the risk is low in both types of delivery.
However, inducing labor is the right decision for women who want to go through the normal process and the experience of childbirth, despite the pain, and to see and hold the dead child in their arms.
Also, inducing labor makes it a little easier to find the exact cause of death by examining the baby after childbirth.
What happens after a stillbirth?
Before going into the delivery process, the parents and doctors should discuss exactly what will happen after the delivery of the child. If the parents want to see the dead child, hold him or her, and bury the child according to their religious customs, they have to tell the doctor in advance.
The doctors can try to find out exactly what caused the baby to die. Initially, they will examine the placenta, its membranes, and the umbilical cord immediately after delivery. They will then collect cells, test them in the lab and, subject to parental permission, perform various tests on the baby.
These tests can be difficult for parents who are grieving the loss of a child. Also, even after so many tests, it may not be possible to learn why the baby died in the womb.
On the other hand, parents can learn some vital information. For example, if the stillbirth is found to have a genetic cause, they can watch for that issue during the next pregnancy.
Or it may turn out that the problem is unlikely to recur, for example because it was caused by an infection or a random congenital disability. It is essential to know this type of information before conceiving again.
Doctors can tell parents what kind of information can be obtained by examining the body of the child, how it is done, and how much it may cost.
Parents who decide that they do not want a full autopsy can still get some useful information from a few smaller tests, such as X-ray, MRI, ultrasound, and cell-sample testing.
The baby’s mother can also be examined for several health and maternity tests and try to find out the cause of the baby’s death by knowing the family’s health history. causes of stillbirth at 32 weeks
Mother’s treatment if the baby dies in the womb
If the baby has not moved in the womb for more than two days, the doctor will run some tests on you. Listening for the baby's heartbeat can confirm whether the baby is alive. A baby that has died in the womb poses no immediate harm to the mother.
Labor usually begins within two weeks after the baby dies. You can wait up to two weeks if you want and let the fetus be delivered naturally. But if labor has not started after two weeks, you cannot wait any longer.
Because if the baby is in the mother’s womb for more than two weeks, it can bring various changes in the blood, which is very harmful to the mother. If there is no delivery within two weeks, artificial delivery is established.
Many women do not feel comfortable carrying a dead baby in the womb for so long. In that case, labor is induced to deliver the fetus. If the dead baby is the mother's first child, it is not removed by cesarean section.
But if the mother has had a cesarean section before and this time the baby dies in the womb, the dead baby is removed by cesarean section. The mother may bleed heavily and become very weak, so blood must be arranged in advance.
Once a baby has died in the womb, the risk of the next baby dying is about 2.5 percent. So take the next steps with a doctor's advice: the doctor will investigate the cause of the previous child's death and plan accordingly.
What makes some women a little more at risk of stillbirth?
Anyone can have a stillbirth, but some women are at higher risk. The risk of fetal death increases for the following reasons:
- If a fetus has died in the past or if intrauterine growth restriction occurred during a previous pregnancy. Premature delivery, pre-pregnancy hypertension, or pre-eclampsia in a previous pregnancy also increases the risk of stillbirth.
- If there are chronic health problems such as lupus, hypertension, diabetes, kidney problems, thrombophilia (a blood-clotting disorder), or thyroid disease, the fetus is also at risk of death.
- If complications increase during this pregnancy, such as intrauterine growth restriction, hypertension during pregnancy, pre-eclampsia, or cholestasis of pregnancy, the risk of fetal death increases.
- Smoking, drinking alcohol, or using harmful drugs during pregnancy increases the risk of fetal death.
- If the mother is carrying two or more babies.
- If the mother is obese, the risk of fetal death increases.
There are many other risk factors for such problems. American women of African descent are at higher risk than other American women, and women who conceive at an advanced age are also at risk of stillbirth.
Also, based on several pieces of evidence, experts say that in the case of in vitro fertilization (IVF) or intracytoplasmic sperm injection (ICSI), the risk of fetal death increases even when the mother is carrying only one baby.
Age also plays a vital role in fetal death. Both younger and older women have an increased risk, which is much lower for those who have children between the ages of 20 and 30.
Such adverse events are more common in girls under 15 years of age and in women over 40. Experts say that young girls are at higher risk because of physical immaturity and lifestyle habits.
Also, older women are more likely to carry fetuses with chromosomal and congenital malformations, to suffer from chronic problems such as diabetes and high blood pressure, and to conceive twins, all of which are leading causes of fetal death.
How can I reduce the risk of stillbirth?
If you are not yet pregnant, talk to your doctor before conceiving. That way, if you have any kind of complication, you will have the opportunity to treat it fully before pregnancy.
Also, if you have diabetes or high blood pressure, talk to your doctor about getting them under control before you become pregnant.
Also, tell your doctor if you are already taking any medications so that he or she can adjust the doses as needed. And before taking any herbal remedy or medicine that can be bought without a prescription, ask your doctor how safe it is during pregnancy.
Take 400 micrograms of folic acid daily (it can be taken as part of a multivitamin), starting at least a month before you begin trying to conceive. Taking folic acid can reduce the risk of congenital neural tube problems such as spina bifida.
If you are obese, try to lose weight before you get pregnant. But remember never to try to lose weight during pregnancy. Your doctor can help you figure out how to reach a healthy weight first.
Note, however, that according to the Institute of Medicine, obese women should gain no more than 11-20 pounds during pregnancy.
During pregnancy, avoid smoking, alcohol, and harmful drugs. If you have trouble quitting such habits, ask your doctor about a cessation or rehabilitation program.
One study found that women who quit smoking after a first pregnancy significantly reduced the risk of stillbirth in their next pregnancy.
If bleeding from your genitals occurs in the second or third trimester of pregnancy, seek medical attention immediately, as it may be a sign of placental abruption.
Seek medical attention immediately for other symptoms as well, such as uterine tenderness, back pain, the sudden onset of contractions, or decreased movement of the fetus.
Your doctor may ask you to start counting the baby's movements by 28 weeks, for example by timing approximately how long it takes to feel the baby move ten times.
If you notice fewer than ten movements in two hours, or if you think the fetus is moving less than usual, seek medical attention immediately.
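The kick-counting rule of thumb above (fewer than ten movements felt within two hours warrants prompt attention) can be sketched as a tiny check. This is illustrative only and not medical advice; the function name and input format are our own:

```python
def kick_count_concern(movements_felt: int, minutes_elapsed: float) -> bool:
    """Return True if a kick count suggests seeking medical attention.

    Rule of thumb from the text: fewer than 10 movements felt within
    2 hours (120 minutes). Illustrative only -- not medical advice.
    """
    if movements_felt < 0 or minutes_elapsed < 0:
        raise ValueError("counts and time must be non-negative")
    return minutes_elapsed >= 120 and movements_felt < 10

print(kick_count_concern(6, 120))   # -> True: fewer than 10 in 2 hours
print(kick_count_concern(10, 45))   # -> False: 10 movements in under 2 hours
```

Note that the check only flags a completed two-hour window; before that, the count is still in progress, and any other worrying sign should of course prompt a call to the doctor regardless.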
Also, if you notice any other symptoms of any problem during pregnancy, talk to your doctor without delay.
If you have already had a stillbirth (or are at risk for any other reason), you will have a variety of tests and routine monitoring from the third trimester onward.
Nonstress tests and biophysical profiles will be done to check whether the baby's heart rate is normal. If these tests show that the baby is in danger, the doctor may recommend delivering early, possibly by cesarean section.
Once a stillbirth is delivered, what is the risk of recurrence?
If the doctors can determine the cause of the fetus's death, they will be able to say how likely it is to happen again.
If you have ongoing health problems such as lupus, chronic hypertension, or diabetes, the risk is higher. Also, if a child died because of a pregnancy complication such as placental abruption, the chances of the same thing happening again are higher.
It is also true that even if an event like the death of your unborn child is unlikely to happen again, it will be very hard for you not to worry about it.
Before conceiving again, review all your health issues with a doctor. If you go to a new doctor, make sure he or she has all your previous records and test results.
You can also contact a perinatologist (a specialist in high-risk pregnancies) or another specialist as needed, if one is available in your area. For example, if your baby suffered from a genetic problem, a genetic specialist can be consulted to assess the risk of fetal death or other pregnancy problems in the future.
In the present era of modernization and development, man has become a machine. Human habits have undergone a cyclonic change since the technological revolution. The active and dynamic life of the past has been narrowed to a small cabin with a laptop, and socialization is limited to social media. Stress and anxiety have become commonly heard and discussed words. For the perks of modernization, the human race has certainly paid a huge price in terms of its health.
The comprehensive answer to this big mess is AYURVEDA. This ancient traditional science is special owing to its unique approach. The word Ayurveda means the knowledge of life. This medical science is believed to have existed in India since 2000 B.C. Its fundamental principles are very much in tune with nature. Ayurveda advocates the relevance of five major elements in the human body: earth, water, fire, wind, and ether. These elements existed 5,000 years ago and they exist now, hence there is no change in those fundamental principles.
Nowadays health is perceived merely as a commodity to be purchased, whereas Ayurveda regards health as a way of life. A healthy lifestyle is equivalent to preventive medicine. Most developed countries spend a fair amount of their total budget on their national health services, and policymakers are laying emphasis on the prevention of disease and on complete health. In this regard, the concepts and principles of Ayurveda must be put forward and presented to the entire world.
Ayurveda primarily aims at maintaining the health of the healthy and secondarily at curing disease. It describes a lifestyle in terms of healthy food, healthy living habits, healthy surroundings, a proper behavioural regimen, and a code of conduct for the individual and society. Unlike other systems of medicine, such as Western medicine or homeopathy, Ayurveda is not merely a system of medicine but a science of life and longevity.
The fact is that longevity cannot be purchased; it has to be inculcated.
Following an Ayurvedic diet, lifestyle, and herbal medicines is an appropriate answer to the menace of lifestyle disorders cropping up around the world. The basic principle of treatment in Ayurveda is removing the root cause of disease, not just fixing the symptoms. Thus Ayurveda gives a long-term solution rather than an easy quick fix.
The science of Ayurveda has stood the test of time and is still being validated today by research-based experimental studies. Hence Ayurveda is not just an ancient science but also a very scientific system of medicine. Awaken early to the knowledge of Ayurveda and stay healthy.
Steve Jobs and Bill Gates are both extremely big names in this era of technology; both are responsible for bringing revolutionary changes and advancements in science and technology. Their contributions to the world of invention have made life easier for everyone, in ways people could never have imagined. In 1976 Steve Jobs founded Apple; he was the visionary behind the company and had a very futuristic approach to his inventions. Bill Gates was the creator and founder of Microsoft in 1975. Bill Gates had a very realistic approach and preferred thinking about the present, unlike Steve Jobs, who thought about the future. According to Forbes, Apple and Microsoft are among the most popular companies of all.
Definition of Steve Jobs
Steve Jobs was born on 24th February, 1955. He was brought up modestly and did not complete college. Steve Jobs founded Apple in 1976 and created the first Apple computer. He had a vision in mind: to develop a computer that was affordable and also user friendly. He had a very autocratic style, which made some people think he was arrogant and convinced that whatever he did was right. He had a very futuristic approach; he was a hardware expert, and he wanted to create something like a personal computer that the elite and the middle class alike could purchase and use on an everyday basis. Steve Jobs's vision was so innovative that he thought about inventions people would desire years in the future. He wanted to make a better future, and for that purpose he worked hard in the present. He was known for being a hardworking man and also very demanding of his employees. The reason he could not reap many profits early on was that the products he invented were ahead of what people needed at the time, so they were not readily accepted. When Apple built the Macintosh, it fell short of people's expectations because it was too far ahead of its time and could not generate the expected profits; only years later did the Macintosh come to be seen as an amazing invention. His major achievements include the iPod, iPhone, and iPad. He introduced technology that no one else had thought of, and young people remain inspired by his inventions. He was diagnosed with cancer and prescribed rest by his doctors, but he continued to work even more vigorously; as a result his cancer worsened, leading to his death. His death was a big loss to the world of technology.
Definition of Bill Gates
Bill Gates was born on October 28, 1955. He came from a middle-class family, studied hard, and went to Harvard for further studies. He was very intelligent by nature. Gates was the founder of Microsoft in 1975. Microsoft was more pragmatic and, of course, sophisticated. His approach to invention was directed at the products that were needed at the present moment. He did not try to think ahead of his time, nor was he preoccupied with the future. His opinions were very realistic, and he wanted to invent products to fulfill the needs of the present rather than the future. He wanted to give people what they could use and would want now, because he understood that futuristic inventions would not interest people in the present; they would go unused and generate no profits. He wanted to build a company that would be profitable in the present, and he did not believe in huge investments for future profits. He was enthusiastic about making money from business, was good at it, and knew all the tactics required. A large sum of his money goes to many charitable organizations, which makes him a great man. He is now the chairman of Microsoft and one of the richest and most powerful men in the whole world.
Differences in a Nutshell
- Steve Jobs founded Apple in 1976, while Bill Gates founded Microsoft in 1975.
- Steve Jobs had a very futuristic approach towards his inventions and thought about the future whereas Bill Gates had a very realistic approach and thought about the present.
- Steve Jobs did not earn much profit at first, and even went into debt, because of his futuristic inventions, while Bill Gates gained a pool of profits and became one of the richest men in the world and the chairman of Microsoft.
- Steve Jobs thought about long term profits while Bill Gates thought about short term profits.
There are many people in this world who make a name for themselves and become famous for one thing or another. Two such people, who have worked hard to achieve this, are discussed in this article. What they have done, how they have done it, and what its significance is will be explained here.
Findings from a mouse study suggest that Zika virus infection may have serious reproductive consequences for men.
Zika is a mosquito-borne virus that often produces no symptoms in men or women. Those who do have symptoms usually experience a minor rash, low-grade fever, aches, and red eyes. Because the infection runs its course in a week or so, many people never suspect they are infected, especially during cold or flu season.
You've likely heard a lot about Zika because of its association with an epidemic of birth defects in infected pregnant women. The unborn child of a pregnant woman infected with the Zika virus is at significant risk of being born with a smaller than average head (a condition called microcephaly) and severe brain defects.
While Zika is well known for its impacts on pregnancy, recent mouse studies suggest that a Zika virus infection could also negatively impact the male reproductive system.
Named for the forest where the virus was first discovered in a rhesus monkey in 1947, Zika was not detected in humans until 1952. For more than 20 years, the virus quietly moved from Africa to Asia.
The first human outbreak was on the Pacific island of Yap in 2007, with 185 suspected cases; the World Health Organization (WHO) estimates that 73% of the island's population was infected with the virus over the next three years. No hospitalizations or deaths were reported, and the virus spread to other Pacific islands, French Polynesia, and then Brazil in March 2015. Since then, 48 countries and territories in the Americas have reported the Zika virus.
As of May 3, 2017, more than 4,900 Americans have been infected with Zika while traveling, according to the Centers for Disease Control and Prevention (CDC). Additionally, 301 people were infected by the virus within US borders.
As of now, we know that Zika spreads:
- Through the bite of an infected Aedes species mosquito.
- By transmission during pregnancy from mother to child.
- Through transfusion of blood products that contain the Zika virus.
- Through exposure in a healthcare or laboratory setting.
- By sexual transmission, even when no symptoms are present.
To avoid exposing a partner to the Zika virus through sexual transmission, the CDC offers the following guidance:
- Men who have been exposed to the virus, or traveled through an area of Zika activity, are advised to wait for at least six months before trying to get their partner pregnant.
- Women considering pregnancy are advised to wait at least eight weeks after symptoms start, or their last possible exposure to the virus, before trying to get pregnant.
The possibility that Zika spreads through sexual transmission was first documented in 2008, when an American researcher working in Senegal returned to his home in Colorado; his wife was infected with the virus within weeks.
To study how the virus impacts the male reproductive system, researchers at the Washington University School of Medicine in St. Louis infected male mice with one of two mouse-adapted strains of Zika, and other mice with a related virus that causes dengue fever.
The Zika infection had pronounced impacts on the reproductive system of the mice, as reported in the journal Nature in October 2016, including:
- Within a week of infection with the virus, Zika was found within sperm cells and tissue responsible for sperm production, although the testicles still looked normal.
- By two weeks, testosterone levels and sperm motility declined.
- Zika infection caused tissue damage and testicular cell death that appeared to result in permanent shrinkage of the testicles.
- When paired with uninfected females, the reproductive damage in male mice resulted in reduced rates of pregnancy and healthy offspring.
- The dengue virus did not appear to target the reproductive organs of mice infected with that virus, unlike the Zika virus.
- Damage observed in the testicles appeared to result from Zika infection and a large inflammatory response to the infection by the mouse's immune system.
- Scientists believe the damage to the reproductive system, including reduced production of sperm and sex hormones, would be permanent.
We already knew that Zika infections persist in the sperm and male reproductive organs in humans for up to six months. Researchers continue to develop studies to understand the impact of Zika on male reproduction.
And to be clear: we can't say for sure whether these mouse-adapted Zika strains act as they would in humans. Plus, the mice were infected with very high quantities of virus that may not compare to the amount transmitted by a mosquito bite. More virus could mean a more serious infection, a bigger inflammatory response by the immune system, and more severe effects of the virus.
In discussing this research and other studies looking at Zika and male fertility, Francis Collins, the director of the National Institutes of Health (NIH), said research is "now beginning to examine whether men who have been infected show evidence of similar damage and, if so, to what extent. They also hope to better understand why Zika specifically targets cells of the male reproductive system and whether the immune system contributes to causing the tissue damage."
Many important questions await answers from ongoing studies into Zika, its transmission, and impacts.
Thanks heaps mate, you're truly a saviour.
here is a better proof, i think.
you should know that an integer is even if you can express it as 2n for some integer n, it is odd if you can express it as 2n + 1 for some integer n. if a number is divisible by 6 (a multiple of 6), you can write it as 6n for n an integer, if its divisible by 4, you can write it as 4n for n an integer and so on.
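As a quick illustration of these quotient-and-remainder forms (a side note, not part of the original argument), Python's built-in `divmod` recovers the n and r for any divisor:

```python
# Every integer x can be written as x = d*n + r with 0 <= r < d;
# divmod(x, d) returns that (n, r) pair. The "2n", "2n + 1", and
# "6n + r" forms used in the proof are exactly these decompositions.
for x in [14, 9, 24, 27]:
    n, r = divmod(x, 6)
    assert x == 6 * n + r and 0 <= r < 6
    print(f"{x} = 6*{n} + {r}")
```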
I'll try to tone down the logic and formal math for this proof.
We want to show that if a number x is divisible by 4 and divisible by 3 then it is divisible by 6.
Proof: assume that x is not divisible by 6. then we can write x as 6n + r, where n is an integer and r is the nonzero remainder when dividing by 6. therefore, r = 1, 2, 3, 4, or 5.
and so we have 5 cases.
case 1: x = 6n + 1 (that is when we divide x by 6 we have 1 as a remainder).
note that x = 6n + 1 = 3(2n) + 1, since 2n is an integer, it means x is not divisible by 3 (we have a remainder 1).
case 2: x = 6n + 2
notice x = 6n + 2 = 3(2n) + 2
that means we have 2 as a remainder when we divide x by 3, so x is not divisible by 3
case 3: x = 6n + 3
note that x = 6n + 3 = 3(2n + 1), so in this case x is divisible by 3. now we will check if it is divisible by 4.
notice that x = 6n + 3 = 2(3n + 1) + 1. this means x is odd, and therefore is not divisible by 4.
case 4: x = 6n + 4
so x = 6n + 4 = 3(2n + 1) + 1, so x is not divisible by 3
case 5: x = 6n + 5
so x = 6n + 5 = 3(2n + 1) + 2, so x is not divisible by 3.
so we see that in every case, a number that is not divisible by 6 is not divisible by 3, or (in case 3) not divisible by 4. in other words, it cannot be divisible by both 3 and 4. so by the contrapositive, the original statement is true: any number divisible by both 3 and 4 is divisible by 6.
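The whole case analysis can also be checked numerically over a finite range; a small Python sketch (illustrative only, not a proof):

```python
def divisible(x, d):
    """True when d divides x exactly."""
    return x % d == 0

for x in range(-1000, 1001):
    # original claim: divisible by 3 and by 4  =>  divisible by 6
    if divisible(x, 3) and divisible(x, 4):
        assert divisible(x, 6), f"counterexample: {x}"
    # contrapositive covered by the five cases above:
    # not divisible by 6  =>  not divisible by both 3 and 4
    if not divisible(x, 6):
        assert not (divisible(x, 3) and divisible(x, 4)), f"counterexample: {x}"

print("verified for -1000..1000")
```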
For all those who think NASA's plan of snagging an asteroid and digging out samples for further study is impractical, here are more details from the space agency explaining what it would do after capturing one.
NASA released a set of new photos and a video animation on Thursday, depicting its planned mission, announced earlier this year, to find, capture, redirect and study a near-Earth asteroid. The mock-ups depict crew operations including the Orion spacecraft's trip to the relocated asteroid, its rendezvous with the space rock and astronauts maneuvering through a spacewalk to collect samples from the asteroid before moving it to a stable orbit in the Earth-moon system.
“This mission represents an unprecedented technological feat and allows NASA to affordably pursue the Administration's goal of visiting an asteroid by 2025,” NASA said. “It raises the bar for human exploration and discovery while taking advantage of the diverse talents at NASA.”
According to the conceptual photos and video, NASA’s Orion spacecraft, with a two-person crew, will approach a captured asteroid after traveling through space for about nine days atop a heavy-lift rocket. The Orion will swing by the moon’s gravity to pick up speed and once the spacecraft reaches the asteroid, it will dock with the robotic capture vehicle that has hooked the space rock.
After the spacecraft is connected to the robotic capture vehicle, the two astronauts will use a translation boom to travel from the Orion spacecraft to the captured asteroid during a spacewalk. One of the astronauts will hold onto a mechanical arm, which will then be lowered by the other astronaut toward the asteroid to begin retrieving samples. Hundreds of rings will be affixed to the asteroid capture bag, which will help the astronaut carefully navigate the asteroid’s surface.
After storing the samples in a container, the astronauts will return to the Orion spacecraft. This process will be repeated for up to six days, after which the crew will undock from the robotic capture vehicle and return to Earth in about 10 days.
“Part of President Obama's FY 2014 budget request for NASA, the asteroid initiative capitalizes on activities across the agency's human exploration, space technology and science efforts,” NASA said in a statement, adding that the agency’s asteroid mission will likely consider further alternatives in 2014.
On Wednesday, NASA announced that it would reactivate its Wide-field Infrared Survey Explorer, or WISE, an asteroid-hunting spacecraft, to identify potentially dangerous near-Earth objects and asteroids for future exploration missions.
According to NASA, WISE is expected to detect the size, thermal properties and other details of about 2,000 space rocks as part of its asteroid exploration initiative.
Homelessness, substance abuse, mandatory treatment
For the government to be successful in addressing homelessness, it must focus on the link between homelessness and substance abuse. In New York City and elsewhere, advocates are reluctant to publicize the connection between substance abuse and homelessness. Federal laws and programs that attempt to deal with homelessness, such as welfare, Social Security, federal housing laws, and the McKinney Act and various other federal acts, do not provide a comprehensive approach to the treatment of those who are both homeless and substance abusers. Because the Supreme Court has held that the Constitution does not provide a right to shelter, advocates have turned to the state courts. Unfortunately, few state constitutions contain language providing the poor a right to care and housing. Additionally, the substance abuse problems seen among the homeless complicate the application of any state statutes. This Article proposes a federal solution, arguing that the only long-term approach with any chance of achieving real success is a national remedy that includes treatment for substance abuse as a condition attached to housing.
Melanie B. Abbott,
Homelessness and Substance Abuse: Is Mandatory Treatment the Solution?,
22 Fordham Urb. L.J. 1
Available at: https://ir.lawnet.fordham.edu/ulj/vol22/iss1/1
The IPMN condemns all forms of racism, including anti-Semitism and Islamophobia. There is no room for such hatred in our society today.
History has shown that Jewish people have been persecuted and subjected to genocidal actions because of their religious identity. Christianity has been, and in some cases continues to be, a major culprit because of heresies that condemn Jews (along with peoples of other religious faiths) who do not convert to faith in Jesus Christ. In fact, one of the great ironies in the ongoing political debate about Israel is Christian Zionism (see here for definition) which supports a Jewish state today, but presumes a very anti-Semitic eschatology. For a good study of Christian Zionism, refer to Zionism Unsettled, published by this network and available here.
Since June 2014 there has been a good deal of conversation about the Presbyterian policy of divestment from three U.S. companies (Caterpillar, Hewlett-Packard and Motorola Solutions) that profit from non-peaceful pursuits in Palestine. Because many people blur the lines between Judaism, Zionism and Israel, and use the label “anti-Semitic” to shut down debate, we offer the following definitions and observations on these terms as basic tools for Presbyterians (and others) to use in constructive dialogue:
Jewish — a person of faith from Judaic tradition, or denoting a person of culture or ethnicity rooted in Judaism.
Israeli — a person who is a citizen of Israel. Not all Israelis are Jewish, and not all Jews are Israeli. Israel’s population is over 20% non-Jewish.
Zionism — A political ideology often attributed to Theodore Herzl that sees Judaism, and all Jewish people around the world, as a nation rather than as a religious or cultural affiliation. Zionists have many different opinions about the modern State of Israel, and many different political approaches to the question of Palestine. Not all Zionists are Jewish and not all Jews are Zionists.
Anti-Semitism — Though Arabs are Semitic people, this term has come to refer to hostility and prejudice specifically against Jews. Anti-Semitism is a cultural system that consists of stereotypes of, misinformation about, and mistreatment of, Jewish people. The system of anti-Semitism may use Jewish people as agents of oppression, by blaming other kinds of oppression - like racism, classism, and Islamophobia - on some inherent quality of Judaism or Jewish people. Those seeking to maintain the status quo of the Israeli occupation of Palestine unjustifiably equate advocating for justice for Palestinians with anti-Semitism.
Since Semitic peoples cannot be called “anti-Semitic,” a new label has been invented for criticism of Israeli policies by Jews who are being called “self-hating Jews,” which illustrates that the divisions are based on ideology not ethnicity. Members of groups such as Jewish Voice for Peace, T’ruah (formerly Rabbis for Human Rights), and Open Hillel have been mercilessly labeled “self-hating Jews” for purely ideological reasons.
BDS and Presbyterian Policy:
Actions of Presbyterian General Assemblies and our policies regarding BDS (boycott, divestment and sanctions):
- The 219th General Assembly (2010) called for “the allocation of U.S. military aid funds to be contingent on compliance with [U.S. laws]” and “express its extreme disappointment with the U.S. government that while the State of Israel has been found not to comply with [U.S. statutes], it continues to be the recipient of U.S. military aid.” This policy enabled our PC(USA) Stated Clerk to sign a letter with leaders of 15 Christian denominations sent to Congress calling for the suspension of such aid. (ie, sanctions)
- The 220th General Assembly (2012) voted to boycott all goods manufactured in illegal Israeli settlements.
- The 221st General Assembly (2014) voted to divest from three U.S. companies that profit from “non-peaceful pursuits” in the illegal occupation of Palestine.
The PC(USA) still invests in dozens of companies that do business in Israel but are not profiting from non-peaceful pursuits. The PC(USA) boycotts only products made in Israeli settlements and not products from Israel. In regard to all three above decisions, even while our motivation is to achieve justice in Palestine, the most important witness we make as a church is that we simply do not invest the resources God has given us in ways that violate human rights or add to the oppression and collective punishment of a whole people.
The Presbyterian Church (U.S.A.) is blessed with a system of governance which allows all voices to be heard. Inevitably this means different views and opinions are expressed. Presbyterians have always prided themselves in making sure that our debate is accurately informed and transparent. The same process was followed for these votes as for marriage equality and all other decisions. The military aid overture in 2010 passed with a voice vote, and the boycott overture passed with a 71% majority. The closeness of votes on divestment (three vote margin against in 2012, seven vote margin in favor in 2014) demonstrates that Presbyterians have been seriously divided on the best witness to be made with denominational assets. The best way to combat divisions is to understand the facts, which until recently has not been easy because the Palestinian narrative is just now becoming known.
Through the work of our network, the denomination’s largest mission network (among almost 50 networks), The Presbyterian Church (U.S.A.) partners with both Christian and Muslim Palestinians who are committed to non-violent, peaceful solutions for seemingly intractable problems. IPMN is a true grassroots network, with Presbyterians from all walks of life, and Presbyterian Palestinians who know personally the pain of having been ethnically cleansed from their homes. IPMN has important partnerships with both American and Israeli Jewish peace and justice groups who say as Jews that their call is to stand for justice for oppressed peoples, especially the Palestinians.
Next time you hear charges of anti-Semitism leveled at those who are committed to non-violence, peace and justice for all people in the region, please give deep thought and prayer to the attack and the motivation for such defamation.
The Israel Palestine Mission Network of the PC(USA) invites you to study and to use the following resources exploring the meaning of anti-Semitism and the use of the term in the current discourse surrounding Israel / Palestine and the illegal occupation of the West Bank and Gaza.
This statement, from Palestinians living in historic Palestine and in the diaspora, rejects any form of racism, including anti-Semitism, as incompatible with the struggle for Palestinian rights.
A Moment of Truth: A word of faith, hope and love from the heart of Palestinian suffering, created by Kairos Palestine, is an invaluable document for understanding the occupation.. We direct your attention particularly to section 6.3 which states, "We condemn all forms of racism, whether religious or ethnic, including anti-Semitism and Islamophobia, and we call on you to condemn it and oppose it in all its manifestations."
Saree Makdisi, a professor of English and comparative literature at UCLA, offered these reflections in the L.A. Times on using charges of anti-Semitism to stifle campus debate. We urge you to be aware of the host of challenges faced by activists for Palestinian justice on college campuses, including this disturbing initiative to create a database targeting the most vocal activists among students and faculty, attempting to block them from future employment.
In this piece, Jewish Voice for Peace challenges the current State Department definition of anti-Semitism and explains how criticism of Israel does not equate with anti-Semitism.
In the past two decades, the Hubble Space Telescope has produced thousands of staggering images of the universe — capturing colliding galaxies, collapsing stars, and pillars of cosmic gas and dust with its high-precision cameras. These images have driven many scientific discoveries, and have made their way into popular culture, having been featured on album covers, fashion runways, and as backdrops for sci-fi television episodes.
With Hubble’s advanced capabilities today, it’s hard to recall that the telescope was once gravely threatened. But shortly after its launch in 1990, scientists discovered a flaw that jeopardized Hubble’s entire endeavor. What followed was a political and public backlash against the $1 billion mission — and NASA, the agency that oversaw it.
For the next three years, engineers scrambled to design a mission to repair the telescope in space — an ambitious plan that would result in the most complex Space Shuttle mission ever flown.
“[Hubble] was never meant to be a suspense story,” Jeffrey Hoffman, a member of the original astronaut crew charged with repairing the telescope, said this week at MIT. Nevertheless, at the time, the future of Hubble — and of NASA itself — seemed to hinge on the repair mission.
On Dec. 2, 1993, Hoffman and six other astronauts aboard Space Shuttle Endeavour began an 11-day mission, named STS-61, that involved five spacewalks — the most of any shuttle mission — to restore Hubble’s vision.
This week, Hoffman, now a professor of the practice in MIT’s Department of Aeronautics and Astronautics, was joined by other members of the STS-61 crew in reflecting on Hubble’s rescue mission in an all-day symposium held in MIT’s Bartos Theatre. Talks and panel discussions — often with the air of a warm reunion — explored Hubble’s initial promise; its failure shortly after launch; and the planning, training, and execution of a rescue mission to fix the telescope.
The first inkling of a problem came during a NASA press conference held to present the first image taken by Hubble from space: The image, of a far-off star, appeared fuzzy. Scientists soon discovered a “spherical aberration”: Due to a defect in the manufacturing process, the telescope’s primary mirror had been ground too flat, setting its curvature off by less than the width of a hair.
“The unthinkable had become fact,” said James Crocker, then an optical engineer at NASA.
Once word of the defect spread, Hoffman recalled that NASA and the astronomy community experienced “a maelstrom of public opprobrium,” mainly circling around the same question: “How did you screw up so badly?”
To illustrate the public feeling at the time, John Logsdon, former director of the Space Policy Center at George Washington University, presented editorial cartoons deriding the mission with pictures of lemons in space and images of static, “courtesy of the Amazing Hubble Telescope.” Overall, Logsdon observed, public perception of the problem focused less on the defects in space than on the agency on the ground.
“NASA was very much at risk,” Logsdon said.
Preparing a fix
Following the discovery of Hubble’s defective mirror, engineers at NASA faced immense pressure to fix the problem. Crocker eventually experienced what he called a “eureka moment” in the most unlikely of places: a shower in Munich, where he had traveled to appeal to the European Space Agency for possible solutions. On a break in his hotel room, he was adjusting the showerhead — a European design that extends or retracts to accommodate one’s height — when an idea came to him: Why not outfit Hubble with corrected mirrors built on robotic arms that can extend into the telescope and retract into place, just like an adjustable showerhead?
NASA engineers ran with the idea, building the Wide Field and Planetary Camera 2, or WFPC2, to replace Hubble’s defective mirror. Getting the piano-sized instrument into the satellite required 11 months of training by Hoffman and six other astronauts, who spent more than 230 hours in a water tank, choreographing intricate maneuvers and learning to use more than 150 tools. Meanwhile, engineers tested and retested the instruments to be installed on the telescope.
Frank Cepollina, then NASA’s manager of space servicing capabilities, remembers that at the time there was “great turmoil in checking every socket and bolt.”
A spacewalk to save NASA
All preparations led up to Dec. 2, 1993, when the STS-61 crew launched. On the mission’s third day, the crew used the shuttle’s robotic arm to grab hold of the free-floating telescope, attaching it to the shuttle’s cargo bay, an event that prompted mission commander Dick Covey to announce: “We’ve got a firm handshake with Mr. Hubble’s telescope.”
The next day, Hoffman and payload commander Story Musgrave embarked on the mission’s first spacewalk, during which Hoffman, anchored to the robotic arm, replaced two gyroscopes on the telescope.
Astronauts Kathryn Thornton and Thomas Akers set out on the second spacewalk to replace one of the telescope’s solar panels, which had begun to list. After the astronauts disengaged the panel from the telescope, Hoffman remembers watching the array drift off into space, “like some prehistoric bird floating away — we were mesmerized.”
Hoffman and Musgrave performed the mission’s third spacewalk to swap out Hubble’s defective mirror with the 620-pound WFPC2 — the crux of the mission, and one that saw Hoffman anchored to the robotic arm, with Musgrave free-floating inside the telescope as Hoffman fed tools to him.
“It was a little like working under a car,” recalled Hoffman, who said the procedure was so complex that the shuttle crew had to talk them through each step. The procedure was a success, as NASA’s ground controllers found that the new mirror passed all its initial tests.
The remainder of the mission went largely according to plan, except for one hair-raising moment on the final spacewalk. On his previous outing, Hoffman had noticed that Hubble’s magnetometers, located at the very tip of the telescope, were flaking. To prevent more debris from possibly damaging equipment, pilot Kenneth Bowersox and mission specialist Claude Nicollier fabricated makeshift covers out of insulation to wrap around new magnetometers.
During the fifth and final spacewalk, Hoffman and Musgrave replaced the telescope’s magnetometers with the insulated upgrades, a maneuver that required removing screws and placing them in a bag while removing one instrument. In the process, a screw got away, floating free of the astronauts’ grasp. While seemingly harmless, the 3-millimeter screw had the potential to dent the telescope or the shuttle.
Hoffman, anchored to the shuttle’s arm, reached in vain for the screw, while Nicollier tried moving the arm farther out. But both the arm and the screw were moving at the same speed. In a spur-of-the-moment action, Bowersox reprogrammed the shuttle’s computer to reset the arm’s maximum speed, allowing Hoffman to reach the screw. From then on, the astronauts would refer to the escapade as “the Great Screw Chase.”
Since that first repair mission, astronomers have used Hubble to collect thousands of stunning images of the universe and make countless discoveries, with more than 11,000 published papers based on Hubble images. The telescope has undergone four more servicing missions to replace old instruments and add new capabilities.
Of Hubble’s future, Cepollina said: “As long as the telescope can collect photons, and we can provide next-generation instruments, we should keep truckin’.”
For the astronauts who rescued Hubble, disengagement from the telescope was bittersweet.
“It was a little sad to let the telescope go,” Bowersox recalled. “It was like saying goodbye to a friend. It was a great, magical time.”
BC Hydro is reviewing its water ramping rates on Cheakamus River after reports of fish fry being stranded.
"We're aware of the recent stranding observation made by the public on the Cheakamus River and are looking at how best to address this in both the short and longer term," read an email from BC Hydro spokesperson Tanya Fish.
Ramping is essentially a change in discharge of water levels, in this case, at the hydroelectric facility at the Daisy Lake Dam, which can leave fish fry—small, young fish that are just beginning to emerge from their gravel nest—stranded.
"What ends up happening is that fish can't respond to that big of a change and get out of there quick enough. They're not biologically able to detect a really rapid change in water level," explained Chessy Knight, president of the Squamish River Watershed Society.
BC Hydro's ramping rates were established as part of its Water Use Plan, which was finalized in 2006 and includes requirements around ramping rates and monitoring the impact of flows on fish. BC Hydro said that, while it is "fully compliant" with the rates agreed to 12 years ago, "we understand the need to revisit these rates as they are a higher priority now than they were when the Water Use Plan was first developed," Fish explained.
BC Hydro also said it has undertaken "numerous studies" over the past decades monitoring the impact of ramping on fish in the Cheakamus River, a commitment that was mandated to end last year. "(However), we're continuing a select number of studies where we have identified the need for more data to inform the Cheakamus Water Use Plan Order Review, which is expected to commence in 2020," noted Fish.
One of those studies is set to go ahead this month, but fish advocates have warned that it's not the right time of year.
"When these large ramping rates are happening is the months of May through July, in years where we have a large snowpack and there is more water in Daisy Lake," wrote Sea to Sky Fisheries Roundtable member Dave Brown in an email to BC Hydro that he shared with Pique. "Once we get to August, the ramping rates become less of an issue, because the reservoir is lower at Daisy Lake."
Fry stranding is a difficult phenomenon to study, and there's little data out there specific to Cheakamus River, noted Knight.
"There's been a few studies, but there's nothing conclusive—and ramping can be very hard to study in the field," she explained. "You might go out and see one rapid drop in water level and see some stranded fry, but it really depends on the time of year and what species is there."
Conservationists have urged BC Hydro to take more immediate action to mitigate the effects of ramping on fish in the Cheakamus.
"Right now BC Hydro is doing ramping rates of large amounts of water in a very short time period. This results in fry stranding, which is something we all agree on," Brown said in his email. "If these ramping rates were done more slowly over a longer period of time, then fry stranding could be reduced."
Fisheries and Oceans Canada recommends ramping rates of no more than 2.5 centimetres of water-level change an hour, lower than the rates agreed to as part of the Cheakamus Water Use Plan.
More timely than a fish monitoring study—which would result in more fry being stranded, Knight said—would be an operational analysis to determine if lowering ramping rates at the Cheakamus Dam would place a significant economic burden on the provincial utility.
"If it's not difficult for your plant to do and it doesn't cost you too much money, why not just implement a slower ramping rate?" Knight asked. | <urn:uuid:1d65c1a4-e902-481a-939b-3464cf0037fd> | CC-MAIN-2019-18 | https://m.piquenewsmagazine.com/whistler/bc-hydro-reviewing-ramping-rates-after-fish-fry-stranded-on-cheakamus-river/Content?oid=10295191 | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529898.48/warc/CC-MAIN-20190420160858-20190420182858-00423.warc.gz | en | 0.979707 | 784 | 2.640625 | 3 |
Arthritis in Horses: Understanding & Treating Joint Disease
By Kentucky Equine Research
Have you ever heard a mature relative speak of Uncle Arthur? He usually arrives when a cold snap hits or after an exhausting day of raking leaves or tending to the vegetable garden. No, Uncle Arthur is not a distant relative to whom you’ve never been introduced. Rather, your relation is referring to an ailment that is well-known to nearly everyone: arthritis.
Horses, like humans, often must endure the uncomfortable, creaky movement that is characteristic of joint inflammation, more commonly referred to as simply arthritis. While it usually affects humans in the middle to late years, arthritis can develop in young equine athletes, sometimes keeping them from reaching their athletic potential. Research in arthritis has led to developments in joint management that can keep horses active and sound well into late maturity.
Dem Bones and Surrounding Tissues
To fully comprehend arthritis, joint anatomy must be clearly understood. A joint is the junction between two or more bones. The two primary kinds of joints are distinguished by the type of movement they allow. If a joint permits considerable movement, it is considered a synovial joint or diarthrosis; think knees, hocks, and fetlocks. If, on the other hand, a joint is restrictive and allows little or no relative movement, it is considered a synarthrosis. An example of a synarthrosis is the connection between a rib and the sternum.
The limbs of the horse contain primarily synovial joints. Structurally, a synovial joint is defined by a capsule whose walls consist of dense fibrous connective tissue. A network of blood vessels woven into the tissue provides nourishment for the joint. The walls of the capsule are lined by a thin tissue called the synovial membrane which secretes a thick lubricating fluid into the capsule.
The ends of contacting bones are capped with articular cartilage, a thick pad of tissue that helps absorb the force of movement. Unlike the joint capsule, the articular cartilage has no blood supply, so once damaged, it is unable to heal and rebuild.
Included in the mechanism are muscles, tendons, and ligaments, all of which help stabilize the joint and allow normal movement of a limb.
Photo: Pam MacKenzie
Exercise: a Double-Edged Sword
Everyday movement, such as that accomplished by free grazing or light exercise, keeps joints in fine fettle. Joints are nourished by synovial fluid as the articular cartilage compresses during the weight-bearing phase of a stride. As weight is shifted to other limbs, the compression on those articular cartilages is released. The repeated influx of synovial fluid ensures sound joints.
Demanding exercise, however, places undue stress on the joints and causes them to become inflamed. When associated with other body tissues, inflammation is oftentimes advantageous because it promotes healing. In a joint, however, inflammation is generally not helpful and actually causes serious problems if sufficiently severe. As inflammation progresses within a joint, the nourishing synovial fluid turns thin and watery, a distinct change from its normal syrupy state, and there is no way for cartilage cells to repair the damage.
Joint damage can be categorized into distinct stages: synovitis, degenerative joint disease, and osteoarthritis. The first stage of progressive joint disease is synovitis or inflammation of the synovial membrane. The primary cause of synovitis is overstretching of the synovial membrane during demanding exercise, though conformation, shoeing, and genetic predisposition may play a role. Pain and heat are probably present but swelling due to an increase in joint fluid production is the most obvious sign. This accumulation of fluid is called joint effusion. Windpuffs, pockets of fluid found around the ankles, are an example of joint effusion. They’re a common finding on racing Thoroughbreds and other horses that are worked strenuously day in and day out.
Synovitis can usually be calmed with a layoff, the length of which depends on the severity of the problem, and additional veterinary treatments. If the inflammation is ignored and training continues without a sufficient recovery period, damage to the cartilage surface may begin. This deterioration of the cartilage is termed degenerative joint disease (DJD) and is the next stage of articular breakdown. DJD is characterized by chronic, progressive degeneration of the joint cartilage and is found most frequently in the fetlock and knee, but is also diagnosed regularly in the pastern and hock.
Two primary processes lead to DJD: (1) repeated bouts of synovitis cause the quality of the joint fluid to decrease until it is watery and ineffective in protecting the cartilage; and (2) after recurring and excessive compression of the cartilage such as that associated with speed, landing after a jump, and quick stops, the once-smooth cartilage becomes rough and flattened, losing all ability to withstand compression.
The final stage of joint disease is osteoarthritis and it is distinguishable from DJD by one tell-tale sign: changes in the bones that comprise the joint. These changes severely impede mobility and soundness. The inflammation process goes largely unnoticed in most joints in most horses unless noticeable swelling is present. It is only when lameness occurs that horsemen usually become worried and a call is made to a veterinarian.
Photo: Pam MacKenzie
Diagnosing arthritis in horses usually involves a history of the horse’s workload (as complete as possible), a general physical examination, and a lameness evaluation. The athletic history of a horse often conveys significant information. Expect an in-depth probe by a veterinarian. Is the horse returning to training after a short or long break from exercise? Has the horse been subjected to a particularly difficult training session or competition? How many years has the horse been shown or raced at the current level? Has the horse experienced similar lameness in the past? If so, how long ago, and what was the duration of the unsoundness? The more information the horse owner can offer, the better.
A complete physical examination will yield clues to the whereabouts and degree of pain. After visual examination, the veterinarian will pinpoint the areas from which he or she believes the pain is radiating. A practiced eye is invariably essential in diagnosing lameness, but veterinarians rely on other senses to help locate the problem. Heat, swelling, and pain on palpation are also noted.
If the problem cannot be readily identified, a flexion test may be performed. This procedure involves holding a joint in a fully flexed position for 30 to 60 seconds and then immediately asking the horse to trot away. Long-term flexion usually worsens the pain and may make identification of lameness easier. In addition to flexion tests, a veterinarian might evaluate range of motion in an effort to localize the problem.
Radiographs generally do not uncover changes in soft tissues and cartilage. However, they can reveal certain clues that may help in diagnosing a diseased joint such as narrowing of the space between the bones, a finding indicative of cartilage erosion. There is a place for radiography in certain lameness evaluations. For example, radiographs are effective in showing any bony changes within or around a joint.
Another tool used by veterinarians to localize lameness is regional anaesthesia or nerve blocks. The veterinarian specifically desensitizes the joint that is thought to cause pain. If the nerve block temporarily eliminates any indication of unsoundness, the pain can be attributed to that site.
A thorough history and physical examination often lead veterinarians to the source of joint pain. Once the degree of damage is assessed, a proper course of action can be laid out.
What to Do Next
Various treatments are available for horses suffering from acute (sudden onset) and chronic (recurring) joint problems. A veterinarian is best to determine which treatment or combination of treatments is most effective. Treatments include rest and anti-inflammatory therapies such as cold-hosing and administration of non-steroidal anti-inflammatory drugs.
One avenue designed to support joint health is the provision of oral joint supplements. The three primary ingredients in oral joint supplements are glucosamine, chondroitin sulfate, and hyaluronic acid.
Glucosamine: Without glucosamine, few connective tissues within the body could retain their integrity. Though it is found in multiple soft tissues such as tendons and ligaments, glucosamine is most widely associated with joint health. It is a building block of chondroitin sulfate, a specific molecule that is vital for normal joint function. In a nutshell, glucosamine increases production of molecules that promote joint health and shuts down destructive enzymes that break down cartilage.
Chondroitin sulfate: Manufactured by cartilage-producing cells called chondrocytes, chondroitin sulfate stimulates the establishment of new cartilage within a joint. Molecules of chondroitin sulfate bind with destructive enzymes, rendering them ineffective and thus slowing the disease process.
Hyaluronic acid: In a healthy joint, hyaluronic acid is made by chondrocytes and cells in the synovial membrane. Its lubricating properties are essential for smooth, pain-free movement.
Glucosamine, chondroitin sulfate, and hyaluronic acid are often given separately to horses diagnosed with joint disease. However, some research indicates that a combination of oral glucosamine and chondroitin sulfate provides more relief than giving just one of the preparations.
Choosing the Best Oral Joint Supplement
Several key points should be taken into consideration when choosing an oral joint supplement. Here is a sampling of criteria to consider:
• Find a credible manufacturer. Research the manufacturer carefully. Does the company have a professional website? Does it have a nutritionist or veterinarian on board to whom you can speak directly about the product? Does the company have other products from which to choose? What other riders or companies are affiliated with the company?
• Read the label carefully. The product’s packaging can reveal much about the supplement and some of what must be shared is mandated by law. Two important elements of any supplement label are a guaranteed analysis that lists the minimum amounts of active ingredients and a complete ingredient listing. If you can’t find this information on the packaging, it might be wise to look elsewhere.
A veterinarian well versed in lameness is an incredible asset, and this individual’s opinion should be sought when selecting an oral joint supplement.
Arthritis is virtually unavoidable in horses that have sustained athletic careers. The aches and pains associated with exercise are simply part of the game. With proper veterinary attention, however, arthritis is manageable. Advances in the treatment of arthritis can slow progression of the disease and extend the athletic careers of horses.
Joint Disease Terminology

Any in-depth discussion of a disease invariably yields a sometimes confusing list of multisyllabic words. Here’s a guide to those that will pop up while you’re reading this article or others related to joint disease:
• Arthritis: inflammation of a joint
• Articular cartilage: cartilage that covers the surface of bones forming a synovial joint; also called hyaline cartilage
• Chondrocyte: a cartilage cell
• Diarthrosis: a freely movable joint
• Joint effusion: the accumulation of fluid in a joint
• Radiograph: a permanent image, typically on film, produced by ionizing radiation; frequently called an x-ray
• Synarthrosis: an immovable joint in which the bones are united by intervening fibrous connective tissue
• Synovial fluid: a substance that lubricates joint surfaces and supplies the joint cartilages with nutrients
• Synovial membrane: the connective tissue that lines the cavity of a joint and produces synovial fluid
• Synovitis: inflammation of a synovial membrane
The Latest Research
By Melanie Huggett
Arthritis research is ongoing at universities and research centres around the world. Following are brief synopses of some of the most recent research on treating arthritis in horses.
• Researchers at the Ontario Agricultural College in Guelph, Ontario, found that a dietary nutraceutical composed of mussel, shark cartilage, abalone, and Biota orientalis lipid extract reduced inflammatory responses in horses injected with interleukin-1, a substance that causes inflammation similar to arthritis.
• A clinical trial with 74 horses found that a rosehip powder supplement had an anti-inflammatory effect and improved performance in supplemented horses. The study, led by Kaj Winther, MD, PhD, from the Frederiksberg Hospital at the University of Copenhagen, Denmark, concluded that rose hip preparations are indicated to decrease pain and inflammation, and improve mobility in horses with osteoarthritis.
• Beneficial effects of avocado and soybean unsaponifiable (ASU) extracts, the portion of oil that does not form soap after hydrolysis, were reported in a study by researchers at the Gail Holmes Equine Orthopaedic Research Center at Colorado State University. Led by David Frisbie, DVM, PhD, the study concluded that ASU significantly reduced the severity of joint damage and increased the synthesis of cartilage glycosaminoglycans.
• In a separate study, Frisbie and his colleagues at the Equine Orthopaedic Research Center found that extracorporeal shockwave therapy (ESWT) significantly reduced lameness in horses with osteoarthritis but did not alter the course of the disease.
• Studies with both horses and dogs have found that Omega-3 fatty acids have a direct anti-inflammatory action that could help treat osteoarthritis and lameness. A study with 109 dogs found that the dose of the non-steroidal anti-inflammatory drug (NSAID) carprofen needed to improve lameness was significantly less, and improvement occurred more quickly, in dogs supplemented with Omega-3s. In a study on 16 horses with confirmed arthritis of the joints, Omega-3 supplementation was found to reduce inflammation in affected joints.
• Researchers at the University of Saskatchewan in Saskatoon, Saskatchewan, were able to effectively create a “rapid, sustained reduction in lameness” in horses with osteoarthritis of the distal tarsal (lower hock) joints by fusing the joint with 70% ethyl alcohol.
• Wayne McIlwraith, BVSc, PhD, DSc, FRCVS, Dipl. ACVS, director of the Orthopaedic Research Center at Colorado State University, recently presented the results of a study that found that bone marrow derived stem cells, in conjunction with microfracture, help heal cartilage defects in horses with osteoarthritis better than microfracture alone.
• IRAP-II, a form of autologous conditioned serum (ACS), was recently found to be superior to IRAP-I in treating horses with osteoarthritis. ACS involves pulling blood from a horse and incubating it for 24 hours with glass beads to stimulate production of anti-inflammatory proteins such as IL-1Ra. Frisbie and his colleagues found that both IRAP-I and IRAP-II yielded increased levels of IL-1Ra, but IRAP-II yielded more than twice the amount, which theoretically would magnify the beneficial effects to the horse, as well as producing less tumor necrosis factor-alpha, a pro-inflammatory cytokine.
Main article photo: Robin Duncan Photography - Equine athletes may develop arthritis as their careers advance due to the stress placed on their joints during training and competition.
This article was published in the February 2011 issue of Canadian Horse Journal. | <urn:uuid:4b4ecfb4-151d-4bcf-bd25-277c57233a91> | CC-MAIN-2017-17 | https://www.horsejournals.com/arthritis-horses-understanding-treating-joint-disease | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917126237.56/warc/CC-MAIN-20170423031206-00140-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.937596 | 3,287 | 3.1875 | 3 |
The Influence of Cutting Interval on Alfalfa in the High Andes
Loy V. Crowder, Jaime Vanegas, and Jose Silva
Percentage of flowering was not a reliable guide for alfalfa harvests since plants remain vegetative throughout the year and appearance of flowers is sporadic and erratic. Clipping alfalfa when new shoots were 2 inches high resulted in high quality forage and hay yields equal to those from other cutting frequencies. Many plants had not reached apparent physiological maturity when clipped every 5 or 7 weeks, but many leaves were lost when cutting was delayed longer.
Healthcare wearables development is accelerating and we’re unlocking the insights the devices provide to help resilience against pandemics like COVID-19.
We are in the midst of what, for many of us, is the first global pandemic we have experienced, and it has completely changed the way we live. The virus SARS-CoV-2, which causes the disease named COVID-19 by the World Health Organization (WHO), is invisible, highly infectious and, for about 2% of infected people, deadly. For those of us who are unlucky enough to contract this disease, there are also unknown long-term consequences, with some who recover reporting blood clots or inflammatory complications. It is therefore of the utmost importance to reduce the spread of the disease, not only to save lives but to reduce the financial burden this virus is having on the world economy.
A key strategy employed by public health authorities worldwide is to maintain social distance to reduce the transmission of the virus, coupled with testing to understand the spread of the disease. In addition, a number of companies and researchers worldwide are turning to wearable technologies to measure, monitor and curb the spread of COVID-19.
Wearable technology, or wearables, are a class of electronic device that can be worn on the body, often on, or close to, the skin. Typically, they form part of a system: sensors analyse signals from the body or from the user’s environment and transmit this information to the cloud for data aggregation, detailed analysis and insights. Healthcare wearables form a continuum from simple activity trackers at one end through to complex medical devices at the other. They can perform a myriad of functions, supporting lifestyle improvements through feedback on steps counted and activity levels, as well as measuring factors such as body temperature, heart rate, blood pressure and even blood chemistry.
COVID-19 and wearable technologies
As wearables are able to measure signals from the body, a number of companies and researchers are modifying their existing wearable tech, or developing new wearable devices, to support the detection of COVID-related symptoms and potentially gather large volumes of real-time data to track the prevalence and course of the disease.
Temperature tracking labels
In May 2020, Identiv, Inc, in partnership with NXP Semiconductors, announced the development of a number of Body Temperature Measurement Patches. These devices deploy flexible hybrid electronics, have skin-friendly adhesives, and are intended to be worn on the skin under a person’s upper arm. The first version, which has an embedded temperature sensor and wireless near field communication (NFC), measures temperature instantaneously when an NFC-enabled mobile phone is placed in close proximity to the device. This allows the wearer to get a quick reading of their own temperature, but also allows others, such as those managing public venues, to take a simple, contactless measurement. The second device they have developed is for clinical-grade applications; it enables longer-term body temperature tracking through the integration of a flexible battery. This data can be stored in the cloud via an NFC-enabled mobile phone and used to rapidly detect a rise in a person’s temperature, signalling a potential infection and alerting them and/or public health authorities to take the appropriate next steps, such as isolating that person to prevent further transmission.
The US company Blue Spark Technologies have developed a disposable self-adhesive temperature-tracking label. It is adhered to the skin and measures and records body temperature for up to 72 hours. This data can be wirelessly transmitted in real time over this period through a Bluetooth connection to a mobile phone or Bluetooth hub. It was originally developed to measure the temperature of patients within hospitals, enabling easy aggregation of data across a healthcare setting and reducing the need to take measurements directly from the patient. Already deployed in hospitals worldwide for this purpose, Blue Spark announced recently that they are also using this wearable system to support frontline care workers within healthcare settings. The patch is being trialled at one of Ohio’s health care systems, University Hospitals (UH), and can remotely monitor caregivers’ temperatures through a dashboard with little-to-no direct contact, so clinicians can deliver care to patients without interruption. A spike in temperature can be an indication that the caregiver is suffering from an infection, so they can be isolated or given care as appropriate, reducing the risk of further transmission and enabling the clinician to receive support more rapidly.
These two devices both rely on the under-arm position of the patch to provide a reasonably accurate body temperature; however, measuring core body temperature is difficult to achieve. The Swiss company greenTEG is taking another approach, using a combination of temperature and thermal flux. These measurements are sensitive not only to the skin temperature but also to the heat flow out of the body. By modelling the thermal properties of the body, an accurate core body temperature can be calculated from these two measurements. This measurement method also has significant industrial applications beyond human body measurements.
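greenTEG’s combined measurement can be illustrated with a single-heat-flux model: if the tissue between core and skin is treated as a fixed thermal resistance, core temperature follows from skin temperature plus resistance times outward heat flow. This is a minimal sketch under that assumption, not greenTEG’s actual algorithm; the function name and the resistance value are made up for illustration.

```python
def core_temp_estimate(t_skin_c: float, heat_flux_w_m2: float,
                       tissue_resistance_k_m2_per_w: float = 0.05) -> float:
    """Single-heat-flux estimate: T_core = T_skin + R * q.

    t_skin_c: skin temperature in degrees C
    heat_flux_w_m2: heat flow out of the body in W/m^2
    tissue_resistance_k_m2_per_w: assumed (uncalibrated) tissue thermal resistance
    """
    return t_skin_c + tissue_resistance_k_m2_per_w * heat_flux_w_m2

# 34.5 C at the skin with 50 W/m^2 flowing outward implies a warmer core:
print(round(core_temp_estimate(34.5, 50.0), 1))  # 37.0
```

In a real device the resistance term would be calibrated per sensor and body site, and the model would account for changing ambient conditions.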
The New York (US) based start-up Estimote develops Bluetooth location beacons and have adapted their technology to reduce the spread of COVID-19 within a workplace environment. They have produced a series of wearables that they are calling “Proof of Health” wearables that aim to provide contact tracing to monitor the potential spread of COVID from person to person. Using GPS technology that works indoors coupled with Bluetooth beacons in the building and on each of the wearables, their technology is able to measure where a person is and how close they are with others. They also claim to be able to identify how long people have been in close proximity. If a member of the team is found to be COVID positive it should be a simple matter to look at the data and identify which individuals have been in contact with the infected person and take action as required.
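The core computation behind such proximity tracking, accumulating how long two people spend within a given distance of each other, can be sketched in a few lines. This is a toy illustration with made-up positions and parameters, not Estimote’s API.

```python
from math import hypot

def contact_seconds(track_a, track_b, radius_m=2.0, sample_s=5):
    """Total time two position tracks spend within `radius_m` of each other.

    track_a, track_b: equal-length lists of (x, y) positions in metres,
    sampled every `sample_s` seconds (a stand-in for indoor-location data).
    """
    close = sum(1 for (ax, ay), (bx, by) in zip(track_a, track_b)
                if hypot(ax - bx, ay - by) <= radius_m)
    return close * sample_s

a = [(0, 0), (1, 0), (5, 0), (6, 0)]
b = [(0, 1), (1, 1), (0, 0), (0, 0)]
print(contact_seconds(a, b))  # 10 -> only the first two samples are within 2 m
```

A production system would also have to handle missing samples, positioning error, and the privacy questions of storing such traces.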
A study published in the Lancet in January 2020 gives evidence that identifying deviations in a person’s resting heart rate can be used to indicate infections such as influenza. The study employed the data from Fitbit wrist-worn heart rate sensors for over 200,000 people in the USA and has paved the way for wearable devices to provide an early warning sign of infection. Building on this, researchers at West Virginia University Rockefeller Neuroscience Institute have announced that they have developed an AI-enabled digital platform that can detect COVID symptoms up to three days before they appear as a fever, cough etc. They do this by using data obtained by the Oura ring, a finger-worn wearable fitness and activity tracker. Separately, Scripps Research has started a study entitled “DETECT” to analyse participants’ wearable health data, including heart rates, sleep and activity, to more quickly detect the emergence of viruses such as SARS-CoV-2. They can aggregate data from smartwatches such as the Apple Watch, Fitbit, Amazfit or Garmin watches to enable early illness detection, enhancing the ability to track and respond to disease outbreaks.
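The “deviation from resting heart rate” signal these studies rely on can be sketched as a rolling-baseline check: flag any day whose resting heart rate sits well above the average of the preceding days. The window and threshold below are arbitrary values for illustration, not parameters from the Lancet study.

```python
from statistics import mean, stdev

def flag_elevated_rhr(daily_rhr, window=14, z_threshold=2.0):
    """Return indices of days whose resting heart rate exceeds the rolling
    mean of the previous `window` days by more than `z_threshold` std devs."""
    flagged = []
    for i in range(window, len(daily_rhr)):
        baseline = daily_rhr[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (daily_rhr[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

rhr = [60, 61, 59, 60, 62, 60, 61, 59, 60, 61, 60, 59, 61, 60, 72]
print(flag_elevated_rhr(rhr))  # [14] -> the jump to 72 bpm stands out
```

Real early-warning systems combine several signals (heart rate, sleep, activity, temperature) and use models trained on population data rather than a single z-score rule.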
According to the Financial Times, Liechtenstein is currently testing wearable bracelets on 1 in 20 of its citizens, with the bracelets to be offered to the whole population by the autumn. The bracelet is provided by Ava, a Swiss medtech company whose device was originally designed to accurately monitor fertility cycles in women. As with the solutions described above, data from these bands, such as heart rate, will be used to predict infection as early as possible, enabling action by the authorities if needed.
A very unexpected symptom of COVID-19 has been patients presenting with extremely low blood oxygen levels, even without other symptoms like fever or cough. Measuring oxygen saturation is extremely routine, typically done through a finger clip. Several companies are now looking at the possibility of wearables for continuous oxygen saturation measurement in outpatients and the general public, for early detection of these extraordinarily low levels.
Development of wearables for healthcare
At CPI we have broad expertise in the technology required to develop wearables for healthcare, including those which would be classed as regulated medical devices. We have expertise in printed and flexible hybrid electronics, a method of combining printed components such as sensors, conductive traces or antennas alongside rigid components such as silicon chips and batteries on a flexible substrate. We can even work with textiles to produce smart fabrics and clothing. This enables significant freedom of design, as well as imparting light weight, thinness and flexibility that are not possible with conventional electronics. Sensing is also one of our areas of expertise: we have worked with companies to develop printed pressure and strain sensors as well as some types of biosensors, which, when coupled with a wearable device, could enable significant information to be derived from biomarkers. For optical sensing, we have expertise in developing light-based photonic solutions to enable measurements such as heart rate, blood pressure and blood oxygenation.
Wearables offer a way to give us more information about our bodies and through modern data aggregation techniques coupled with AI, we are learning more about ways to get insights from this data, we at CPI are here to support the development of your next wearable device, or components within that device to enable additional measurements to be taken or make that wearable more lightweight or unobtrusive.
The world of wearables is accelerating at a high rate and their value is really starting to be unlocked. Early detection of COVID-19, coupled with track-and-trace systems, is key to stopping this global pandemic, and if we embrace wearables en masse, as they are doing in Liechtenstein, wearables clearly have a part to play.
CPI ensures that great inventions get the best opportunity to become successfully marketed products or processes. We provide industry-relevant expertise and assets, supporting proof-of-concept and scale-up services for the development of your innovative products and processes.
“We all knew what “viable” meant in Bill’s lexicon. It meant somebody who saw the world as we did. Somebody who would bring credit to our cause. Somebody who, win or lose, would conservatize the Republican party and the country. It meant somebody like Barry Goldwater.” – Neal B. Freeman
First coined by William F. Buckley in 1964, “the Buckley Rule” has been asserted by Conservatives time and time again in favour of one candidate over another. To support “the rightward most viable candidate” has become religious doctrine to the fusionist Right. The Buckley Rule favours didactic measures out of principle as opposed to unprincipled success. In other words, it views power as a means to an end (a means to further a cause), while those opposed to the rule see power as an end in itself.
In the case of 1964 and Goldwater, the Buckley Rule was a testament to affirming the existence of a Conservative wing of the Republican Party. Nominating the traditionalist, anti-Communist, and Libertarian Barry Goldwater for the Presidency was sure to result in a loss for the GOP. But it was also sure to pull the Republican Party to the right. Seeing the utility of having a stronger party in years to come, Buckley and National Review pushed for the nomination of Goldwater.
Buckley himself was no stranger to didactic political efforts. Just a year later, in 1965, he ran for Mayor of New York City against Democrat Abraham Beame and Republican John Lindsay – an impossible race to win – and received only 13% of the vote. Everyone, including Buckley, understood that he had no chance of winning… and that is exactly why he ran.
To oppose someone with whom you disagree – even if that person is in your own party – was an effort championed by Buckley. John Lindsay, the victor of the race, was a liberal Republican cut from 'Rockefeller cloth'; Rockefeller, whom Buckley had opposed in 1964, could be credited as the inspiration for the conception of the Buckley Rule. Lindsay was both exceptionally electable and a Republican. And yet, William F. Buckley was not satisfied with a Mayoral race in which, as he said, there was no discernible difference between the Democratic and Republican candidates. In their 1965 debate, both literally and metaphorically, Beame sat to the left, Lindsay in the centre, and Buckley to the right.
So why does this matter?
In recent years, the Conservative 'cause' in Canada, let's call it, has taken a backseat to the interests of the Conservative Party. Harper, for all of his accomplishments and accolades, pulled Left on the rare occasion in the interest of electability. Take the Harper government's positions on abortion and gay marriage as an example. For better or for worse, there was a refusal on the part of Harper's government to act on issues about which much of its constituency felt strongly; instead, it courted moderates to further its cause.
And in the case of 2017, in the midst of a 14-candidate leadership election, there will be those who push for the most electable candidate to take the reins of the Conservative Party. Michael Chong, for example, is a strong candidate and indisputably electable. He could beat Trudeau in 2019. But, like a true 'Rockefeller', a true 'Lindsay', and a true moderate, Chong leaves much to be desired for Conservatives.
In the first leadership debate, Chong remarked that by living in the Greater Toronto Area, he understands how to win in cities. Well, yes. So did John Lindsay and Nelson Rockefeller: by pulling their party to the Left. It is not as if there is a magical button that one can push to suddenly win left-wing voters without appealing to them. Chong has plans to reform the Conservative Party, likely in the same vein as Patrick Brown, leader of the Progressive Conservative Party of Ontario: by abandoning Blue Tories for Red ones. Of course, Michael Chong has already begun this crusade. In the same debate, he made clear that he favours a carbon tax. Fellow candidate Steven Blaney responded in kind: "a tax is a tax is a tax".
There is an equally valid electability argument to be made for Kevin O'Leary, should he decide to enter the race. If he did, he would instantaneously become the most well-known of the candidates in the race. But O'Leary is no Conservative. A year ago, he had no interest in the Conservative Party. It is only when an opportunity for power arose that O'Leary even bothered to join the party. 'To take on Trudeau the celebrity, why not choose a celebrity of our own?', some will contend.
Because of the Buckley Rule, and because of Maxime Bernier.
Of the 14 candidates, 10 or 11 seem to be saying the same thing. Of these ten, all are either members of the liberal wing of the Conservative Party or, slightly to the right, in the Harper-esque establishment bloc. The outliers – Brad Trost, Pierre Lemieux, and Maxime Bernier. Now, this is not to say that these are the only three ‘Conservative’ candidates of the Conservative Party. For example, Andrew Scheer, Andrew Saxton, Erin O’Toole, and a few others are great candidates, and great public servants. If it came down to it, I would happily vote for any of the six or seven consistent candidates in the running.
What is most impressive regarding Trost, Lemieux, and Bernier is that the former two are unapologetically socially Conservative while Bernier is a staunch government critic. The three seem to form an unspoken Canadian version of the Tea Party. Now, again, that is not to say that they are the only ones worthy of this praise; there are many others in Canadian politics, some of whom are also current contenders for the leadership. However, all three have critiqued their own party very publicly in the past. The anti-establishment streak of Trost, Lemieux, and Bernier is very attractive and worthy of praise.
Despite this, the three candidates are not one and the same; they do have differences. To analogize the three in relation to American politics: Brad Trost is very Ted Cruz-esque; Lemieux shares more with Mike Huckabee than anyone else, I would contend; and Maxime Bernier is a Canadian incarnation of Rand Paul.
Why Bernier over the rest?
In the interest of achieving long-term change within the Conservative Party, and thus Canada, Maxime Bernier seems most apt to carry the torch. Where Trost and Lemieux emphasize tradition, Bernier emphasizes freedom; and on its face, freedom is a more appealing dish. By applying the Buckley Rule to the Conservative Party, Maxime Bernier is the clear Goldwater, and the best possible choice for leader.
“But what if Bernier does not win when O’Leary or Chong could have? Is it worth another Trudeau government?”
In all brutal honesty, I am not convinced that an O'Leary or Chong government would even differ from a Trudeau government. The two have simply not made strong enough cases to convince me otherwise. If there is no difference between the Conservatives and the Liberals, then what is the point? Barry Goldwater lost to Lyndon B. Johnson in 1964, and it was not until 1980 that Ronald Reagan finally won the campaign that Goldwater started. By 1980, the Conservative case had been perfected, their arguments were strong, and their principles untainted.
I am not predicting defeat in 2019 with Bernier. I think he would win. But even if he were to lose, an opposition operating from the political positions that Maxime Bernier represents would be a wall forcing the Liberals to compromise if they wish to govern. A Chong or O'Leary opposition, on the other hand, may be more likely to make compromises rather than critiques, and that is the worst outcome possible. Not to mention that political campaigns start movements, regardless of a win or loss. And a Bernier campaign could kickstart a Conservative movement in Quebec and with young people (see Why Conservatives Should Support Maxime Bernier).
In the coming months of the 2017 Conservative Party Leadership Election, Canadians should remember the Buckley Rule. Modern Conservative Journal will be. | <urn:uuid:df3c2e81-e568-4cb0-96c5-56c6ec44b7ca> | CC-MAIN-2017-51 | https://modernconservativejournal.com/2017/01/08/the-buckley-rule-and-maxime-bernier/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948623785.97/warc/CC-MAIN-20171218200208-20171218222208-00552.warc.gz | en | 0.961535 | 1,772 | 2.71875 | 3 |
The music sets the rhythm, style of play, and energy of the game. Three main styles of song weave together the structure of the capoeira roda and are understood on a subconscious level, creating group cohesion and dynamics. The music of Capoeira creates a sacred space through both the physical act of forming a circle (the roda) and an aural space that is believed to connect to the spirit world.
-Drums (the atabaque or conga of Yoruba candomble)
-Berimbau (whose earlier forms were used in rituals in Africa for speaking with ancestors)
-The term axé (signifies the life force, the invocation of both Afro-Brazilian & Catholic spirituality, and certain semi-ritualized movements used in Capoeira Angola that bring "spiritual protection")
-agogô (Yoruba: agogo, meaning bell): a single or multiple bell now used throughout the world, with origins in traditional Yoruba music and also in the samba baterias (percussion ensembles). The agogô may be the oldest samba instrument and was based on West African Yoruba single or double bells. The agogô has the highest pitch of any of the bateria instruments.
-reco-reco (a notched wooden tube, similar to a güiro)
Berimbaus preside over the roda; their rhythmic combinations suggest variations in movement style between the two players in the roda. The lead berimbau player begins and ends the roda at his discretion, determines who plays next, sets the tempo of the music, and sends a calming effect onto the players to maintain peace within the roda. Music is the heartbeat and Capoeira the body; you can't have one without the other.
Capoeira Rules de Mestre Bimba:
1. Quit Smoking. It is prohibited to smoke during the training.
2. Quit drinking, alcohol is bad for your metabolism.
3. Avoid showing off your progress to your friends outside the roda. Remember, the element of surprise is the best ally in a fight.
4. Avoid conversation while training. You are paying with your time and by observing the other capoeiristas, you will learn more.
5. Always practice the ginga.
6. Practice the fundamental exercises daily.
7. Do not be afraid to get close to your opponent. The closer you are the more you will learn.
8. Keep your body relaxed.
9. It is better to get beat up in the roda than on the streets.
The name "Cais Da Bahia" is a reference to the place where the first blacks from Africa landed. Africans brought with them customs and beliefs, which over time were planted and rooted in Brazilian soil, generating a strong tree. With the help and influence of other traditions, it bore the fruit of a culture with fashion, colors, and flavors unrivaled.
Mestre Bimba 11/23/1899 - 2/5/1974
Manoel dos Reis Machado, commonly called Mestre Bimba (Portuguese pronunciation: [ˈmɛstɾi ˈbĩbɐ]), was a mestre (a master practitioner) of the African-Brazilian martial art of capoeira.
Birth of the regional style
Capoeira regional is established
In 1936, Bimba challenged fighters of any martial art style to test his regional style. He had four matches, fighting against Vítor Benedito Lopes, José Custódio dos Santos ("Zé I") and Américo Ciência. Bimba won all matches.
On June 9, 1937, he earned the state board of education certificate and officially registered the first Capoeira center.
World events have raised pressing questions about psychology as it is practiced all over the globe. The Handbook of International Psychology chronicles the discipline of psychology as it evolves in different regions, in the hope of reducing the isolated, parochial, and ethnocentric nature of the American profession. It surveys the history, methodology, education and training, and the future of psychology in nine distinct regions across six continents. These range from countries with long histories in the field, such as the United States and the United Kingdom, to emerging practices, as in Uganda, Korea, and Spain, the lesser-known philosophies of China, and histories marked by massive social change, as in Poland and Iran. The editors have carefully selected contributors, as well as an editorial board created especially for this project. Each chapter follows a uniform outline, unifying the volume as a whole while allowing for the cultural diversity and status of psychology in each country.
Part III: The Caspian Sea
8. Environmental policy-making for sustainable development of the Caspian Sea area
Genady N. Golubev
The Caspian Sea is exceptional by many standards. It is the largest lake in the world. Moreover, it is a closed lake with very large variations in its water level, caused by natural oscillations of the components that make up its water balance. These variations in water level have had a strong influence on most aspects of economic life, particularly during the past few decades.
The largest river of Europe, the Volga, plays the principal role in the hydrological regime of the Sea. In addition to water, it also brings, as do other rivers that flow into the Caspian, a considerable amount of pollutants, which influence the aquatic ecosystems including the unique population of the few species of sturgeon. The Sea and its shores are rich with mineral resources, including oil, but prospecting and extraction also require effective environmental management.
The objective of this paper is to analyse the interrelation of the natural and socio-economic issues for the sake of regional sustainable development in a very special region of the world.
Morphometry and the principal hydrological features
The Caspian Sea is so large that it really deserves to be called a Sea. Its area is about 400,000 km2, it is 1,200 km long and 170-450 km wide, and its water volume is 80,000 km3. The total length of the shoreline is about 7,000 km. The average depth is 180 m and its maximum depth is 1,025 m. All these data are approximate, because they vary considerably depending on the water level of the Sea.
Morphologically, the Caspian Sea is divided into three main parts, which are more or less equal in area: a very shallow northern part with depths not exceeding 10 m, a middle part with an average depth of 170 m and a maximum depth of 790 m, and the deepest southern part, which has an average depth of 325 m and a maximum depth of 1,025 m (Avakian and Shirokov, 1994). The proportional volumes of the three parts are correspondingly 1/100, 1/3, and 2/3 of the total volume. The salinity of the Caspian Sea water ranges between 0.2 g/litre at the mouth of the Volga to 12-13 g/litre in the central and southern parts.
Because of its relatively small volume and depth, the northern part is the most vulnerable hydrologically and, hence, ecologically and economically. In addition, the shores of the northern part are as flat as its bottom, so the shoreline is poorly defined. The shoreline is very variable, depending on both (a) the longer-term, climate-induced variations in the water level of the Sea as a whole and (b) short-term, local wind action. A typical situation is a rapid rise in sealevel, usually in the cold part of the year, as a result of strong winds, mostly from a southerly direction. In the most catastrophic cases the water level increases by 3.0-4.5 m and, owing to the flat topography, the Sea penetrates far inland, inundating strips 30-50 km wide for a few hundred kilometres along the coast. During the wind-driven surge of 11-13 November 1952 the inundation covered about 17,000 km2. In such cases the damage to settlements, roads, oil installations, etc. is very high.
The latest example of wind-driven catastrophic inundation was reported in the press as this chapter was being prepared. During 1216 March 1995 in Kalmykia, an Autonomous Republic of the Russian Federation situated on the north-western coast of the Caspian Sea, the water level increased up to 3 m. Over 200,000 hectares (2,000 km2) were inundated. Losses of human life were recorded (the exact figure was not given), and 520 houses (home to 3,200 people) were destroyed. About 150,000 sheep were lost.
The Caspian Sea is a closed water body. The main tributary, the Volga, is the largest river of Europe. Its watershed area is 1,360,000 km2 or about 40 per cent of the total for the Caspian, but it brings over 80 per cent of the total surface and underground flow to the Sea. From various points of view, the Volga plays a very important role in the state of the Caspian Sea with regard to its water balance, oscillations in its water level, and its chemical and biological make-up. Through these factors the Volga influences the socioeconomic development of the Sea and the adjacent territories.
The Volga River basin belongs completely to the Russian Federation. It contains about 40 per cent of Russia's population and is responsible for one-third of both the industrial and agricultural production of Russia. Psychologically, the river is viewed as "Mother Volga," the nation's main river. An integrated, sustainable environmental management for the Caspian Sea is impossible without a proper programme of action for the Volga basin. Such a programme would extend international cooperation on the Caspian deep inside Russia to Moscow.
The water balance and water-level variations
The data on the Sea's water balance vary considerably, depending on the time-period being considered and the incompleteness of knowledge. It is not the objective of this paper to go deeply into these issues. As an illustration, however, average data for 1900-1985 are shown in table 8.1. The mean annual deficit of the water balance - 12 km3 - corresponds to the mean annual drop in water level of 3.1 cm. The average water level for the 1900-1985 period was -27.35 m above sealevel (a.s.l.), or 27.35 m below the ocean level.
Most components of the water balance do not need explanation.
Table 8.1 The average water balance of the Caspian Sea, 1900-1985 (km3 per year)
|Precipitation on the Sea's surface||+74|
|Evaporation from the Sea's surface||-370|
|Outflow to the Bay of Kara-Bogaz-Gol||-14|
Source: adjusted data from Kosarev and Makarova (1988).
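The components of the balance can be tied together with a quick consistency check. The sketch below is not from the source: it infers the river-plus-underground inflow term from the stated mean annual deficit of 12 km3, and converts that deficit into the annual drop in level using the approximate sea area of 400,000 km2 quoted earlier (the exact area varies with the level, which is why the text's figure is 3.1 cm rather than 3.0).

```python
# Back-of-the-envelope check of the Caspian water balance (1900-1985 averages).
# All input values are taken from the chapter; the inflow term is inferred.

SEA_AREA_KM2 = 400_000          # approximate surface area of the Caspian Sea

precipitation = +74             # km3/year, on the Sea's surface
evaporation   = -370            # km3/year, from the Sea's surface
outflow_kbg   = -14             # km3/year, to the Bay of Kara-Bogaz-Gol
deficit       = -12             # km3/year, stated mean annual balance

# The inflow term that closes the balance (river + underground flow):
inflow = deficit - (precipitation + evaporation + outflow_kbg)
print(f"implied river + underground inflow: {inflow:+} km3/year")  # +298

# A deficit of 12 km3 spread over the Sea's surface gives the annual
# drop in level; 1 km3 over 1 km2 is 1 km, i.e. 1e5 cm:
drop_cm = abs(deficit) / SEA_AREA_KM2 * 1e5
print(f"annual level drop: {drop_cm:.1f} cm")  # 3.0
```

The implied inflow of roughly 300 km3/year is consistent with the statement that the Volga alone supplies over 80 per cent of the total surface and underground flow to the Sea.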
The Kara-Bogaz-Gol is a large bay situated on the eastern side of the Caspian Sea. Because of its elevation, there is a constant flow in one direction, from the Sea to the bay, with subsequent evaporation of water from the bay.
The variations in the components of the Caspian water balance are considerable; this leads to large changes in water level. The main factor in variations in the water balance is changes in river runoff, particularly that of the Volga.
During the twentieth century, the main periods of change in the Caspian Sea's water regime were as follows (Kuksa, 1994):
|1900-1929:||Relative stability of the water balance. The water level oscillated slightly around 26.2 m below sea level.|
|1930-1941:||A very large deficit in the water balance of 62 km3, mainly because of the decrease in river runoff (mostly that of the Volga). The water deficit led to a sharp drop in the water level of 1.8 m.|
|1942-1977:||A modest deficit in the water balance mainly because of a decrease in river runoff. During this period there was a drop in the water level of an additional 1.3 m.|
|1978-present:||A positive water balance. The water level has been increasing from its lowest point of -29.0 m in 1977. By 1994 it had risen to about -26.5 m, an increase in this period of 2.5 m.|
For not very clear reasons, researchers in the 1970s and earlier were under the impression that water withdrawals in the Caspian basin, mainly for irrigation and to fill the large, newly constructed water reservoirs, played a decisive role in variations in the water balance. In fact, variations of natural origin explain about 90 per cent of all variations (Golytsyn and Panin, 1989). Water withdrawals in the Sea basin amount to 40-50 km3/year, about half of which are from the Volga basin. Without human interference, the Sea's level might have been about 1.5 m higher than it is now (Kuksa, 1994).
The continuous, prolonged drop in the level of the Caspian caused a panic that reached its height in the 1970s. A number of long-term water-level projections were published, using different approaches to forecasting. Some were based on analysis of inflow to and evaporation from the Sea. They were not successful, because the behaviour of these factors is close to that of "white noise." Attempts were made to base projections on the index of solar radiation (the so-called Wolf's numbers), but they proved to be very contradictory. Forecasts based on indices of atmospheric circulation also provided unstable results. The only seemingly reasonable basis for projections was the forecast of water withdrawals, and this approach led to the conclusion that the level of the Caspian Sea would continue to fall (Shiklomanov, 1979). The common opinion that the level of the Caspian Sea would continue to drop had been strongly reinforced by the similar sharp drop in the level of the Aral Sea just a few hundred kilometres to the east of the Caspian Sea.
Very drastic and very costly measures were considered to maintain the level of the Caspian. Projects were proposed to bring large amounts of water from the north (e.g. Siberian rivers) to the south of the country (Golubev and Biswas, 1979, 1985). If they had been carried out, they would have had unforeseen and costly consequences.
In the 1980s, the situation changed completely. The Caspian water level continued to rise. Since all kinds of forecasts had indicated a continuing decline in sealevel, this can serve as an example of a collective miscalculation by many very good water experts. The sealevel has continued to rise in the 1990s, generating worries for the future; the recent rises, with the inundation and destruction they bring, have already had a major effect on the economy of locales around the Sea's shore.
The situation of oscillations in the level of the Caspian Sea is typical of closed lakes. It is typical not only from the hydrometeorological point of view, but from the point of view of economic impacts as well. The variations in sealevel cause uncertainty over time in economic activities. The interest groups involved, including governments, have to develop a long-term strategy for the management of the region. Thus, it is important to determine the expected upper and lower extremes with a reasonable probability of occurrence.
The history of variations in the level of the Caspian Sea (Klige, 1992) provides useful insights into this issue. During the period of instrumental observations (fig. 8.1) from 1837 on, the water level varied between -25 m and -29 m a.s.l., with an average of -27 m. From the sixth century B.C. to the present, the sealevel ranged from -20 m to -34 m, a variation of 14 m (fig. 8.2). The average level, however, was the same: -27 m. During the Holocene (the past 10,000-11,000 years), the sealevel ranged from -9 m to -34 m (fig. 8.3), a variation of 25 m. The mean sealevel was -25 m a.s.l. (Note that the curves in these figures are not completely consistent, owing to differences of methodology and measurement.)
Fig. 8.1 Variations in the water level of the Caspian Sea according to instrumental observations, 1837-2000 (Source: Klige, 1992)
The rates of water-level change are also important. The typical rate for pronounced changes is about 150 cm per 10 years; this happened twice in the twentieth century. Over longer periods of time, the typical figure for sharp changes is about 10 m over a period of 1,000 years. If one is to believe the data in figure 8.3, the extreme rate of change is about 14 m over 300 years; or a 14 m increase and then a 14 m drop over 700-800 years. Thus, sharp variations in the level of the Caspian Sea are the most characteristic feature of its regime at time-scales of tens and hundreds of years. Economic development strategies must take this into consideration.
The unsuccessful experiences with forecasting Caspian Sea behaviour indicate that, given the present-day level of scientific understanding, reliable forecasts cannot be expected. One has to plan on the basis of expectations of quasi-cyclical oscillations in sealevel, as has happened in the past. Most researchers believe that in the next decade the sealevel will reach -25 m. In the longer term, variations in sealevel are expected to range between -20 m and -29 m.
Fig. 8.2 Variations in the water level of the Caspian Sea during historic time (sixth century B.C. to the present) (Source: Klige, 1992)
Fig. 8.3 Variations in the water level of the Caspian Sea during the Holocene (Source: Klige, 1992)
During the prolonged drop in sealevel between 1930 and 1977, when it was believed that the trend would continue, economic planning considered the low sealevel. New settlements or roads, ports, oil installations, and so forth were built on the assumption of a sealevel of -28 m. Now, however, with the sealevel approaching -26 m, economic damage in each of the riparian countries has been enormous.
Owing to the relatively rapid rise in sealevel, the Caspian coastline is currently in a state of transition. In general, the change from the retreating phase of the Caspian to the advancing phase has led to a transition from predominantly accumulating processes along the shore to a prevalence of abrasion processes. On formerly accumulating shores, erosion processes have begun and continue in many places. In quite a number of areas erosion has been catastrophic. Cliffs used to be separated from the Sea by a wide beach. Now, the cliffs are subjected to wave action, and the eroded soils have accumulated on the former beaches. Many houses, apartment buildings, hotels, and other structures constructed in the 1930s to 1970s close to the cliffs are now in danger or are in the process of being destroyed. Experience has shown that construction of any kind, except ports, should be at levels above -23 m.
The economic impacts on the Caspian states of the water-level variations
The present situation
On the flat territory of the northern and north-western coast, which belongs to the Russian Federation, even small increments in the water level mean large losses of land. If the water level reaches -25 m, 16,500 km2 will be lost, of which 10,000 km2 would be inundated and 6,500 km2 waterlogged. This land has oil and gas wells, roads, irrigated and other arable land, etc. At -25 m, 114 human settlements would be inundated, with a total population of 100,000. The frequency and magnitude of the floods caused by wind action will increase. The current strategy in the Russian part of the Caspian coast is to plan for a water level between -26 and -25 m, keeping in mind wind-caused floods up to -23 to -22 m. Construction of a protecting dike with a road along the top is envisaged for most of the north-western coast. In addition, special engineering action is foreseen to protect certain towns and the railway going north-south along the coast. This railway is the only one leading from the centre of the country to the south that does not cross the zone of the recent military conflict and political instability in the northern Caucasus.
Information on damage to the territories of the riparian countries other than Russia is scanty. The north-eastern shoreline belonging to Kazakhstan is also extremely flat. Wind-driven waves cause floods, which are the biggest nuisance. The height of these floods reaches 2.3-2.8 m, with inundation inland up to 30-40 km (Kuksa, 1994). During the last quarter of the twentieth century there have been 10 floods like this. During wind-induced flooding, behind the flooding wave (that is, towards the Sea) an area of low sealevel is formed, up to 3 m below the average within a band 10-15 km wide. Western Kazakhstan is rich in oil and gas resources. A sealevel rise and the associated increase in the frequency of wind-wave inundations are very serious obstacles to further development of the oil and gas industry.
In Turkmenistan the increase in sealevel has created some problems as well. The most serious situation is around the town of Cheleken, situated on the peninsula of the same name. During the days of relatively high sealevels before 1930, Cheleken was an island. Then, with the drop in sealevel, it became a peninsula. Now, it is turning once again into an island. The dike that protects the town has been destroyed by waves and dozens of apartment buildings are under water, along with two adjacent settlements. Oil and gas pipelines, the main road leading inland, and port installations have been damaged; drilling rigs and power supply lines are surrounded by water. Sewage treatment facilities in the area and, hence, the ecology of the Sea are endangered. In some places sea water has penetrated inland by 40 km (Kuksa, 1994).
A unique feature on the eastern shore of the Caspian Sea is the Bay of Kara-Bogaz-Gol, which belongs to Turkmenistan. In 1980 the area of the Bay was 9,500 km2. The water level in Kara-Bogaz-Gol is a few metres below that of the Sea, and there is a constant flux of water into the bay. At the beginning of the twentieth century, when the water level was about -26 m, the flux to the bay was about 20 km3 a year. The bay served as a large evaporation pan. Water evaporated in the bay, leaving a brine that was very rich in valuable chemical elements and salts. By 1980, the brine contained 270-290 g of salt per litre. The total volume of the brine was 20-22 km3 and its average depth was 2.1 m. The total amount of dissolved salts was 6 billion metric tons (Bortnik, 1991), supporting a productive chemical industry.
In 1977-1978, however, with the water level close to -29 m, the discharge of water to the bay was only 5-7 km3. To slow down the drop in the level of the Caspian Sea, a decision was made in 1978 to cut off Kara-Bogaz-Gol from the rest of the Caspian. This was accomplished by March 1980, after the sealevel had already begun to increase. The bay stayed completely cut off from the Sea for four and a half years, during which about 50 km3 of Caspian water had been saved. This corresponded to a 12-14 cm rise in the level of the entire Sea. However, by that time it was no longer needed. By the first half of 1984 the valuable brine had dried up at the surface of the bay and much of it had crystallized and settled on the bay's bottom. A viable chemical industry had died. It was then decided to restore the connection between the bay and the Sea. Now, a new, much smaller brine basin is being formed inside the bay close to the strait. The current status of the chemical industry is not known. The problem, which had been created by the Soviet Union, is now in the hands of the new state of Turkmenistan.
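Two of the figures quoted above can be cross-checked with simple arithmetic. This sketch is not from the source; the mid-range brine values and the approximate 400,000 km2 sea area are assumptions taken from numbers quoted elsewhere in the chapter.

```python
# (1) Total dissolved salt in the Kara-Bogaz-Gol brine before 1980:
#     20-22 km3 of brine at 270-290 g per litre (mid-range values used).
brine_km3 = 21
salinity_g_per_l = 280
litres = brine_km3 * 1e12                        # 1 km3 = 1e12 litres
salt_tonnes = litres * salinity_g_per_l / 1e6    # grams -> metric tons
print(f"salt: {salt_tonnes / 1e9:.2f} billion tons")  # 5.88, i.e. the ~6 quoted

# (2) The ~50 km3 of Caspian water retained while the bay was cut off,
#     expressed as a rise of the whole Sea (~400,000 km2):
saved_km3 = 50
rise_cm = saved_km3 / 400_000 * 1e5              # km -> cm
print(f"equivalent sealevel rise: {rise_cm:.1f} cm")  # 12.5, within 12-14
```

Both results agree with the chapter's figures of about 6 billion tons of salt and a 12-14 cm equivalent rise in sealevel.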
In Iran, the impacts on its flat coastal landscape have also been considerable. Protecting barriers of 8.5 km have been built, and an additional 27 km are needed (Mojtahed-Zadeh, chap. 9 in this volume).
In Azerbaijan, the Lenkoran Lowland is a continuation of the lowlands of Iran. In the town of Lenkoran at least 500 houses have been destroyed and 800 hectares of fertile land have been lost. The protected nature area of Kizil-Agach, a wetland convenient for wintering a great variety of migratory birds, is now almost completely under water.
The need for international cooperation
This brief review of the damage associated with the rise in the level of the Caspian Sea brings us to a very important conclusion: stabilization of the level of the Caspian Sea is in the interests of all countries surrounding the sea. This might provide a basis for international cooperation with regard to a lot of give-and-take issues. Obviously, a total or partial stabilization of the sealevel is beyond human means, but some modest degree of control is possible, as the Kara-Bogaz-Gol experience has demonstrated. Another possibility would be to use the flat territories of the north-eastern Caspian as evaporation pans; they had in fact been working that way before the sealevel dropped in the 1930s.
Theoretically, it is also possible to control the sealevel by regulating water consumption in the basin, mainly in the Volga River basin. However, this would involve a very complex political problem: the Volga and its basin belong to one country, the Russian Federation, while the Caspian Sea belongs to five. Moreover, the portion of the shoreline belonging to Russia is modest. Management of an international lake (or sea) by means of action in a large but national river would not be a trivial diplomatic issue.
Another option would be large water transfers from neighbouring northern basins. About 10 years ago such proposals were sharply (and justly) criticized by the environmental movement. Neither the present political climate nor current levels of science and technology are yet good enough to reconsider such projects.
Developing a common strategy for sustainable economic activity on the Caspian Sea (and its shores) under conditions of drastic changes in the sealevel is a very good subject for negotiation and cooperation. It is not, however, a trivial subject; international cooperation is not just desirable but absolutely necessary.
Other development issues requiring international cooperation
Other important development issues for the Caspian Sea require international cooperation. Two are briefly mentioned here: the management of marine biological resources and the management of mineral resources in the seabed, primarily oil and gas.
Marine biological resources
The northern part of the Caspian Sea is of very high biological productivity. Primary biological production amounts to 23 million metric tons a year. In addition, the rivers (primarily the Volga) carry about 20 million tons of organic matter a year from the basin (Katunin et al., 1990, cited in Kuksa, 1994). Therefore, the importance of the Caspian Sea for fisheries is high. During 1976-1981, the average annual fish catch, mostly from the northern Caspian, was about 400,000 tons. The Caspian is a unique body of water containing about 90 per cent of the world population of sturgeon species. Unfortunately, the share of sturgeon in the total catch is declining, being about half what it was during the first decade of the twentieth century. The main causes are the construction of dams on rivers, which cut off the main spawning grounds, increased water pollution, and the reduction of streamflow due to withdrawals for irrigation. Sustainable maintenance of the unique Caspian ecosystem is clearly one of the priority actions to be pursued through cooperation by all five Caspian nations.
Mineral resources of the seabed

One of the very first oil fields to be exploited was the one around Baku, the largest Caspian city and the capital of Azerbaijan. Today, oil and gas fields line the shores of the Sea. The fields extend into the Sea itself, and there is considerable experience, mainly close to Baku, in extracting oil from the Sea's bottom.
After the collapse of the Soviet Union, a problem emerged of how to use natural resources from the bottom of a large international lake (or sea). No less serious a problem is the proper environmental management of the Sea in the course of oil and gas prospecting and extraction from the seabed. One of the primary legal issues is to define what the Caspian is - a sea or a lake - because the two are treated differently under international law.
The list of issues related to the sustainable development of the Caspian Sea and its coastline addressed here is not exhaustive. A first step in the international cooperation process should be to define the priority interests of each of the riparian Caspian countries.
Nature must be respected. This is particularly true of the Caspian Sea region, a special case of closely integrated natural, political, environmental, social, and economic issues. It is in the interests of all branches of the economy to learn how to move along the road of sustainable development, given the very large variations in the sealevel. This will be impossible, however, without effective international cooperation. Broadly speaking, effective management of the Caspian Sea and its resources cannot be achieved without concerted action by all five riparian countries. Only a holistic approach at the international level can make economic development of the region truly sustainable.
The author expresses his very deep appreciation to Prof. Rudolf K. Klige for providing the graphs of the variations in the level of the Caspian Sea (figs. 8.1, 8.2, and 8.3).
Avakian, A. B. and V. M. Shirokov. 1994. Rational Use and Protection of Water Resources. Ekaterinburg: Publ. House "Victor" (in Russian).
Bortnik, V. N. 1991. "The water balance of the Bay of Kara-Bogaz-Gol under natural and controlled conditions." Trudy GOIN, no. 183, pp. 3-18 (in Russian).
Golubev, G. N. and A. K. Biswas (eds.). 1979. Interregional Water Transfer: Projects and Problems. Oxford: Pergamon Press.
Golubev, G. N. and A. K. Biswas (eds.). 1985. Large-Scale Water Transfers: Emerging Environmental and Social Experiences. Oxford: Tycooly Publishing, for UNEP.
Golytsyn, G. S. and G. N. Panin. 1989. "Once more on the water level changes of the Caspian Sea." Vestnik Akademii Nauk SSSR, no. 9, pp. 59-63 (in Russian).
Katunin, D. N., A. G. Ardabieva, L. N. Dubovskaya, and N. V. Ivanova. 1990. "Primary productivity processes in northern Caspian under anthropogenic impact." Paper presented at the 8th All-Union Conference on Applied Oceanology, Leningrad, 15-19 October (in Russian).
Klige, R. K. 1992. "Changes in the water regime of the Caspian Sea." GeoJournal, July, pp. 299-307.
Kosarev, A. N. and R. A. Makarova. 1988. "On the changes in the Caspian Sea water level and the possibility of forecasting it." Vestnik Mosk. Universiteta, Geographia, no. 1, pp. 21-26 (in Russian).
Kuksa, V. I. 1994. Southern Seas (Aral, Caspian, Azov and Black) under Anthropogenic Stress. St. Petersburg: Hydrometeoizdat (in Russian).
Shiklomanov, I. A. 1979. Anthropogenic Changes in River Run-Off. Leningrad: Hydrometeoizdat (in Russian).
9. Iranian perspectives on the Caspian Sea and Central Asia
Iran's northern geopolitical interests
The issue of Lake Hamun and the Hirmand River
The decade of the 1990s began with tremendous changes in the global political system. These profound changes prepared the framework for an entirely new set of geopolitical circumstances for the twenty-first century. From the point of view of political geography, 1991 was an outstanding year, in the sense that it was the year during which two major events occurred that highlighted the rapid rate of change in the global system. The first was the Kuwait crisis, which triggered an almost universal reaction. This, in turn, gave birth to the concept of "international community" to replace the term "free world" in the dying days of the communist bloc. The second event was the collapse of the geostrategic structure of the Warsaw Pact, which not only led to the break-up of communist states such as the Soviet Union, Yugoslavia, and Czechoslovakia, but also brought down the bipolar system that had evolved in the wake of World War II. These developments accelerated the globalization of the interests and aspirations of many nations, further intensifying political and economic competition worldwide.
These political equivalents of a global earthquake shook the global political system, with staggering regional results, especially for the area of our particular concern extending from central Europe to the Pacific Ocean, and in the area known as the Middle East. Regional issues tend to dominate an individual nation's foreign policy considerations and interests. This, in turn, is the basis on which the globalization of interests has been gradually developing.
The new geopolitical realities have fundamentally changed the balance of forces in the international community. Global thinkers proposed visions of what they perceived could be a New World Order: (a) a unipolar system with the United States at the top of the pyramid of the global structure playing the role of the "global gendarme," (b) a clash of civilizations, and (c) the beginning of a multipolar economically oriented global system (Mojtahed-Zadeh, 1992). The recent demise of the ideologically oriented bipolar world is evidence of the changing geopolitical structure.
The end of the Cold War was marked by an unprecedented intensification of economic competition among North America, Western Europe, and Pacific Rim countries. The economic successes of the European Union encouraged other economic powers to form regional economic groupings of their own. For example, the United States joined with Canada and Mexico to create the North American Free Trade Agreement. Countries in South-East Asia had already formed the Association of South-East Asian Nations.
The emergence of these regional economic groupings as giants presents a picture of how the changing world order is shaping up on the brink of the twenty-first century. Although the "paper" successor to the Soviet Union, the Commonwealth of Independent States (CIS), with both Slavic and Islamic members, may not survive in its present form, the possibility exists that increased rivalries with, as well as encouragement from, other geostrategic regions will result in the formation of a more realistic grouping between Russia and, for example, some of the nations of Eastern Europe. However, today most East European nations strive to join NATO and the European Union. In Asia, China's expanding economy, together with its reunification with Hong Kong in 1997 and a wider economic grouping with other countries in the region, will result in the formation of yet another regional economic giant.
Other regional economic arrangements will be the subject of change and modification in terms of goals, structure, and geographical scope. The Economic Cooperation Organization (ECO) is one such arrangement. This grouping includes Iran, Turkey, Pakistan, Azerbaijan, Kazakhstan, Turkmenistan, Uzbekistan, Tajikistan, Kyrgyzstan, and Afghanistan. As a regional organization, it has never functioned seriously and needs fundamental changes in terms of its structural shape and its regional and global aspirations before being able to function in the new geopolitical environment. As one news report noted, "ECO officials boast of the region's potential, 300 million people with rich natural resources. But it will be a huge task to make it anything like a real common market" (The Economist, 2 December 1991, p. 42).
In sum, with the demise of communism, ideological rivalries in the global system have increasingly been replaced by economic competition. What was once described as the capitalist economy has become the prevailing global economic system. Increased global exchange, to be further boosted by the World Trade Organization as the successor to the General Agreement on Tariffs and Trade, has undermined many aspects of the economic sovereignty of nation-states.
In the emerging global political system Iran is uniquely situated as a land-bridge connecting two very important regions - the Caspian Central Asia region and the Persian Gulf region. This geopolitical position has had an immense influence on Iran's global and regional policies as well as on the policies of other powers toward these two regions. Iranian policy makers, however, do not appear to have formulated, as yet, a clearly defined strategy for maximizing the influence of Iran's unique geographical position between two of the most important areas of energy deposits on earth. Iran's evolving strategies, still somewhat vague, have not yet brought home to the international community the message that Iran's territory is geographically and economically the most logical and most sensible route to pipe oil and gas from the Caspian and Central Asian regions to the high seas by way of the Persian Gulf and the Gulf of Oman. This is especially true if one considers the export of oil to Japan and to other major oil consumers in the Far East. Full realization of this position is bound to lead to a substantial modification of Iran's political outlook as well as the modification of the reactions of others in response to Iranian policies.
Iran's new geostrategic position has led it to identify two major regions of direct interest: one to the north and one to the south. This paper presents an overview of Iran's northern geopolitical interests in the Caucasus, the Caspian Sea region, and Central Asia. It also includes a brief discussion of Iran's eastern hydropolitics: the case of Lake Hamun and the Hirmand River on the southern edge of Central Asia.
Methadone was developed in Germany in the 1930s in an attempt to find a reliable source of opiates.
It must be one of the most widely researched pharmaceutical products, but it remains controversial.
Patients will often be wary of methadone because it is addictive, saying that they 'don't want to develop another habit', but it is really just a long-acting form of heroin.
As a substitute for heroin, methadone has some major advantages.
First, methadone only has to be taken once a day; its half-life is 24 hours.
Second, it is orally active. Most oral opiates undergo extensive first-pass metabolism in the liver, so relatively little is absorbed into the bloodstream; with methadone, by contrast, 40-90% of an oral dose reaches the bloodstream, compared with 20-40% of oral morphine. This means patients do not have to smoke or inject heroin to achieve high blood levels, thereby avoiding the attendant health risks, such as chronic lung disease, blood-borne viruses, DVT or bacterial endocarditis.
Third, methadone is clean - it is a pharmaceutical drug. Street heroin is known as 'brown' because of the adulterants that are mixed with it, whereas pure diamorphine is white. Recently, cases of tetanus and anthrax have been seen in heroin injectors, as a result of adulteration.
Most heroin-dependent patients want to stop using because it has taken over their lives.
They have to use three or four times a day. They have to find the money to buy drugs, perhaps by shoplifting or sex work. If they attempt to stop using heroin ('going cold turkey'), they will experience symptoms such as bone and muscle pain, stomach cramps, diarrhoea, chills and sweats, and palpitations, their eyes and nose will run and they will vomit.
Risks of heroin include lung disease and DVT
Taking a drug history
Substituting methadone for heroin allows patients to make changes to their lifestyle. However, methadone is an opiate and needs to be prescribed with care. The Orange Book1 provides clear advice on prescribing for substance misusers.
Before methadone is prescribed, the patient needs to be carefully assessed. A drug history is important, to find out what and how they are using, how often, how much, for how long and what happens if they do not use.
If patients are addicted to a drug, they will take it daily with increasing tolerance and experience withdrawal symptoms if they stop. The drug history needs to include all drugs, illicit and legal, and alcohol. A more general history, such as past treatments, blood-borne virus screening and whether the patient is pregnant, is also important.
Examination, such as looking at injection sites, may also be appropriate.
On-site testing for illicit drugs before prescribing is critical. This could include all drugs that may be abused, but must include a test specific for heroin and methadone (and buprenorphine if appropriate).
The test could be an oral fluid swab or a urine test, and should be done as close to the time of prescribing as possible (I would suggest within 24 hours of issuing a prescription). Confirmatory testing by a pathology laboratory should also be undertaken.
If the patient is to be prescribed methadone or buprenorphine, their test specimens should consistently show heroin.
The RCGP has produced guidance about prescribing methadone and buprenorphine.2 If in doubt, seek advice from an experienced practitioner or the local drug and alcohol team. Do not prescribe in haste.
Patients do not die from opiate withdrawal, but they do die from an overdose of methadone.
Start on a low dose and increase it slowly. The Orange Book advises that the starting dose of methadone should be 30mg or less, especially if the patient has low tolerance. The dose can then be increased by up to a maximum of 30mg in a week.
However, methadone has a long half-life and a steady plasma level takes five days to achieve. If you increase the dose more frequently, the methadone may accumulate and result in an overdose.
The optimal dose of methadone is 60-120mg, preferably above 90mg. This cannot be achieved quickly.
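Both the five-day steady-state figure and the slow pace of titration follow from simple arithmetic. The sketch below assumes a 24-hour half-life and ideal first-order kinetics; it is a rough illustration of the reasoning, not a dosing tool:

```python
# Sketch of why methadone accumulates slowly, assuming a 24-hour half-life
# and simple first-order kinetics. Illustrative only - not a dosing calculator.

def fraction_of_steady_state(days, half_life_days=1.0):
    """Fraction of the eventual steady-state plasma level reached after
    `days` of once-daily dosing: each half-life halves the remaining gap."""
    return 1.0 - 0.5 ** (days / half_life_days)

for day in (1, 2, 3, 4, 5):
    print(f"Day {day}: {fraction_of_steady_state(day):.0%} of steady state")
# After ~5 half-lives (5 days) the level is ~97% of steady state, which is
# why doses increased more frequently than this can silently accumulate.

# Titration timeline: starting at 30mg and adding at most 30mg per week,
# the minimum time to reach the 90mg lower end of the preferred range is:
start_mg, target_mg, max_weekly_increase_mg = 30, 90, 30
weeks_needed = (target_mg - start_mg) / max_weekly_increase_mg
print(f"At least {weeks_needed:.0f} weeks to titrate from {start_mg}mg to {target_mg}mg")
```

Under these assumptions, each dose change takes roughly a week to show its full effect, and even the fastest safe titration needs a minimum of two weeks to move from 30mg to 90mg.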
Only prescribe oral methadone mixture, 1mg per 1ml. There are stronger concentrations, but these are more likely to cause confusion in prescribing and dispensing, and carry a much greater risk if passed on to another person by the patient.
All patients should be started on daily supervised consumption. This is not punitive, but ensures the patient takes the correct dose every day.
This allows the patient to establish a different pattern of behaviour and the pharmacist can also decline to dispense the methadone if the patient is intoxicated on other drugs.
Patients can be taken off supervised consumption when they are stable on their methadone and not using other drugs or alcohol. This will require regular drug screening to show they are taking the methadone and not taking other drugs.
If the patient relapses into using other drugs or alcohol, they can be returned to supervised consumption to ensure their safety.
Change in lifestyle
Methadone is only a pharmaceutical product. Patients also need to be motivated to change their lifestyle. This takes a lot of work and may mean losing drug-using 'friends', moving house, finding a job and making contact with family. This requires appropriate counselling, encouraging the patient to change.
One study of drug users in Edinburgh found that some continued to use heroin a couple of times a week for many years, despite starting on methadone.3 The methadone keeps the patient well, but addiction to heroin is a very hard habit to break.
The division between methadone maintenance and detoxification is therefore a false boundary.
The aim is to stop the patient using illicit drugs which cause harm.
By breaking the cycle of drug misuse, the patient is enabled to change their lifestyle.
If patients are still using other drugs while on methadone, detoxifying from the methadone is likely to push them back into regular heroin use.
If you prescribe a higher dose of methadone, it is more likely they will stop using other drugs 'on top'. The patient can then decide if and when to stop the methadone. Likewise, detoxification does not mean recovery, because premature detoxification may lead to relapse.
The National Treatment Agency for Substance Misuse states: 'Recovery is a broader and more complex journey that incorporates overcoming dependence, reducing risk-taking behaviour and offending, improving health, functioning as a productive member of society and becoming personally fulfilled.'4
- Dr Young is a GP and London regional lead for the RCGP Substance Misuse and Associated Health Unit
- The RCGP (www.rcgp.org.uk) runs a course in the management of drug misuse in primary care
1. DH. Drug Misuse and Dependence: UK Guidelines on Clinical Management. London, DH, 2007.
2. RCGP. Guidance for the use of substitute prescribing in the treatment of opioid dependence in primary care. London, RCGP, 2011.
3. Kimber J, Copeland L, Hickman M et al. Survival and cessation in injecting drug users: prospective observational study of outcomes and effect of opiate substitution treatment. BMJ 2010; 341: c3172.
4. NHS. National Treatment Agency for Substance Misuse. Medications in Recovery. July 2012. www.nta.nhs.uk/publications.aspx
According to the Bible, God said to Moses, on whom be peace: 'I will raise up for them a prophet like you from among their brothers; I will put my words in his mouth, and he will tell them everything I command him' (The Holy Bible, New International Version, Deuteronomy chapter 18, verse 18).
The prophet described in the above verse must have the following three characteristics:

1. He will be like Moses.
2. He will come from the brothers of the Israelites, i.e. the Ishmaelites.
3. God will put His words in the mouth of the prophet, and he will declare what God commanded him.
Let us see which prophet God was speaking of.

1. The prophet like Moses

Some people feel that this prophecy refers to the prophet Jesus, on whom be peace. But, although Jesus (peace be upon him and all of God's prophets and messengers) was truly a prophet of God, he is not the prophet spoken of here. He was born miraculously, and finally God raised him up miraculously. On the other hand, Muhammad is more like Moses; both were born in a natural way and both died natural deaths.

2. From among the Ishmaelites

Abraham had two sons, Ishmael and Isaac (Genesis, chapter 21). Ishmael became the grandfather of the Arab nation, and Isaac became the grandfather of the Jewish nation. The prophet spoken of was to come not from among the Jews themselves, but from among their brothers, the Ishmaelites. Muhammad, a descendant of Ishmael, is indeed that prophet.

3. God will put his words in his mouth

'Neither the content of the revelation, nor its form, were of Muhammad's devising.
Both were given by the angel, and Muhammad's task was only to repeat what he heard.' (World Religions from Ancient History to the Present, by Geoffrey Parrinder, p. 472).
God sent the angel Gabriel to teach Muhammad the exact words that he should repeat to the people. The words are therefore not his own; they did not come from his own thoughts, but were put into his mouth by the angel. These are written down in the Qur’an word for word, exactly as they came from God. Now that we know that prophet we must listen to him, for, according to the Bible, God says: "I will punish anyone who refuses to obey him’ (Good News Bible, Deut. 18:19).
Jesus (on whom be peace) in the Glorious Qur'an

The Qur'an tells us many wonderful things about Jesus. As a result, believers in the Qur'an love Jesus, honor him and believe in him.
In fact, no Muslim can be a Muslim unless he or she believes in Jesus, on whom be peace. The Qur'an says that Jesus was born of a virgin, that he spoke while he was still only a baby, that he healed the blind and the leper by God's leave and that he raised the dead by God's leave. What then is the significance of these miracles? First, the virgin birth.
God demonstrates His power to create in every way. God created everyone we know from a man and a woman. But how about Adam, on whom be peace? God created him from neither a man nor a woman. And Eve from only a man, without a woman.
And finally, to complete the picture, God created Jesus from a woman, without a man. What about the other miracles? These were to show that Jesus was not acting on his own behalf, but that he was backed by God. The Qur’an specifies that these miracles were performed by God's leave.
This may be compared to the Book of Acts in the Bible, chapter 2, verse 22, where it says that the miracles were done by God to show that he approved of Jesus. Also, note that Jesus himself is recorded in the Gospel of John to have said: 'I can do nothing of my own authority' (5:30). The miracles, therefore, were done not by his own authority, but by God's authority.

What did Jesus teach?

The Qur'an tells us that Jesus came to teach the same basic message which was taught by previous prophets from God – that we must shun every false god and worship only the One True God.
Jesus taught that he is the servant and messenger of the One True God, the God of Abraham. These Qur'anic teachings can be compared with the Bible (Mark 10:18; Matthew 26:39; John 14:28, 17:3, and 20:17) where Jesus teaches that the one he worshipped is the only true God. See also Matthew 12:18; Acts 3:13, and 4:27 where we find that his disciples knew him as ‘Servant of God’. The Qur’an tells us that some of the Israelites rejected Jesus, and conspired to kill him, but God rescued Jesus and raised him to Himself. God will cause Jesus to descend again, at which time Jesus will confirm his true teachings and everyone will believe in him as he is and as the Qur'an teaches about him. Jesus is the Messiah. He is a word from God, and a spirit from Him.
He is honored in this world and in the hereafter, and he is one of those brought nearest to God. Jesus was a man who spoke the truth which he heard from God. This can be compared with the Gospel According to John, where Jesus says to the Israelites: 'You are determined to kill me, a man who has told you the truth that I heard from God' (John 8:40).
Objective: To help children explore their imagination and creativity, while learning about space and the different elements involved in space exploration.
Age Group: 5 to 8 years old
Materials Needed: Space-themed props (such as helmets, cardboard cutouts of spaceships, planets, etc.), costumes, a space-themed soundtrack (optional).
Warm-Up Activity: Space Walk
- Have the children stand in a circle, and explain that they are going on a space walk to explore the galaxy.
- Begin walking around the circle with a slow, steady pace, and have the children follow you.
- After a few minutes, start introducing different movements, such as walking backwards, tiptoeing, jumping, or spinning.
- Encourage the children to come up with their own movements, and have them lead the group.
- Slowly increase the pace, until the children are “zooming” through space.
Mime and Movement: The Launch
- Divide the children into groups, and explain that they are going to act out the launch of a spaceship.
- Provide the children with cardboard cutouts of a spaceship and other space-themed props, and encourage them to use mime and movement to simulate the launch process.
- Ask the children to work together to come up with different movements and sounds that represent the different stages of the launch, such as countdown, liftoff, and acceleration.
- Once each group has had a chance to practice, have them perform their launch sequence for the rest of the group.
Improvisation: Alien Encounter
- Explain to the children that they have landed on a strange planet and encountered an alien creature.
- Assign each child a role, either as an astronaut or as the alien, and encourage them to use improvisation to interact with one another.
- Encourage the children to use movement, gesture, and voice to create their characters and the scene.
- As the scene progresses, encourage the children to add more details and dialogue to their improvisation, as they discover more about the alien and its world.
Role play: Mission Control
- Explain to the children that they are going to act out a communication between the spaceship and Mission Control on Earth.
- Provide the children with props such as walkie-talkies, headsets, or toy telephones to represent the communication devices.
- Assign one child as the spaceship captain and another as the Mission Control operator.
- Encourage the children to use talking objects to communicate with each other, such as speaking into the walkie-talkies or using hand gestures to indicate different commands.
- Encourage the children to switch roles and try different communication devices, to explore the different ways that communication can be used in space exploration.
Still Images and Thought Tracking: Spacewalk
- Explain to the children that they are going to act out a spacewalk, where they will explore the surface of a planet or asteroid.
- Have the children work in pairs, and encourage them to use still images to create different poses and movements that represent the spacewalk.
- After a few minutes, ask the children to freeze in their current pose, and have them silently think about what their character is feeling and thinking in that moment.
- Encourage the children to share their thoughts and feelings with their partner, and to use thought tracking to add more detail and depth to their character.
Soundscape: The Return Home
- Explain to the children that they are going to act out the return journey home, where they will encounter different sounds and obstacles along the way.
- Provide the children with different sound-making props, such as rattles, drums, or bells.
- Encourage the children to create a soundscape that represents the different stages of the return journey, such as the re-entry into Earth’s atmosphere, turbulence during the descent, and the landing on the ground.
- As the soundscape progresses, encourage the children to add more details and variations, such as different rhythms and volume levels.
- After the soundscape is complete, have the children share their experiences and reflections on their space adventure.
If you're planning to share confidential information with a third party, you must do it in a way that ensures they respect the sensitive intelligence you've made them privy to. One of the key protective measures for achieving the necessary secrecy is a non-disclosure agreement (NDA).
For those unfamiliar with NDAs, we've created this detailed guide on what you need to know – from the different types of non-disclosure agreements and the key elements to include, to what happens when someone breaches an agreement.
What is a non-disclosure agreement?
To maintain a competitive advantage, businesses must keep their projects, ideas and intellectual property under wraps. Likewise, start-up companies with new, profitable ideas can only succeed if they’re kept a secret.
A non-disclosure agreement is a document in which a person or business asks the subject of the NDA (which is another person or business) not to share confidential information shared with them. They can also be called:

- confidentiality agreements (CAs)
- confidential disclosure agreements (CDAs)
- proprietary information agreements (PIAs)
- secrecy agreements (SAs)
Whatever it may be called, an NDA is still a written document where one or both parties agree to keep specific information confidential.
How do non-disclosure agreements work?
NDAs are used when a business discloses confidential information to potential investors, creditors, clients or suppliers. Doing so puts the confidentiality in writing, while each party's signature ensures trust, and helps to deter theft of intellectual property. The exact nature of the confidential information will be detailed in the NDA itself, but some typical situations where NDAs are used include:

- pitching an idea or product to potential investors
- engaging suppliers, contractors or consultants who will see proprietary information
- giving employees access to trade secrets as part of their work
- negotiating the sale, licensing or purchase of a product, technology or business
Some NDAs bind a person or business to secrecy for an indefinite period, ensuring the signer will, at no point, divulge the confidential information contained within. Without such agreements, information can be freely made public – whether accidentally or maliciously.
Any penalties for breaking an NDA are laid out in the agreement and may include damages in the form of lost profits or criminal charges.
What are the different types of non-disclosure agreements?
There are two main types of non-disclosure agreements: unilateral and mutual.
A unilateral NDA is a contract that binds only one party to secrecy; the majority of NDAs fall under this category. Though such agreements aim to protect a business's trade secrets, they may also protect the copyright for information created as a result of an employee's research.
Here's an example: researchers in the private sector and professors at research universities are often required to sign NDAs that give the rights to any research they conduct to the business or university that supports them.
A mutual NDA is executed between businesses engaged in a joint venture that involves sharing proprietary information. So, if one manufacturer learns of a new piece of technology being used in a certain product, they'll be required to keep that knowledge a secret.
What are the key elements of a non-disclosure agreement?
You may think that NDAs are sprawling documents filled with legalese, but actually, you only need a few pages to cover the relevant points. Typically, the key elements of a non-disclosure agreement include the following:
As a disclosing party, you want this definition to be as broad as possible to make sure the other side doesn't find a loophole and start revealing valuable secrets.
On the other hand, if you are the recipient of the information, you want to make sure the information you're supposed to keep secret is clearly identified, so you know what you can and can't use.
If you agree to a term, what counts as a reasonable amount of time? It depends largely on the industry and the type of information conveyed. A few years may well suffice: technology changes so rapidly that the information may be rendered outdated anyway.
However long the terms are, the NDA needs to say that, even if the term has ended, the disclosing party isn't giving up any other rights that it may have under copyright, patent or other intellectual property laws.
What happens if someone breaches a non-disclosure agreement?
Whatever the specifics are in the case that an NDA is breached, there are a few things you can do in the aftermath:
Review the original document
Usually, the terms of what happens in the event of a breach are written into the NDA itself.
Investigate the theft or breach
If the information got out, you'll need hard evidence showing how it happened, which is a crucial step. If you can't prove your case, then you'll be responsible for the legal fees incurred by both parties under the terms laid out in the NDA.
When collecting information, look for how the secret got out, how the confidential knowledge has been used, and the potential economic value of the information. This can be difficult, but if you think you're eligible for damages then it must be done.
Determine what legal claim needs to be made
In almost all cases of a breach, you'll be able to pursue damages. Other legal recourses may include misappropriation of trade secrets, copyright infringement, breach of fiduciary duty (not acting in the interest of the other party), and patent infringement.
The views, opinions and positions expressed within this article are those of our third-party content providers alone and do not represent those of Gazprom Energy. The accuracy, completeness and validity of any statements made within this article are not guaranteed. Gazprom Energy accepts no liability for any errors, omissions or representations.
Jackie intends to major in finance and find employment in corporate financial management. As a finance major, Jackie probably will be required to take several courses in _______.
Akiko is a financial manager. Her job is likely to include which of the following activities?
A) preparing a cash flow analysis
B) preparing financial statements
C) preparing a balance sheet
D) preparing a marketing budget
Which of the following items would be part of a capital budget?
D) fleet of new trucks
The managers of Seattle Clothing Company regularly compare their actual profits with the firm's projected profits. When deviations occur, the managers take corrective action where necessary. The management of Seattle Clothing is exercising _______.
Cleveland Wholesale Company wants to improve cash flow from accounts receivable collections. Which of the following strategies would most likely help the company achieve this objective?
A) relaxing its credit policy for new customers
B) offering cash discounts to buyers who pay their accounts promptly
C) accepting IOUs from customers who buy in large quantities
D) offering extended payment plans to qualified buyers
The reason that money has a time value is _______.
A) inflation increases the amount you can buy with a dollar, over time
B) one dollar will buy you more today than it will two years from now
C) monetary systems tend to become more sophisticated over time
D) a dollar received today is worth more than a dollar received yesterday
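The idea behind the time value of money can be made concrete with a small calculation. The sketch below is a hypothetical illustration, not part of the quiz; the 5% annual rate is an assumed figure:

```python
# Time value of money: a dollar today can be invested and grow,
# so it is worth more than the same dollar received in the future.

def future_value(amount, annual_rate, years):
    """Value of `amount` after compounding once per year."""
    return amount * (1 + annual_rate) ** years

def present_value(amount, annual_rate, years):
    """What a future `amount` is worth in today's dollars."""
    return amount / (1 + annual_rate) ** years

# At an assumed 5% rate, $1.00 today grows to about $1.28 in 5 years,
# while $1.00 promised 5 years from now is worth only about $0.78 today.
print(round(future_value(1.00, 0.05, 5), 2))   # 1.28
print(round(present_value(1.00, 0.05, 5), 2))  # 0.78
```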
Tulsa Enterprises has decided to build a $50 million theme park near a large metropolitan area. The company plans to sell shares of ownership to finance the development. Tulsa plans to use ___________ financing for this project.
Michigan Nursery Company offers its customers credit terms of 3/15 net 30. This means that customers can take advantage of a _______.
A) fifteen percent discount if they pay in three days
B) three percent discount if they pay in thirty days
C) three percent discount if they pay in fifteen days
D) fifteen percent discount if they pay in thirty days
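The arithmetic behind credit terms like 3/15 net 30 is simple to sketch. The example below is illustrative only; the invoice amount is made up:

```python
# Credit terms "3/15 net 30": the buyer may take a 3% discount if the
# invoice is paid within 15 days; otherwise the full amount is due in 30.

def amount_due(invoice, discount_pct, discount_days, days_to_payment):
    """Amount owed given how quickly the buyer pays."""
    if days_to_payment <= discount_days:
        return invoice * (1 - discount_pct / 100)
    return invoice

invoice = 1_000.00
print(amount_due(invoice, 3, 15, days_to_payment=10))  # 970.0 (discount taken)
print(amount_due(invoice, 3, 15, days_to_payment=25))  # 1000.0 (full amount)
```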
Jerry is the owner of Tennessee's Treasures, a very successful framing and gift store. Suppliers are anxious to place inventory in Jerry's retail outlet. Some have offered to supply Jerry with their merchandise now, but will not require Jerry to send a payment for up to 60 days. These suppliers are allowing Jerry to take advantage of _______ credit.
Mega Bucks Bank provided HeadCase, Inc., with a long-term loan to expand operations and make its new headband for men. Although interest rates on most long-term loans of the same time period (10 years) were costing businesses 10%, Mega Bucks Bank attached a 12% interest rate on the loan to HeadCase, Inc. Which of the following is a logical reason for this deviation?
A) HeadCase, Inc. is a fast growing company and as such, it gets a higher reward from the bank.
B) Even though corporate loans may be at 10%, government loans are the guiding factor in interest rates.
C) There is less demand for corporate loans, so the bank must increase the cost to compensate.
D) The firm's credit rating and the adequacy of the collateral are determining factors.
Researchers in Germany have developed "spermbots", tiny robots that reportedly help sperm cells swim more quickly and efficiently to the female egg cell, thereby boosting fertility, Huffington Post reported on Thursday.
According to the study published in the journal Nano Letters, spermbots are made up of micro metal motors that are meant to wrap around a sperm cell, propelling it to an egg cell at a more rapid pace. After testing the micro device on a petri dish, the researchers found that spermbots could be remote-controlled from the external environment with the use of magnetic field.
"This type of hybrid approach could lead the way in making robotic micro-systems," acknowledged Dr. Eric Diller, mechanical engineer at the University of Toronto in Canada, who was not part of the study.
According to Daily Mail on Wednesday, one out of five men have been found to have slow-swimming sperm cells, making low sperm motility the number one cause of infertility among men.
To assemble the spermbots, researchers from the Institute for Integrative Nanosciences at IFW Dresden utilized tiny magnets made out of titanium and nickel in order to create the helices of the microbot, making sure that the coil is wide enough to wrap around the sperm cell.
"We have chosen magnetic helices as micromotors because of their relatively simple mechanism of motion that is widely understood and easy to control in 3D by a common setup of axial pairs of Helmholtz coils," explained the researchers in their paper.
Once the sperm reaches the egg, it wiggles its way out of the spermbot and into the egg.
Researchers claim that spermbots "are not overly harmful to sperm" even at this early stage of the study; however, further tests need to be done to determine the safety of spermbot use in actual human subjects, and ultimately their effectiveness in patients.
Fire at recycling plant
Most fires at waste facilities and recycling plants start as a result of overheating or spontaneous combustion. All sorts of materials can catch fire, from solid substances to liquids. Although fires can never be completely prevented, waste and recycling facilities can do a lot themselves to prevent fire. Legioblock will gladly help to devise solutions.
Fire-resistant storage boxes
Apart from important preventive measures such as monitoring the storage, temperature and humidity of waste streams, storage boxes can also have a significant impact. They do not prevent fires as such, but they can prevent fires from spreading and thus limit the damage caused. Legioblock interlocking blocks prevent fires from penetrating and spreading and ensure that the layout of the site is fire-proof.
Major fire at German recycling plant
A major fire broke out some time ago at a recycling plant in Germany. It took the fire service two whole days to extinguish the fire completely. Because the storage boxes had been constructed with Legioblock concrete blocks to a height of six metres, the fire did not spread to other compartments. As a result, not everything caught fire and damage was limited. The fire-resistant blocks also enabled the firefighters to get close to the fire. And the blocks themselves were almost unscathed; only the top level had to be replaced.
There are two critical issues that will affect security over the next 20 years, artificial intelligence and bio-engineering (including nanotechnology). First, a few definitions:
Artificial Intelligence (AI)
AI is defined as an algorithm capable of being trained to identify live data and create activity based on that data. AI also has the capability to educate itself and to mature based on associations to which it has been exposed. In short, it is code that writes itself.
The Singularity is defined as that time in the future when the capabilities of humans and those of technology are equal. Inventor and entrepreneur Ray Kurzweil refers to the process as the law of accelerating returns. Only the technological singularity and its potential implications are examined in this article.
Bio-engineering is the application of classical engineering to biological systems. The science encompasses elements of engineering design and combines them with rigorous analysis. Biomedical engineering is an important sub-category of bio-engineering.
Nanotechnology can be described as the science of studying very small things and their potential applications across all scientific disciplines.
Chaos theory in mathematics refers to attempts to explain why very simplistic constructs with predictable results sometimes turn out quite differently than expected. In this article, the focus is on how chaos theory is applied to the world and how one seemingly simple environmental change may result in immense and unexpected results. This is sometimes called the “butterfly effect.”
Artificial Intelligence – Will Its Effects be Positive or Negative?
As with most technological change, there are pros and cons about AI. One thing is certain: AI is in its initial stage and will continue to advance over the coming decades. AI forms the basis for the idea of the Singularity. One example of AI today is the smartphone because it uses AI to accomplish its smart functions. Another development is Watson, the supercomputer developed by IBM.
These AI machines:
• Do not need rest or downtime
• They can reduce human risks by assuming dangerous and unhealthy tasks
• There is no interference in the work cycle due to emotional concerns
• Are cost-efficient because they do not need wage and benefit packages
There are also some important negative considerations, primarily that AI will create job loss. This has been a continuing trend since the beginning of the Industrial Revolution. AI could, for example, eliminate many of the entry-level jobs that keep young people gainfully employed.
Another concern is AI’s lack of empathy and human concern. It is already irritating to talk to machines that often cannot resolve an issue. In a field like healthcare, AI cannot empathize with a patient or family members.
The increasing use of AI might accelerate the current loss of important information due to hacking or machine damage. Finally, while it might seem fantastic, there is a potential that sentient machines may one day choose to make decisions on their own. Sound impossible? So did the personal computer and smartphone not so long ago.
The Potential Effects of the Singularity
The Singularity has been predicted for several years. The possibility of its arrival has been welcomed by some and feared by others. Ray Kurzweil, the director of Engineering at Google, suggested that the Singularity could occur as early as 2045. Obviously, that is merely the prediction of a well-known futurist. Any number of factors could affect that prediction or even cause the chances of it happening to lessen or be eliminated altogether.
Most would agree that technology is both a blessing and a curse. While technology has prolonged lives and restored or replaced limbs, it has also been used to create bombs and enhance the global reach of terrorism. The following is a brief overview of the technologies associated with the Singularity.
Bio-engineering certainly holds great promise for medical research. There is a possibility bio-engineering can reduce diseases in children, adults and fetuses. It can also be used in attempts to increase the human lifespan. On the negative side, bio-engineering raises ethical questions. Some critics call it playing God. Since human understanding is imperfect, bio-engineering might produce genetic defects and reduce the gene pool. The increasing use of bio-engineering raises the question whether it will be abused and go too far? Knowing human nature, it is safe to say bio-engineering will certainly be used for good and for evil.
There are many potential uses for nanotechnology in healthcare and medicine. Nanotechnology permits tiny machines to be injected into diseased human bodies to repair damage from the inside. They might also repair broken machines or search out and solve problems in hazardous environments.
An example of nanotechnology use in ophthalmology: Retinal diseases have traditionally been virtually impossible to reverse. The development of a nano-retina now holds the promise of restoring sight in some cases of blindness. Another interesting application of nanotechnology is to fight infections. There has been great concern in recent decades that the effectiveness of antibiotics is weakening. Conversely, the so-called “superbugs” that are highly resistant to many antibiotics are increasing. Could nanotechnology replace antibiotics? It is an exciting possibility.
On the flip side, technology could be used for political assassinations, crime and terrorism. Even the Rand Corporation looked into the “dark side of technology.”
In military terms, the U.S. Air Force has designed Micro Air Vehicles (MAVs) with the clear potential to protect service members. MAVs are tiny devices built to resemble the shapes of natural winged creatures like birds or insects. They protect by acting as virtually invisible scouts or spies that can reconnoiter an area and pass intelligence to human patrols or intelligence agencies. They can also be used to perform crowd surveillance.
In the civilian sector, they may be incorporated into a facility’s security system or a private residence to provide real-time security data to the occupants or security staff. One might even envision a tiny MAV released to provide a warning of danger to a jogger starting out on an evening run in a park. MAVs could also be employed by adversaries or criminals for a variety of actions, from assassinations to surveillance to determine when a residence is empty and vulnerable to theft.
Chaos theory is both a mathematical construct and a process that can be applied to the human experience and the future. Chaos theory holds that many of the world’s greatest achievements have been accomplished when humanity is on the precipice of catastrophe.
This is often true when nations are at war. Consider how the airplane and radar emerged from WWI and WWII. Radar forms the basis of the microwave oven. This is but one example of many. It is very interesting to contemplate how chaos theory might apply to the Singularity and its effects. Humans have become more aware of the complexity and linkages between natural and artificial systems. These interactions may result in unpredictable outcomes both positive and negative.
What conclusions can we draw? The first conclusion is that we must understand the flawed nature of our species. One person might use the Singularity and associated technologies for the betterment of the species; another might see these systems as tools of war to be used for repression and manipulation. Just consider how a future Hitler or Pol Pot might use some of this knowledge.
The world stands at the edge of a new and emerging reality. It is up to each of us to manage those tools responsibly.
Arduino UNO R3 In Pakistan
Arduino/Genuino Uno is a microcontroller board based on the ATmega328P (datasheet). It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz quartz crystal, a USB connection, a power jack, an ICSP header and a reset button. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable or power it with an AC-to-DC adapter or battery to get started. You can tinker with your UNO without worrying too much about doing something wrong; in the worst case you can replace the chip for a few dollars and start over again.
“Uno” means one in Italian and was chosen to mark the release of Arduino Software (IDE) 1.0. The Uno board and version 1.0 of Arduino Software (IDE) were the reference versions of Arduino, now evolved to newer releases. The Uno board is the first in a series of USB Arduino boards, and the reference model for the Arduino platform; for an extensive list of current, past or outdated boards see the Arduino index of boards.
You can find in the Getting Started section all the information you need to configure your board, use the Arduino Software (IDE), and start tinkering with coding and electronics.
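As a first test that the board and toolchain are set up correctly, a minimal "blink" sketch can be flashed from the Arduino Software (IDE). This is the classic hello-world for the Uno and assumes only the on-board LED, which is wired to digital pin 13 (`LED_BUILTIN`):

```cpp
// Blink the Uno's on-board LED (digital pin 13, LED_BUILTIN).
// Upload from the Arduino IDE with the Uno selected as the target board.

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);    // configure the LED pin as an output
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH); // LED on
  delay(1000);                     // wait one second
  digitalWrite(LED_BUILTIN, LOW);  // LED off
  delay(1000);                     // wait one second, then repeat
}
```

If the LED blinks once per second, the board, driver and IDE are all working.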
Operating Voltage: 5V
Input Voltage (recommended): 7-12V
Input Voltage (limits): 6-20V
Digital I/O Pins: 14 (of which 6 provide PWM output)
Analog Input Pins: 6
DC Current per I/O Pin: 40 mA
DC Current for 3.3V Pin: 150 mA
Flash Memory: 32 KB (ATmega328), of which 0.5 KB used by bootloader
SRAM: 2 KB (ATmega328)
EEPROM: 1 KB (ATmega328)
Clock Speed: 16 MHz
By Ron Weckerly
The debate about AD/HD is over; the last five years have shown the truth. Research has shown that the Rip Van Winkle of all diseases is real, and every mainstream educational, medical and psychological organization has concluded so in its research.
AD/HD is a genuine brain-based medical disorder, and both adults and children benefit from appropriate treatment.
There are several facts that you need to know:
Fact 1: Every mainstream educational, psychological and medical organization in the United States has published research confirming that there is such a thing as Attention Deficit/Hyperactivity Disorder.
Fact 2: AD/HD is a common and non-discriminatory disorder. AD/HD affects people of every gender, socio-economic, IQ, age and religious background. The Centers for Disease Control and Prevention report that 9.5 percent of children have been diagnosed with the syndrome. They found boys were diagnosed two to three times more than girls.
Another important aspect to the findings is the NIMH (National Institute of Mental Health) found 4.4 percent of adults 18-44 in the United States experience disability and symptoms.
ADD, AD/HD and ADHD all refer to the same syndrome … the only difference is some who are diagnosed have hyperactivity.
Fact 3: Diagnosing AD/HD is a process that is complex.
The person who is being diagnosed must have many symptoms of the disorder in work, school, and with friends in daily life. They must have the various symptoms for at least six months. What complicates behavior is many of the symptoms appear to be like extreme forms of “normal” behavior; to complicate the situation, many other symptoms mimic AD/HD. In a nutshell, every possible cause of a given set of behavior(s) must be taken into consideration.
What colors AD/HD different from other given behaviors is the persistent, excessive and pervasive behavior of the individual. The frequency, intensity and duration of behaviors are hallmark signs of the syndrome. The behaviors become evident in multiple settings and throughout life.
Not one single test confirms a person has AD/HD; diagnosticians rely on many tools to derive whether the person has AD/HD. One of the best predictors is the information he or she has about his or her environment.
Fact 4: Combined conditions
Between 25 and 40 percent of those diagnosed with AD/HD have a co-existing disorder alongside the syndrome.
Seventy percent of those who have the disorder will also be treated for depression.
Sleep disorders affect people with AD/HD two or three times more.
Fact 5: The consequences of AD/HD are NOT UNKNOWN when it is untreated and undiagnosed …
• People have trouble succeeding in school and graduating.
• People have problems at work, lost productivity and lower earnings.
• Problems with relationships.
• More accidents and driving citations.
• Overeating and obesity are problems.
• Problems with the law have been evident.
Concerning diagnosis, Dr. Joseph Biederman, a professor of psychiatry at Harvard Medical School, believes quality of life is much better with treatment, as well as saving society billions of dollars a year!
Fact 6: AD/HD is NOBODY’S FAULT.
AD/HD is NOT caused by poor parenting, family problems, poor teachers, and too much TV, too much sugar, or food allergies. AD/HD is genetic, related to specific areas of the brain.
The basic factors are: gender, family history, environmental toxins, parental risk and physical differences of the brain.
Fact 7: AD/HD treatment is highly multi-faceted.
Treatments include education, training, educational support, various types of psychotherapy and medicine.
Fact 8: Frequently, AD/HD impairments are not very noticeable until the teen-age years. When in middle school and high school, more demands are put on the executive functions of the youngster. The demands are more subtle, but disabling.
Fact 9: Medications increase the person’s alertness and improve communication of the cognitive management system.
Fact 10: Research indicates that a person with AD/HD does not manufacture norepinephrine and dopamine the way everyone else does.
If you think anyone in your family may have the syndrome, please contact Children and Adults with Attention Deficit/Hyperactivity Disorder (CHADD), Attention Deficit Disorder Association (ADDA), or the National Resource Center on AD/HD (NRC). You will be doing your friends, family and society a favor.
Ron Weckerly is a family man, retired teacher and nature lover who has lifelong experience with AD/HD. He is the author of Poems, Pathways and Peace: A Baby Boomer’s Journey with ADHD. His students nominated him six times to “Who’s Who Among American Teachers.”
From the Oct. 12-18, 2011, issue
A world increasingly worried about an atmosphere fast becoming saturated with greenhouse gases emitted by anthropogenic activities is looking for every possible means to slow down the trajectory of a warming planet.
The role of carbon-absorbing forests, especially tropical rainforests, in regulating climate has never been more important. Once again, protecting the remaining forested areas, which lie in the territories of developing countries, has regained international attention and been thrust to the forefront of the battle to reverse the planetary emergency.
Forest land accounts for about 31 per cent of the world's land surface area (4.06bil ha), with the largest proportion (45%) in the tropical domain. And of this, the oldest biomes located close to the equator, found within the tropics of Cancer and Capricorn – the tropical rainforests of the Amazon, the Congo Basin and Southeast Asia – are touted to be the game-changing carbon sinks of the world.
Forest-related emissions due to land use change and unsustainable logging practices contribute about one fifth of all global emissions. Therefore, protecting these remaining sinks would give humankind the fighting chance to reverse the course of runaway climate change.
Their importance gained substantial traction as countries of the world intensified negotiations to enhance international cooperation to combat climate change following the implementation of the first phase of the Kyoto Protocol (2008-2012), under which developed countries would reduce their emissions by 5% from 1990 levels.
The role of forests as both emission sources and carbon reservoirs was acknowledged and addressed, namely with the mechanism known as Reducing Emissions from Deforestation and Forest Degradation in developing countries (REDD-plus) adopted in 2007 at the 13th Conference of the Parties to the United Nations Framework Convention on Climate Change in Bali, Indonesia.
It has to be pointed out that the pluses referring to the role of conservation, sustainable management of forests and enhancement of forest carbon stocks are significantly important for many developing countries where deforestation rates are relatively small. This means that such countries would be compensated for having kept their forests as the lungs of the world, hence REDD-plus would be an effective policy approach to prevent deforestation.
Decades of disinformation
Contrary to the disinformation surrounding Malaysia’s so-called high deforestation rate, one indisputable fact stands out. To date, more than 50% of the country’s landmass is still blanketed by forests after 63 years of post-independence nation-building.
How did a country that relied heavily on its primary resources, an economic activity carried over from the colonial period, is able to maintain so much of its forest areas amidst population growth and meeting demands for more infrastructure and pressure to extract more timber?
The answer lies in the appreciation and farsighted vision of the country’s leadership towards sustainable development.
In many of Malaysia's official statements concerning sustainable development at international fora, the byname "Rio pledge" is frequently referenced. It is linked to the bold pledge of maintaining at least 50 per cent (16.5mil ha) of our relatively small landmass as forest.
The watershed year was 1992. The venue was Rio de Janeiro, a city in the southeastern region of Brazil on the Atlantic coast. Nearly 30 years ago, countries of the world got together at the United Nations Conference on Environment and Development (UNCED) in Rio de Janeiro and agreed that environmental protection and development are not necessarily mutually exclusive. And developing countries must not be forced to give up their rights for a better standard of living in the name of environmental protection. Basically, everyone agrees that achieving sustainable development requires balancing the three pillars – economic, environmental and social – the vital aspects of modern days societies.
As a member of the developing world, Malaysia was a vocal advocate for the rights to development, to industrialise, to eradicate poverty and to prosper. This legitimate desire to be a developed nation in the near future will inevitably mean some forested areas will have to give way for Malaysia’s economic transformation.
Nevertheless, through the years since Rio, Malaysia has adopted the world-renowned Sustainable Forest Management (SFM) practices and subjected its timber harvesting supply chain to a third-party certification scheme. These efforts have minimized the negative impacts of timber harvesting in the production forests and protected their carbon stock. At the same time, management of specific ecological functions of forests was also enhanced through the establishment of totally protected areas as well as protective status for soil conservation and water catchment within the permanent reserve forests system. Soil protection not only preserves the fertility of soil and prevents runoff and sedimentation as part of flood mitigation, it also prevents the release of soil carbon.
Impacts of climate change are not restricted by national borders and Malaysia fully appreciates the saying ‘No man is an island’. As a member of the global community and a signatory to the UNFCCC and its supplementary treaties – the Kyoto Protocol and the Paris Agreement – Malaysia will play its part by building on the Rio pledge in terms of protecting the vital carbon sinks in our territories.
Staying true to our bold pledge
To date, no country in the world, including rainforest-rich developing countries, has promised to set aside half of its territory as a contribution to global well-being. The bold pledge in 1992 is indeed Malaysia’s generous gift to the world.
However, despite the challenges, skepticism and doubts, Malaysia remains steadfast in upholding its Rio pledge. Among the pieces of disinformation is the suspicion that Malaysia counts its commodity plantations as part of the 50% forest cover, and this has posed a considerable challenge to the country’s palm oil industry. As of 2018, approximately 55.31% of the total land area of Malaysia was still forested.
It has to be clarified that in the country’s reporting to the Food and Agriculture Organisation (FAO), the UN agency tasked with monitoring the world’s forestry resources, rubber and oil palm plantations were never considered as forests nor included in the forest cover statistics. In other words, what we count as forest are the dipterocarp forests, the montane forests, the freshwater, peat and mangrove swamp forests. These are the real deals!
In addition, following the anti-tropical timber campaign in the late 1980s and early 1990s resulting in the demand for traceability of timber products, Malaysia has gallantly embraced voluntary timber certification with the establishment of an independent national scheme – the Malaysian Timber Certification Scheme (MTCS). Governed by the Malaysian Timber Certification Council, MTCS is both country-driven given the national commitment towards ensuring SFM and a market-linked tool in line with the adoption of SFM in reforming our forestry practices.
Starting off gingerly on the uncharted path of timber certification and overcoming numerous constraints, MTCS eventually gained global recognition. In 2009, it became the first tropical timber certification scheme in the Asia Pacific region to be endorsed by the Programme for the Endorsement of Forest Certification (PEFC), the largest forest certification programme, representing more than 300 mil ha of certified forests worldwide.
Operational since 2001, MTCS has generated 2.2mil m³ of certified timber and timber products exported to 69 destinations as of 2019. To date, more than five million hectares of forests in Malaysia are certified under the Malaysian Timber Certification Scheme (MTCS) and Forest Stewardship Council Scheme (FSC).
Tracing of timber from the forests to its end products is assured by the Chain of Custody (CoC) certification process, under which 381 companies have been issued PEFC CoC certificates out of a total of about 3,500 timber companies in Malaysia.
The total certified forest under MTCS represents 13% of the world’s certified tropical forests, a remarkable achievement for a small developing country. In fact, under the National Policy on Biological Diversity (2016-2025), the country has set a target of having 100% of all timber and timber products sustainably managed by 2025 – yet another bold ambition in a challenging business atmosphere amid the Covid-19 pandemic. Furthermore, little known to the global public, the world’s first tropical rainforest certified by the FSC, in 1997, is the Deramakot Forest Reserve in the Borneo state of Sabah.
That early decision is a testament to Malaysia’s seriousness in implementing SFM and contributing to the trade in sustainable timber and timber products in the international market, thus ensuring that our logging industry is a positive force in global forestry governance as well as living up to the challenge of sustainable development. It also puts the country in good stead to fulfil its commitments not only to the Paris Agreement but also to the Convention on Biological Diversity and the 2030 Agenda for Sustainable Development and its 17 Sustainable Development Goals.
At Expo 2020, the achievements of the forestry sector will be the shining example of Malaysia’s theme “Energising Sustainability”. Depicting the country’s seriousness and commitment to climate change mitigation and forest conservation, Malaysia will present a rainforest canopy-inspired net zero carbon pavilion. We will take the opportunities to share our success stories as a forested nation and our approaches for sustainable development.
More importantly, going forward, we seek to engage with other participating countries, businesses and the anticipated 25 million visitors in exploring new and innovative ideas and initiatives to assist Malaysia in its endeavour to continue to be the leading tropical country in conserving our forests for the shared prosperity of humankind.
Malaysia’s participation at Expo 2020, scheduled for 1 Oct. 2021 to 31 March 2022, is led by the Ministry of Science, Technology and Innovation (MOSTI), with Malaysian Green Technology and Climate Change Centre (MGTC) as implementing agency.
Article prepared by: Malaysian Green Technology and Climate Change Centre (MGTC) | <urn:uuid:258d1124-30b7-4640-9129-4d1b6e36a666> | CC-MAIN-2021-10 | https://mtcc.com.my/author/admin/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363809.24/warc/CC-MAIN-20210302095427-20210302125427-00101.warc.gz | en | 0.939147 | 2,020 | 3.875 | 4 |
Basic Proposition for a Meta-Anarchist Political Vision
Fragmentation as an alternative to consensus
Consensus is when people reach a mutual agreement on a given issue. Anarchy loves consensus. Actually, some anarchists seem to believe that it’s the only acceptable form of decision-making.
Unfortunately, consensus has its limitations. The capacity for consensus drops rapidly as the size of the group increases; it usually takes a lot of time to reach one; and not everyone is a fan of participating in sophisticated and lengthy discussions on every single issue. Those are all the usual objections to anarchism — and arguments for top-down governments. That is, states.
States rely on the principle of one-for-all decisions. They claim it as an inevitable consequence of large-scale organization — because it’s not like an actual consensus can be reached between millions of people, right?
But you don’t need a consensus for millions of people.
Sometimes, though arguably so, there is an actual necessity to enact a decision that has to be shared by a group of people this large. Perhaps, some kind of an issue of planetary scale — akin to climate change.
However, this happens drastically less often than the state makes it appear. Far fewer groups actually require a commonly shared decision. The state operates in this manner not because of some “natural necessity”, and not because it has the best interests of society in mind, but to impose a type of governance that is more convenient for the state itself. Unified systems are far more compatible with centralized governance.
Being used to this approach to decision-making as the “natural” one, some of us may unconsciously transfer it onto our ideas of a freer society. That’s kinda what happened to “representative democracy”: all residents of a country are obliged to express the enormous aggregate of their collective will through the bottleneck of a single ruler, a single set of rules, a single voting act. Although some things have changed, the structure of governance has remained similar to that of a monarchy. No wonder this kind of democracy appears hardly functional to some people.
If convenience for society (rather than convenience for top-down governments) is a priority for us, another type of decision-making comes to the forefront. “Fragmentation”.
What’s that? It’s a type of decision-making which implies that different people should be able to enact different decisions regarding how they want to live — and how they want to organize their habitat. It also implies that their coexistence can be coordinated within prescriptions of mutually shared protocols.
The principle of fragmentation is highly prominent in the open-source software industry — known there as “forking”. But can we apply it to political organization?
Let’s look at examples.
Residents of a neighborhood disagree on whether a road must be built in the place of a local garden. To resolve this dispute, one could use conflict resolution techniques, reconcile both sides and achieve consensus on the basis of a compromise. One could also find new, nontrivial solutions which would satisfy everyone. If there are no resources for such luxury — perhaps, act in accordance with the opinion of the majority. But no matter what type of method we apply here, this would still remain a one-for-all solution. In this particular case, technologies of our time leave us with no alternative approaches to this problem. So, here fragmentation is inapplicable.
Here’s another example. Residents of a town believe that publicly smoking weed is harmful to everyone in that town. Residents of another town believe that publicly smoking weed is harmless, and maybe even beneficial to the smokers. Now, you can fragment the decisions at the regional level. Weed-haters can prohibit smoking in their town, and weed-lovers can allow it in theirs. In this situation, fragmentation is preferable.
All that’s required is a shared protocol to which both towns subscribe, and which postulates that separate towns can make their own policies on weed-smoking. Let’s call this an “interpolity protocol”. If necessary, it can be a formal protocol of explicit mutual agreement — otherwise it’s an informal protocol of “we just kinda let each other do our own thing”. It’s the “agree to disagree” principle but applied to actual political organization.
Now, in some countries, a similar system of decisional fragmentation already exists, relying on a high degree of regional autonomy. But it is confined by strict borders of states or regions; it is still subject to central authority; and it responds very poorly to shifts in the opinions of local residents. Fragmentation, by contrast, can take place to a practically infinite degree — all the way down to individuals (or even subpersonalities). It can allow for much more dynamic and adaptive political organization, converting on-the-spot decisions into societally recognized ones almost “in real time”. That is what’s called “free flow of desire” in Deleuzian jargon.
On the other hand, the idea that a large group has to partake in a single uniform decision implies suppression of some portion of people with differing interests, e.g. with differing desire. This is usually followed by delegitimization of those interests with a variety of justifications: they’re non-citizen, they’re unpatriotic, they’re privileged, etc.
Dictatorship of the few, rule by the majority and consensus are all different methods of coordination within a single architecture of a cohesive decision-making network. Consensus is, of course, the most anarchist option within this particular architecture.
But it is possible to invent anarchist methods of decision-making on the basis of a different architecture. For example, networks with a high degree of fragmentation. In those kinds of networks, tightly interconnected areas — one may call them “assemblages”, in Deleuzian terminology — are intermitted by loosely connected “gaps” between different areas/assemblages.
As a matter of fact, the technology of consensus in a large anarchist society could function only alongside the technology of fragmentation. In fully connected areas/assemblages, consensus is most likely to occur. Those areas/assemblages could be communities of people with similar values — as well as self-sovereign individuals. The fact that there are gaps of lesser cohesion between those areas/assemblages allows every area/assemblage to choose its own paths of existence and development.
Virtual polities and the Collage
What would be the most effective political framework for fragmentation? I’m not sure there can be a decisive answer to that question right now, but we can already start trying to outline and implement such a framework.
Imagine virtual polities, which act as decentralized law providers for all who wish to use their services. Users of different virtual polities are subject to different sets of restrictions which they themselves deem preferable. They often form localized communities, because it is more convenient to interact with users who have the same providers. But the providers are not subject to strict borders. Virtual polities are not territorial in themselves, although they can condense on certain territories. Somewhere, focal points emerge which gather users only of a particular virtual polity, while elsewhere users of many different systems settle alongside each other. Relations between users of different virtual polities are coordinated by networks of interpolity protocols of varying scale, formal and informal. Virtual polities can exist within other virtual polities; they can be of any size and shape; they can intermingle, intercross, conjoin, dissociate and divaricate.
Let’s call this system a “Collage”. A meta-anarchist Collage, if you want to be particularly precise. A political system of maximized self-determination. It is also a system that is entirely emergent and self-organizing, and with no central authority whatsoever. In other words, it is a confederated system.
The Collage implies that any kind of communities are possible. Including those which misalign with your personal values. Including ethnonationalist enclaves — as well as communes where nationalists are strictly not welcome. As well as resorts for synthetic drug enthusiasts exclusively; or a town for people who want to create a completely functional furry society; or a medieval city with guilds and knighthoods; or a primitivist hunter-gatherer reserve; or whatever society you would like to live in.
With that said, we can hypothesize that highly isolated gatherings of think-alike extremists will probably be a rare occurrence in the Collage. Although, the possibility for such gatherings will already resolve a huge amount of social tensions. But for most people it’ll be probably more preferable and sustainable to live in “conservative” polities with basic anarchist norms of decentralization and self-governance; plus moderate fragmentation based on minor disagreements. By “conservative” I mean only “preserving some set of values” (in this context, anarchist values), and not the values which you associate with that word.
The extremist polities, in turn, could serve as “political frontiers” of consensual courageous experimentation, allowing the Collage as a whole to try out new unusual sociopolitical frameworks. This will allow for non-coercive societal evolution — in contrast to societal evolution as we know it, which happens by violent confrontation between progressive and conservative groups, as well as mutual coercion and struggle for power over others. Now, some leftists call it “the dialectical process of history”, but I prefer to call it “a redundant apparatus of surplus suffering”.
On the other hand, we can’t really predict what the Collage will look like. Maybe it will be an infinitely varying smorgasbord of distinct worlds — rather than an assemblage consisting of a “conservative” core and “extremist” periphery as described above. Maybe it will be both of those systems existing as neighboring self-sufficient Collages. Maybe it will be something entirely unimaginable from today’s perspective. In any case, historical determinism is structurally fascistic. Remember — we’ll know it only when we get there.
Alterprise and forks of freedom
Now, don’t mistake this model for simple voluntaryism. It’s not like we can just get rid of the state and the free market will instantly arrange us into peaceful consensual autonomies. Actually, the global capitalist market in its current form, if met with no resistance, would most likely devour and suppress any attempts at forming a plural meta-anarchist network of autonomies.
The Collage must evolve independently, gradually and organically: by many different people trying out many different approaches at the same time to see what works, and synthesizing those approaches together to achieve increasingly large-scale solutions. This is how all functional societal systems emerge — through gradual evolution, not meticulous planning.
But if so, how can we foster such a system into existence?
First and foremost, everyone should have the right to create their own political project and invite anyone who wishes to participate. Let’s call this activity “alterprise” — enterprise of alternatives. So, we need to acknowledge the right to alterprise. Doing alterprise should be as easy as doing a commercial start-up in a country that is oriented towards supporting emerging small businesses.
A lot of nasty things can be said about “unbridled capitalism” — and I already mentioned some — but it can’t be denied that market dynamics can be used as a great tool for rapid systemic development. A market, when organized properly, is essentially a technology of synthetic evolution: people offer their products on the marketplace; “good” products gain audience, while “bad” products wither away; the cycle repeats.
Now, what’s defined as a “good” or a “bad” product is entirely circumstantial. Also, what the product or its providers gain by attracting audience also varies between different types of markets. In modern capitalist markets, a product’s success results in financial gain for its provider and consequent concentration of power in that provider’s hands. In a marketplace of organizational systems, the product and its author are rewarded by implementation, as well as investment of personal resources by willing participants. Think of it as free, collaborative, open-source political system development.
Similarly, evolution is not just some improvement. It is improvement of performance of certain tasks. And the tasks which define the evolutionary selection may be any kind of tasks. In order for political evolution to not result in optimization of totalitarian and centralized systems, certain criteria of selection must be configured. Firstly, a market demand for anarchist systems must emerge — anarchist systems must prove themselves to be a better political product. Secondly,
Metaanarchy is a panarchy, but not every panarchy is metaanarchy
Some of you may have heard of a similar political idea — called panarchy. At first glance, it seems identical to the meta-anarchist vision — a plurality of political systems between which people can freely choose. However, I believe there is an important distinction to be made.
A panarchist system does not necessarily rely on anarchist principles of autonomy and self-governance. It can as well be a plurality of top-down governments with territorial sovereignty, akin to Moldbug’s Patchwork. A panarchy is a marketplace of political systems, but, once again, they can be any kind of political systems— for example, dystopian dictatorships with no actual alternatives.
Strong anarchist institutions and principles are crucial for facilitating the free flow of political desire. The Collage must have widespread reliable instruments of direct bottom-up political agency— whether based on markets, on direct democracy, on blockchain, or on something else. Without such instruments, the Collage will devolve back into statehood.
An advanced meta-anarchist society may afford to have polities with high risk of coercion — voluntary kingdoms or warrior cultures, for example — but the systemic core of the Collage must remain anarchical in order for the Collage to remain extant.
Meta-X. Baseline protocol for the Collage
Interpolity protocols of varying scale are the glue of the meta-anarchist Collage. To ensure freedom and flexibility, polities must have the ability to agree on their own protocols. Successful protocols are then shared and adopted by other polities. However, as with software development, it might be much more convenient to work out a basic protocol, on top of which all other protocols will be layered.
What should a baseline protocol for the Collage look like? It probably shouldn’t have a lot of rules. It shouldn’t interfere with the logic of individual rule systems, but it should prioritize personal liberty of individual people over local rules. For example: anyone who wishes to stop playing by the rules of a given polity must have the ability to leave it, and they can’t be held back against their will. Or, even better — anyone, regardless of physical location, can instantly switch to a different law provider at any moment and, by that, immediately become positioned within its jurisdiction.
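As a toy illustration only (every name here is hypothetical, and this is a sketch of the idea rather than any real protocol), the layering described above can be modeled in code: polity-specific rule sets sit on top of one baseline invariant, the guaranteed ability to switch law providers at any moment.

```python
# Toy sketch of a "baseline protocol" for the Collage.
# All names are hypothetical; the only baseline rule encoded is free exit.
from dataclasses import dataclass, field

@dataclass
class Polity:
    name: str
    rules: set = field(default_factory=set)

@dataclass
class BaselineProtocol:
    polities: dict = field(default_factory=dict)
    members: dict = field(default_factory=dict)  # person -> polity name

    def register(self, polity: Polity) -> None:
        self.polities[polity.name] = polity

    def join(self, person: str, polity_name: str) -> None:
        self.members[person] = polity_name

    def switch(self, person: str, new_polity: str) -> None:
        # The baseline invariant: switching is always permitted,
        # regardless of the rules of the polity being left.
        if new_polity not in self.polities:
            raise ValueError("unknown polity: " + new_polity)
        self.members[person] = new_polity

    def rules_for(self, person: str) -> set:
        # A person is subject only to the rules of their current provider.
        return self.polities[self.members[person]].rules
```

The point of the sketch is that `switch` consults only the baseline protocol itself, never the rule set of the polity being left — that is the single rule layered beneath everything else.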
Baseline protocols may themselves be plural and subject to evolution. Actually, most likely they will. The more successful a given baseline protocol is, the more parts of the Collage will adopt it.
In that case, the global level of affairs will be managed by more informal conventions — or even with no explicit conventions at all, but by emergent swarm-like and stigmergic organization. The informal approach seems to be the most optimal for large-scale interrelations, as it arises naturally from the global balance of interests. Today, a major portion of international relations operates in a similar fashion. This approach involves far less algorithmization — specific situations and precedents become significantly more substantial.
At the current moment, this is mostly a vague fantasy. A questionable utopian proposition. Surely, many problems and peculiarities, not addressed in this text, will arise in practice. However, certain meta-anarchist tendencies are already present in our day and age: autonomous zones, blockchain-driven decision-making, federated social networks, micronations, private and charter cities, democratic confederalism of Rojava, free and open-source software — and many more.
The Collage assembly process has already started — but if not properly facilitated, it will be dissolved and defeated by more totalitarian and structurally fascistic tendencies: social credit systems, state police militarization, mass surveillance, usage of AI and big data for top-down control, automation of coercion, and so on.
To prevent this and ensure the emergence of the Collage, we need to continuously network meta-anarchist tendencies together; start up our own alterprises and personal utopias; create forks of existing projects; make political and ideological innovations; and be ready to encounter fierce resistance from the systems of status quo.
If we somehow succeed, we may suddenly find ourselves on a planet that is a flourishing playground of chaotic consensual experimentation and constant exploration of existential possibilities; a world of unimaginable variance and beauty; a world of thousands of ontological frontiers.
I’d say such a world would be worth the struggle. | <urn:uuid:4a46e549-08d3-41d6-bf8a-2d514510e17c> | CC-MAIN-2023-06 | https://theanarchistlibrary.org/library/negligible-forces-collage | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499790.41/warc/CC-MAIN-20230130003215-20230130033215-00247.warc.gz | en | 0.937995 | 3,624 | 2.546875 | 3 |
What is it called when you see something that isn’t there in the desert?
The definition of a mirage is an optical illusion, something that you believe you see but that isn’t really there. An example of a mirage is when you believe you see water or a ship in the desert when it isn’t really there.
What’s the difference between a hallucination and a mirage?
A hallucination is when you see something that doesn’t actually exist, while a mirage is a real thing you just happen to see in the wrong location.
Why do we see mirages in the desert?
Mirages happen when the ground is very hot and the air is cool. The hot ground warms a layer of air just above the ground. When the light moves through the cold air and into the layer of hot air it is refracted (bent). A layer of very warm air near the ground refracts the light from the sky nearly into a U-shaped bend.
What causes a mirage?
Mirages are a direct result of photons taking the path of minimum time in vertical temperature gradients. Ideal conditions for a mirage are still air on a hot, sunny day over a flat surface that will absorb the sun’s energy and become quite hot.
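The bending described above can be sketched numerically with Snell's law. The refractive-index model below is a crude assumption (refractivity scaling with air density at fixed pressure), not a meteorological formula; it only shows that a near-grazing ray bends further from the vertical as it passes from cool air into the hotter, optically thinner layer near the ground.

```python
import math

def n_air(temp_c: float) -> float:
    # Crude assumed model: the refractivity (n - 1) of air scales with
    # density, i.e. with 1/T at fixed pressure. Not a precise formula.
    T = temp_c + 273.15
    return 1.0 + 0.000293 * (288.15 / T)

def refract(theta1_deg: float, n1: float, n2: float) -> float:
    # Snell's law: n1*sin(t1) = n2*sin(t2), angles measured from the normal.
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if s > 1.0:
        raise ValueError("total internal reflection")  # ray cannot refract
    return math.degrees(math.asin(s))

cool, hot = n_air(20.0), n_air(60.0)
theta_in = 85.0                         # a grazing ray heading toward the hot road
theta_out = refract(theta_in, cool, hot)  # slightly larger than theta_in
```

Because the index contrast between the layers is tiny, the bending per layer is tiny too; the visible mirage is the cumulative effect over many such layers.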
What is it called when you hallucinate in the desert?
Instead, mirages are the result of bending light rays which, when they come across cold or hot air, can produce the distorted “images” you see. They can occur in a few different, notable ways: Water in the desert.
What does parched mean?
: deprived of natural moisture parched land also: thirsty He was parched after the long hike.
Can you take a photo of a mirage?
Yes! A Mirage can be photographed. Mirage is nothing but an optical illusion that occurs due to the refraction and total internal reflection of light. Mirages could be seen where the land is heated up and the air is cooler, which happens mostly during the summer afternoons.
Why do mirages disappear as you get closer?
And the closer you get to that water, the more the mirage disappears. This is because an optical illusion is occurring. The mirage that we see during this time is light reflecting and refracting off the hot air that is bouncing, rising and moving around, which is why it appears to look like liquid.
Is Mirage an illusion?
People sometimes label a mirage as an illusion. But, in fact, a mirage is not an illusion. Your mind creates an illusion. A mirage can be explained by the physics of Earth’s atmosphere.
Do animals see mirages?
The interesting truth is that, animals do perceive and in many instances, they believe their perceptions to be true. During summer months, animals search for water. When they see a mirage, they run towards the direction of the optical illusion, thinking they might find water over there.
Why do mirages look like water?
When light rays from the sun reach this air pocket just above the road, the speed of the photon increases slightly, causing its path to alter, or bend from an observer’s point of view. This makes something that looks like a puddle of water appear on the road.
What is the meaning of mirages?
1: an illusion sometimes seen at sea, in the desert, or over hot pavement that looks like a pool of water or a mirror in which distant objects are seen inverted 2: something illusory and unattainable like a mirage.
Is a mirage a real or virtual image?
In contrast to a hallucination, a mirage is a real optical phenomenon that can be captured on camera, since light rays are actually refracted to form the false image at the observer’s location.
Is mirage caused by total internal reflection?
Mirage is an optical illusion caused by the phenomenon of total internal reflection of light. As light is refracted, it eventually reaches the point where the angle of refraction approaches 90 degrees. Beyond that critical angle, no more refraction takes place; instead, all the light is reflected back.
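To put a number on this, the critical angle follows from Snell's law as theta_c = arcsin(n2/n1). The sketch below (the index values are illustrative assumptions) contrasts the familiar water-to-air case with the tiny index contrast between cool and hot air, which pushes the mirage's critical angle to nearly 90 degrees — which is why only near-grazing rays appear "reflected" off the hot layer.

```python
import math

def critical_angle_deg(n_from: float, n_to: float) -> float:
    # Angle from the normal beyond which light in the denser medium
    # (n_from) is totally reflected at the boundary with n_to < n_from.
    if n_to >= n_from:
        raise ValueError("total internal reflection needs n_to < n_from")
    return math.degrees(math.asin(n_to / n_from))

# Water-to-air, the textbook case: roughly 48-49 degrees.
water_air = critical_angle_deg(1.333, 1.000)

# Cool-air-to-hot-air, the mirage case: the indices (assumed values)
# differ only in the fourth decimal place, so the critical angle sits
# very close to 90 degrees.
cool_hot = critical_angle_deg(1.000293, 1.000253)
```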
What is the difference between a primary and a secondary rainbow?
Primary rainbow: the light path involves refraction and a single reflection inside the water droplet. If the drops are large, 1 millimeter or more in diameter, red, green, and violet are bright but there is little blue. Secondary rainbow: the light path involves two reflections inside the falling droplets.
Posted: Aug 01, 2017
Ferroelectric phenomenon proven viable for oxide electrodes, disproving predictions
(Nanowerk News) Flux-closure domain (FCD) structures are microscopic topological phenomena found in ferroelectric thin films that feature distinct electric polarization properties. These closed-loop domains have garnered attention among researchers studying new ferroelectric devices, ranging from data storage components and spintronic tunnel junctions to ultra-thin capacitors.
(a) FCD domains in the PTO layer with symmetric oxide electrodes. (b) Alternating current domains in the PTO layer with asymmetric oxide electrodes. (Image: Shuang Li and Yinlian Zhu)
Ferroelectric materials are typically developed and studied as thin films, sometimes as thin as only a few nanometers. As a result, researchers have begun discovering the abundant domain structures and unique physical properties that these ferroelectrics possess, such as skyrmion and FCD formation that could benefit next-generation electronic devices. Because the films are so thin, however, their interaction with electrodes is inevitable.
"The general thinking has been that oxide electrodes would destabilize flux-closure domains. However, our work has shown that this is no longer true when the top and bottom electrodes are symmetric, which physically makes sense," said Yinlian Zhu, professor at the Institute of Metal Research at the Chinese Academy of Sciences and a co-author of the paper.
Zhu and colleagues used two types of oxide electrodes: one based on strontium ruthenate, the other based on lanthanum strontium manganite, chosen as oxide electrodes because of their similar perovskite structures, which work well in layer-by-layer film growth. They studied how these electrodes influenced FCD formation in PbTiO3 (PTO) perovskite-oxide-based thin films deposited on gadolinium scandium oxide (GSO) substrates.
The research team's previous studies indicated that flux-closure domains can be stabilized in strained ferroelectric films in which the strain plays a critical role in the formation of flux-closure domains, such as multilayer PTO/strontium titanate systems grown on GSO-based (specifically GdScO3) substrates.
Based on their previous studies, the researchers consequently anticipated that similar phenomenon might also occur in PTO/electrode systems. They then grew PTO films sandwiched between symmetric oxide electrodes on GSO substrates using pulsed laser deposition.
They found that periodic FCD arrays can be stabilized in PTO films when the top and bottom electrodes are symmetric, while alternating current domains appear when they apply asymmetric electrodes.
"We successfully grew ferroelectric thin films with symmetric oxide electrodes in which flux-closure domains and their periodic arrays clearly do exist," Zhu said. "Our work sheds light on understanding the nature of flux-closure domains in ferroelectrics. We expect that it will open research possibilities in the evolution of these structures under external electric fields." | <urn:uuid:6ea6243b-ff9d-4f85-b061-93e2114057c7> | CC-MAIN-2017-39 | https://www.nanowerk.com/nanotechnology-news/newsid=47628.php | s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687938.15/warc/CC-MAIN-20170921224617-20170922004617-00509.warc.gz | en | 0.913056 | 650 | 2.640625 | 3 |
microRNAs and small RNAs
Types and functions of small RNAs
siRNA – small non-coding RNA that regulates coding genes and is part of antiviral defence.
miRNA- (microRNA) regulates protein coding genes
piRNA- (piwi interacting RNA) transposon silencing RNA.
RNAi- a non-coding antisense RNA that functions in experimental studies, viral resistance, genome stability through keeping mobile elements silent, keeping chromatin condensed preventing transcription and repressing protein synthesis.
miRNA processing pathway
Genes encoding miRNA are found in the DNA. dsRNA is recognised by the protein DGCR8 leading to the enzyme Drosha to associate and catalyse the cleavage of the double stranded hairpin loop structure, cutting the RNA into smaller precursor miRNA. This allows the miRNA to be transported into the cytosol. Once in the cytosol, dicer recognises and attaches to the RNA. Dicer is a RNase protein that cleaves the terminal stem loop to form an ever-shorter molecule. TAR RNA binding protein is the cofactor. Then Argonaut protein interacts and forms the RISC complex (RNA induced silencing complex.) miRNA is unwound and one strand is released. Remaining miRNA is guided to the target sequence (with the help of RISC), this is energy dependent and chaperone mediated. Once miRNA reaches its target, it binds by complementarity and prevents the mRNA from being translated as binding causes degradation of mRNA.
Similarities and Differences between small RNA processing pathways
Both use dicer
Both use Argonaut
Both form RISC complex
Only one strand of dsRNA is actually used to silence
Both begin as dsRNA
Both are synthesised by RNA polymerase II
End result: miRNA bind to target mRNA, prevent its translation and cause its degradation. siRNA bind to target and cleave it.
miRNA are endogenous, so the processing pathway begins in the nucleus. siRNA are exogenous and so processing begins in the cytoplasm
miRNA are short hair pin loop structures, siRNA are not
Physiological functions of small RNAs
stem cell development
cell cycle control
some are suppressors of cancer, some support cancer
miRNA resemble small interfering RNAs of the RNA interference pathway.
The human genome may encode over 1900 miRNAs.
In the early 1990s the first miRNA was discovered
Gene regulation is essential as it increases adaptability of organisms by allowing the cells to expression proteins when needed.
Genes can be regulated at:
Modifications to DNA can be structural and chemical. Histone modification can be phosphorylation, ubiquitination, acetylation or methylation. Depending on whether chromatin is opened or closed will determine whether transcription takes place. Open chromatin allows for transcription. The histone code hypothesis suggests that chemical modifications made to histones can influence the conformation of chromatin and thus influence transcription.
mRNA processing is highly important. mRNA must be capped, be cleaved, spliced and polyadenylated. Capping is important for protection of mRNA from degradation, allowing for the mRNA to leave the nucleus, for recognition of mRNA being different to other RNAs and to allow for ribosome binding during translation. | <urn:uuid:321324a2-c894-4b41-aaeb-b085bf392ab8> | CC-MAIN-2019-30 | https://www.kingsnews.org/articles/mirna-and-sirna | s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524111.50/warc/CC-MAIN-20190715195204-20190715221204-00532.warc.gz | en | 0.892867 | 685 | 3.484375 | 3 |
Recycling: Simple as 1-2-3
Know what to throw
Cardboard, paper, metal cans, plastic bottles and jugs.
Empty. Clean. Dry.™
Keep all recyclables free of food and liquid.
Don't bag it
Never put recyclables in containers or bags.
Paper & Cardboard
Flattened cardboard, newspapers, magazines, office paper and common mail.
Beverage and food cans.
Plastic Bottles & Jugs
Food and liquid containers with the lids on.
No Soiled or Wet Materials
Just one dirty bottle or item can contaminate the contents of a whole recycling truck. Once cardboard or paper comes into contact with food or liquid, it can no longer be recycled.
Don’t Bag or Contain
No bags go in the recycling container, and never put recyclables in bags or containers.
No Connected or Mixed Materials
When two or more materials are connected, like paper envelopes with plastic bubble wrap inside, the items can’t be recycled.
What Your Recycling Container Should Look Like
If you know what to look for, it's easy to see if you're on track. Compare your recycling container to the pictures below. Yours should resemble the one on the right. Want to learn more? Visit our Residential Resources page for printable reminders that can help everyone in your house become better at recycling. | <urn:uuid:a9b965b8-1b8a-41a4-91af-2fc611635680> | CC-MAIN-2022-49 | https://recyclingsimplified.com/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710417.25/warc/CC-MAIN-20221127173917-20221127203917-00852.warc.gz | en | 0.832088 | 308 | 3.359375 | 3 |
Definition of Monokines
1. monokine [n] - See also: monokine
Medical Definition of Monokines
1. Soluble mediators of the immune response that are neither antibodies nor complement. They are produced largely, but not exclusively, by monocytes and macrophages. (12 Dec 1998)
Lexicographical Neighbors of Monokines
Literary usage of Monokines
Below you will find example usage of this term as found in modern and/or classical literature:
1. Understanding the Immune System by Lydia Woods Schindler (1991)
"... the monokines produced by monocytes and macrophages, are diverse and potent chemical messengers. Binding to specific receptors on target cells, ..."
2. Biologic Markers in Pulmonary Toxicology by National Research Council (U.S.). Subcommittee on Pulmonary Toxicology (1989)
"Growth Factors and monokines In addition to their role as the primary phagocytes in the lung, AMs synthesize diverse substances that exhibit a broad range ..."
3. The Neuroscience of Mental Health: A Report on Neuroscience Research edited by Stephen H. Koslow (1997)
"... their initial idea was that one cell type would make one class of cytokine—lymphocytes would make lymphokines, monocytes would secrete monokines, ..." | <urn:uuid:9a438a62-94ff-4bc8-8030-7fe2197f33a6> | CC-MAIN-2019-22 | https://www.lexic.us/definition-of/monokines | s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258849.89/warc/CC-MAIN-20190526045109-20190526071109-00264.warc.gz | en | 0.919673 | 282 | 3.25 | 3 |
As we all know that everyone feels fit to be better with their personality and to look good, even then, obesity is becoming a big problem in life nowadays. Everyone thinks he looks good. And everyone feels that his personality is a good personality. And it is very important for you to keep fit for all these things. And in such a situation if your weight is high then you are required to reduce this weight. And if you are tired of picking an extra burden of your obesity, then you want to lose weight. In this article, we will tell you ways to reduce obesity that will help reduce obesity.
While reducing obesity, we have often heard that when food is eaten but it is not that we have to leave food to reduce obesity. For this, we have to control our food, but we have to change something in our life style. We all know that excessive obesity is harmful to our body because more obesity leads to diabetes-like illness. We also have high blood pressure due to obesity. And there are diseases like heart disease as well. In such a way, it is very important for us to keep a healthy body. Keeping obesity in control is to lose weight.
Measures to reduce obesity
We always know that food is a big enjoyment of obesity for us. Dieting properly can be the reason for our obesity to increase weight. So in such a situation, it becomes necessary for us to eat what we should eat and what should not be eaten. First, we should eat low calorie food while paying more attention to drinking food. As much as our calorie body uses, so many calories we should eat less.
Reduce obesity with water
Water is a very good solution to reduce obesity. Water is very useful for our body and water is also a great way to reduce obesity, so that you should consume more water. There are many chemicals in the water that are harmful to our body. Water plays an important role in our eating and drinking. Try to reduce obesity and drink a glass of water every 1 hour or 2 hours. It has 2 benefits for our body. The first advantage is that we will not be hungry again and the second benefit happens in such a way that water helps in reducing the west in our body.
How to loot weight with lemonade
Lemonade is a very good solution to reduce obesity. We can also reduce obesity with lemonade. Although we can drink lemonade anytime but be careful to lose weight, if our stomach is empty then if we drink lemonade, our obesity helps in decreasing. And if you drink it a bit by heating it and squeezing lemon in it and then drink it is even better.
Reduce obesity with green vegetable salad and soup
You use green vegetables in your diet as much as possible. If you use cauliflower in the food, you get more benefit in reducing obesity because it contains fewer calories. And whenever you eat, take a tomato and mint salad with food, it is a very good way to reduce fatness.
Drinking coconut juice and seasonal juice while weights loose is also very good for reducing obesity. Because most soft drinks and cold drinks increase the weight of our body. Better if you drink natural juice, then it is good for us and our obesity is less than this. Keep in mind that you do not drink much sugar juice.
Reduce obesity with honey (honey)
Honey is very useful. There are very many properties in the city which are beneficial to us to reduce our obesity. It should be consumed by adding 2-3 spoons of honey every day to the water. And also add lemon juice along with honey in water. This is also a good solution for reducing obesity
You eat as much as you can, and avoid avoiding nonzero, because vegetarian food also plays an important role in reducing our obesity. And maybe you can not completely eat non-veg food, then try to eat less. Because this is a very good way to reduce obesity. As well as in many research, it has also been found that eating non-veg increases obesity. | <urn:uuid:746e10be-a2c9-4bc0-b3d1-db480f0e07b0> | CC-MAIN-2020-34 | http://buyphentermine.net/how-to-loss-weight-tips-for-reducing-obesity/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739182.35/warc/CC-MAIN-20200814070558-20200814100558-00121.warc.gz | en | 0.979309 | 827 | 2.703125 | 3 |
A microcontroller is like a super tiny computer that typically runs a single program. With rewritable flash memory the modern microcontroller boards we stock like Arduino can be reprogrammed many times to experiment and play. The boards provide all components to support the main processor chip and connecting external components like sensors via headers or solder holes is simple.
Robot controllers are specialized microcontroller boards that normally include motor drivers and provide for easy connection of servos.
Input-Output IO Boards allow you to offload the processing of input from sensors or output to other devices and allow for connection of many more devices than a microcontroller may support by itself. | <urn:uuid:f598bd38-6fba-4960-a0b5-78335be070ec> | CC-MAIN-2017-13 | https://www.robotgear.com.au/Category.aspx/Category/41-Microcontrollers-and-IO-Boards | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189667.42/warc/CC-MAIN-20170322212949-00576-ip-10-233-31-227.ec2.internal.warc.gz | en | 0.90248 | 130 | 3.28125 | 3 |
THE CENTRALITY OF THE INVISIBLE HAND
By Mark Skousen
Lecture, Center for Constructive Alternatives
Hillsdale College, January 31, 2012
“Adam Smith had one overwhelmingly important triumph: he put into the center of economics the systematic analysis of the behavior of individuals pursuing their self-interest under conditions of competition.”– George Stigler
A major debate has flared up recently about Adam Smith. Was he the father of free-market economics and libertarian thought, or some kind of radical egalitarian and social democrat?
Adam Smith as a Free-Market Hero
The traditional view, held by Milton Friedman, is that the Scottish philosopher was "a radical and a revolutionary in his time–just as those of us who preach laissez faire are in our time." He lauded Smith's metaphor of the "invisible hand," the famous Smithian idea that "by pursuing his own self interest, [every individual] frequently promotes that of the society." According to Friedman, "Adam Smith's flash of genius was his recognition that the prices that emerged from voluntary transactions between buyers and sellers — for short, in a free market — could coordinate the activity of millions of people, each seeking his own interest, in such a way as to make everyone better off." Other defenders of free-enterprise capitalism describe the invisible hand as "gentle," "wise," "far reaching," and one that "improves the lives of others."
George Stigler, Friedman's colleague at the University of Chicago, identified the invisible hand doctrine as "the crown jewel" and first principle of welfare economics, "the most important substantive proposition in all of economics." He waxed eloquent about the "grandparent" of modern economics, his "bold explorations, his resourceful detective work…, his duels and triumphs and defeats…. [his] superior mind…a clear-eyed and tough-minded observer…The Wealth of Nations has joined the great literature of all time; it was the most powerful assault ever launched against the mercantile philosophy that dominated Western Europe from 1500 to 1800." Adam Smith was Stigler's favorite economist, and a portrait of the Chicago economist holding a copy of The Wealth of Nations hangs in the hallway of the business school at Chicago.
This is one area where the Austrians Ludwig von Mises and Friedrich Hayek concurred with the Chicago school. Like Stigler, Mises wrote an introduction to The Wealth of Nations, calling it a "great book." According to Mises, Smith's works are "the consummation, summarization, and perfection…of a marvelous system of ideas…presented with admirable logical clarity and an impeccable literary form…. [representing] the essence of the ideology of freedom, individualism, and prosperity." Furthermore, "Its publication date — 1776 — marks the dawn of freedom both political and economic….It paved the way for the unprecedented achievements of laissez-faire capitalism." He concluded, "There can hardly be found another book that could initiate a man better into the study of the history of modern ideas and the prosperity created by industrialization."
In like manner, Hayek wrote a laudatory article on the 200th anniversary of the publication date of The Wealth of Nations. After praising earlier economists and warning of defects in Smith’s value and distribution theories, he went on to extol Smith as “the greatest of them all” because the Scottish economist, more than any of his contemporaries or ancestors, recognized that “a man’s efforts will benefit more people, and on the whole satisfy greater needs, when he lets himself be guided by the abstract signals of prices rather than perceived needs,” and thus Smith helped to create a “great society.”
Social Democrats Contest the Free-Marketeers
Critics of laissez faire — from Cambridge economist Emma Rothschild to British Labor Party leader Gordon Brown — have recently become quite unhappy by what they consider a conspiracy by free-marketeers to claim Adam Smith as their hero and symbol of laissez faire. They seem to be especially annoyed that the Adam Smith Institute, a London-based free-market think tank, raised a popular statue of the grand old man on Mile High Street in Edinburgh on July 4, 2008.
In a series of books and articles, they have attempted to wrestle Adam Smith out of the hands of the free-market arena and into the camp of the social democrats. According to Oxford Professor Iain McLean and Illinois Professor Samuel Fleischacker, the Scottish philosopher was a "radical egalitarian" who, while endorsing economic liberalism, had a lively appreciation of market failure and ultimately rejected "ruthless laissez-faire capitalism" in favor of "human equality" and "distributive justice." These revisionists are quick to claim that Smith was no friend of rent-seeking landlords, monopolistic merchants and conspiring businessmen, and that he advocated an active state authority in support of free education, large-scale public works, usury laws, progressive taxation, and even some limits on free trade. They contend that Smith had more in common with Karl Marx than Thomas Jefferson.
The critics of laissez faire offer a mixed review of Smith’s invisible hand. In their Keynesian textbook, William Baumol and Alan Blinder admit that “the invisible hand has an astonishing capacity to handle a coordination problem of truly enormous proportions.” Despite expecting anarchic chaos, Frank Hahn discovers spontaneous order in Adam Smith’s market place. He honors the invisible hand theory as “astonishing,” noting “whatever criticisms I shall level at the theory later, I should like to record that it is a major intellectual achievement….The invisible hand works in harmony [that] leads to the growth in the output of goods which people desire.”
And yet despite these words of praise, Smith’s wonderful world is full of inefficiencies, waste, and imperfections. Accordingly, the public must beware of the “backhand,” “the trembling hand,” the “bloody hand,” the “iron fist of competition,” a hand “getting stuck,” and perhaps even a hand that may need to be “amputated.”
To emphasize the imperfections of the market place, mainstream publishers have mostly assigned big-government advocates to write the introductions to the popular editions of The Wealth of Nations, including Max Lerner and Robert Reich for the Modern Library editions, and Alan B. Krueger for the Bantam paperback edition, where he labels Adam Smith as a follower not of Milton Friedman but of John Rawls; his invisible hand is seen as "all thumbs."
Murray Rothbard’s Dissent
The political waters have been muddled a bit since libertarian Murray Rothbard and his followers have joined the critics in their attack on Adam Smith (one of a few examples where Rothbard departs company from Mises and Hayek). Rothbard took exception to the celebrated Adam Smith in his two-volume history of economic thought, published at the time of Rothbard's death in 1995. He lambasted the classical economists, arguing that Smith apostatized from the sound doctrines and theories previously developed by pre-Adamites such as Richard Cantillon, Anne Robert Turgot, and the Spanish scholastics. He asserted that Adam Smith's contributions were "dubious" at best, that "he originated nothing that was true, and that whatever he originated was wrong," and that The Wealth of Nations was "rife with vagueness, ambiguity and deep inner contradictions." Specifically, his doctrine of value was an "unmitigated disaster"; his theory of distribution was "disastrous"; his emphasis on the long run was a "tragic detour"; and Smith's putative "sins" include support for progressive taxation, fractional reserve banking, and a crude labor theory of value that Marxists later borrowed from Adam Smith and David Ricardo.
Adam Smith Reveals the Invisible Hand
What about the metaphor of the "invisible hand," the famous Smithian idea that "by pursuing his own self interest, [every individual] frequently promotes that of the society"? Free-market economists from Ludwig von Mises to Milton Friedman have regarded it as a powerful symbol of unfettered market forces, what Adam Smith called his "system of natural liberty." In rebuttal, the new critics belittle Adam Smith's metaphor as a "passing, satirical" reference and suggest that he favored more of a "helping hand." They emphasize the fact that Smith used the phrase "invisible hand" only once in each of his two major works, The Theory of Moral Sentiments (1759) and The Wealth of Nations (1776). The references are so sparse that commentators seldom mentioned the expression by name in the 19th century. No notice was made of it during the celebrations of the centenary of The Wealth of Nations in 1876. In the 18th and 19th centuries, no subject index, including the well-known volume edited by Edwin Cannan, published in 1904, listed "invisible hand" as a separate entry. It was finally added to the subject index in 1937 by Max Lerner for the Modern Library edition. Clearly, it wasn't until the 20th century that the invisible hand became a popular symbol of laissez faire.
Invisible Hand: Marginal or Central Concept?
Could the detractors be correct in their assessment of Adam Smith’s sentiments? Is the invisible hand metaphor central or marginal to Adam Smith’s “system of natural liberty”?
Milton Friedman refers to Adam Smith’s symbol as a “key insight” into the cooperative, self-regulating “power of the market to produce our food, our clothing, our housing…without central direction.” George Stigler calls it the “crown jewel” of The Wealth of Nations and “the most important substantive proposition in all of economics.” The idea that laissez faire leads to the common good is called “the first fundamental theorem of welfare economics” by Kenneth Arrow, Paul Samuelson, and Ronald Coase.
On the other hand, Gavin Kennedy contended in earlier writings that the invisible hand is nothing more than an after-thought, a “casual metaphor” with limited value. Emma Rothschild even goes so far as to declare, “What I will suggest is that Smith did not especially esteem the invisible hand…It is un-Smithian and unimportant to his theory” and was nothing more than a “mildly ironic joke.”
Adam Smith Reveals His Invisible Hand
A fascinating discovery uncovered by Daniel Klein, professor of economics at George Mason University, may shed light on this debate. Based on a brief remark by Peter Minowitz that the “invisible hand” phrase lies roughly in the middle of both The Wealth of Nations and The Theory of Moral Sentiments, Klein made preliminary investigations that led him to suggest deliberate centrality. Klein then recruited Brandon Lucas, then a doctoral student at George Mason, to investigate further. Klein and Lucas found considerable evidence that Smith “deliberately placed ‘led by an invisible hand’ at the centre of his tomes” and that the concept “holds special and positive significance in Smith’s thought.”
Klein and Lucas base their conjecture on two major points. First, the physical location of the metaphor: The single expression “led by an invisible hand” occurs almost dead center in the first and second editions of The Wealth of Nations. (It moves slightly away from the middle after an index and additions were added to later editions.)
Moreover, it appears again “well-nigh dead centre” in the final edition of The Theory of Moral Sentiments. Klein and Lucas admit that it was not in the middle of the first edition in 1759, speculating that “physical centrality was not initially a part of his intentions…[but that] by 1776, Smith had become intent on centrality.” Indeed, Smith moved the phrase “invisible hand” closer to the center of the book, first by appending an important essay on the origin of language and finally by making substantial revisions in the final edition.
Second, Klein and Lucas note that as an historian and moral philosopher, Adam Smith commented frequently on the importance of middleness in architecture, literature, science, and philosophy. For example:
— Smith wrote sympathetically about the Aristotelian golden mean, the idea that virtue exists “between two opposite vices.” For instance, between the two extremes of cowardice and recklessness lies the central virtue of courage.
— In Smith’s essays on astronomy and ancient physics, Smith was captivated by Newtonian central forces and periodical revolutions.
— Klein discovered that Smith, in his lectures on rhetoric, admired the poetry of the Greek historian Thucydides, who "often expresses all that he labours so much in a word or two, sometimes placed in the middle of the narration."
Midpoint analysis and centralized themes existed long before Adam Smith’s time. For example, the Talmud offers considerable commentary about midpoints in the Torah, especially in a poetic form called Chiasmus. Chiasmus is characterized by introverted parallelism, and found in Greek, Latin, Hebrew and Christian literature. A Chiasmus is a pattern of words or ideas stated once and then stated again but in reverse order. Classic examples are found in the Bible: “Who sheds the blood of a man, by a man shall his blood be shed…” (Genesis 9:6), or “The first shall be last and the last shall be first…” (Matthew 19:30).
Most Chiasmi have a “climactic centrality,” that is, the structure of the poem points to a central theme in the middle. For instance, the Psalmist writes, “Our soul is escaped as a bird out of the snare of the fowlers; the snare is broken, and we are escaped.” (Psalms 124:7) Here the Psalmist is urging us (the soul) to escape the clutches of Satan, even as a bird escapes the snare of the fowler or the hunter (the central word).
The standard pattern of a centralized Chiasmus is:

A
 B
  C (central theme or focal point)
 B′
A′
In sum, according to Klein and Lucas, the invisible hand represents the climactic centrality of Smith's "system of natural liberty," and is appropriately found in the middle of his works. By this discovery, if true, one goes from one extreme to the other — from seeing the invisible hand as a marginal concept to accepting it as the touchstone of his philosophy.
Klein and Lucas’s list of evidence is what a lawyer might call circumstantial, or “impressionistic,” to use Klein and Lucas’s own adjective. Taken as a whole, the documentation is either an ingenious breakthrough or a “remarkable coincidence,” to quote Gavin Kennedy.
A few Smithian experts have warmed up to Klein and Lucas’s claim. Gavin Kennedy, who previously considered the invisible hand a “casual” metaphor, now sees a “high probability” in their thesis of deliberate centrality. Others are more skeptical. “We have no direct evidence for the conjecture,” states Craig Smith, an expert on Adam Smith at the University of St. Andrews. The idea that Adam Smith deliberately hid his favorite symbol of his philosophy “strikes me…as very un-Smithian,” he states, and runs contrary to his policy of expressing thoughts in a “neat, plain and clever manner.” Placing the shorthand phrase “invisible hand” in the middle of his works may not be plain, but is it not neat and clever?
We may never know the truth, since we have no record of Smith commenting on the matter. Fortunately, one does not need to depend on the physical centrality of the “invisible hand” to recognize the doctrinal centrality of his philosophy. As Craig Smith states, “I’m not convinced that Smith deliberately placed the invisible hand at the centre of his books, but I am certain that it lies at the heart of his thinking.”
The Significance of the Invisible Hand Doctrine
There are many passages from the Wealth of Nations and the Theory of Moral Sentiments that elucidate the theme of the "invisible hand," the idea that individuals acting in their own self-interest unwittingly benefit the public weal, or that eliminating restrictions on individuals' efforts to "better their own condition" makes society better off. Smith repeatedly advocates removal of trade barriers, state-granted privileges, and employment regulations so that entrepreneurs and enterprises can flourish.
The invisible hand metaphor is an example of Smith’s law of unintended consequences.
Very early in The Theory of Moral Sentiments, Smith makes his first statement of this doctrine:
"The ancient stoics were of the opinion, that as the world was governed by the all-ruling providence of a wise, powerful, and good God, every single event ought to be regarded, as making a necessary part of the plan of the universe, and as tending to promote the general order and happiness of the whole: that the vices and follies of mankind, therefore, made as necessary a part of this plan as their wisdom and their virtue; and by that eternal art which educes good from ill, were made to tend equally to the prosperity and perfection of the great system of nature."
Or this statement:
“The man of system, on the contrary, is apt to be very wise in his own conceit; and is often so enamoured with the supposed beauty of his own ideal plan of government, that he cannot suffer the smallest deviation from any part of it. He goes on to establish it completely and in all its parts, without any regard either to the great interests, or to the strong prejudices which may oppose it. He seems to imagine that he can arrange the different members of a great society with as much ease as the hand arranges the different pieces upon a chess-board. He does not consider that the pieces upon the chess-board have no other principle of motion besides that which the hand impresses upon them; but that, in the great chess-board of human society, every single piece has a principle of motion of its own, altogether different from that which the legislature might chuse to impress upon it. If those two principles coincide and act in the same direction, the game of human society will go on easily and harmoniously, and is very likely to be happy and successful. If they are opposite or different, the game will go on miserably, and the society must be at all times in the highest degree of disorder.”
Thus, we see how Smith’s argument is comparative. To quote Klein:
"Hewing to the liberty principle generally works out better than not doing so—in this respect, Arrow, Stiglitz, and Hahn do disfigure Smith when they identify the invisible hand with some rarified perfection. We need not rehearse Smith on the ignorance, folly, and presumption of political power, on the corruption and pathology of political ecology….Smith sees the liberty principle as a moral, cultural, and political focal point, a worthy and workable principle in the otherwise dreadful fog of interventionism."
To think that Adam Smith, the renowned absent minded professor, hid a little “invisible” secret in his tomes is indeed the ultimate irony. As Klein concludes, “That the phrase appears close to the center, and but once, in TMS and in WN might be taken as evidence that Smith did intend for us to take up the phrase.”
I find Professor Klein’s story compelling, and have enjoyed showing copies of Smith’s works with a bookmark in the key passages, to students, faculty and interested friends. It has, in the words of Robert Nozick, “a certain lovely quality.”
Will the Real Adam Smith Please Stand Up: My Own Odyssey
In this paper, I’ve discussed the controversies surrounding Adam Smith and the meaning and significance of his invisible hand. As an economist sympathetic with the Austrian school, I myself have gone through an odyssey in my attitude toward Adam Smith. When I first started writing my history, The Making of Modern Economics in the late 1990s, I was still quite infatuated with everything Rothbardian, including his critique of Adam Smith. In fact, I was the one who commissioned Murray Rothbard to write his history of thought in 1980, and like everyone else, was surprised by his attack on Adam Smith. It was a shocking indictment of the Scottish philosopher celebrated by almost all free-market economists, including Rothbard’s teacher Ludwig von Mises.
At that time, I had to decide, who was right, Rothbard or Mises? There was only one way to find out. I decided to read the entire 1,000-page Wealth of Nations, page by page and cover to cover, and come to my own conclusion. Two months later, I put the book down and said to myself: “Murray Rothbard is wrong and Mises is right.” Adam Smith has written a grand defense of the invisible hand and economic liberalism. I followed up by reading Smith’s other great work, The Theory of Moral Sentiments (1759), which reinforced my positive view of Smith.
My change of heart completely transformed my history. Suddenly, The Making of Modern Economics had a plot, a heroic figure, and a bold storyline. Adam Smith and his “system of natural liberty” became the focal point from which all economists could be judged, either adding to or detracting from his system. After coming under attack by socialists, Marxists, and Keynesians, the invisible-hand model of Adam Smith was often left for dead but inevitably was revived, revised, and improved upon by the French, Austrian, British, and Chicago schools, and ultimately triumphed with the collapse of the socialist central-planning model in the early 1990s (although it is again being tested by the ongoing financial crisis).
Granted, Smith made numerous mistakes in his classic work, such as his crude labor theory of value, his attack on landlords, and his failure to recognize marginal subjective values, but French, British, Austrian and Chicago economists have done a great job improving upon the House that Adam Smith Built without destroying his fundamental system of natural liberty, and his policy prescriptions, which were largely libertarian (the classical model of limited government, free trade, balanced budgets, and sound money).
I noticed that Murray Rothbard largely ignored the strong libertarian language found in The Wealth of Nations and overemphasized marginal statements by Smith that were pro-government or anti-market. His attack on Smith reminds me of free-market critics who take the same parenthetical statements in Smith’s writings and make him into some kind of social democrat. Both are wrong.
Here are just a few samples of Smith’s strong libertarian voice in The Wealth of Nations (Modern Edition, 1965):
“Every man, as long as he does not violate the laws of justice, is left perfectly free to pursue his own interest in his own way, and to bring both his industry and capital into competition with those of any other man, or order of men.” (p. 651, emphasis added).
“To prohibit a great people…from making all that they can of every part of their own produce, or from employing their stock and industry in the way that they judge most advantageous to themselves, is a manifest violation of the most sacred rights of mankind.” (p. 549)
“Little else is requisite to carry a state to the highest degree of opulence from the lowest barbarism but peace, easy taxes, and a tolerable administration of justice; all the rest being brought about by the natural course of things. All governments which thwart the natural course are unnatural, and to support themselves, are obliged to be oppressive and tyrannical.”
In sum, Mises, Hayek, Friedman and Stigler all had the right attitude when it came to Adam Smith. He established the “keystone” of the market economy.
George Stigler, “The Successes and Failures of Professor Smith,” Journal of Political Economy 84:6 (December, 1976), p. 1201. Emphasis added.
Milton Friedman, quoted in Fred R. Glahe, ed., Adam Smith and the Wealth of Nations: 1776-1976 Bicentennial Essays (Colorado Associated University Press, 1978), p. 7.
Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations (Liberty Fund, 1981), p. 456.
Milton and Rose Friedman, Free to Choose (Harcourt Brace Jovanovich, 1980), pp. 13-14.
See chapter 9, “Faith and Reason in Capitalism,” by Mark Skousen, Vienna and Chicago, Friends or Foes? (Regnery, 2005) for a variety of comments, both positive and negative, about the invisible hand.
George J. Stigler, “The Successes and Failures of Professor Smith,” Journal of Political Economy 84:6 (December, 1976), p. 1201. See Stigler’s quotation at the beginning of this paper.
George J. Stigler, “Introduction,” Selections from the Wealth of Nations (Appleton-Century-Crofts, 1957), pp. vii-viii.
Ludwig von Mises, “Why Read Adam Smith Today,” in Adam Smith, The Wealth of Nations (Regnery, 1998), pp. xi-xiii.
Friedrich Hayek, The Trend of Economic Thinking: Essays on Political Economists and Economic History, The Collected Works of F. A. Hayek (University of Chicago Press, 1991), pp. 119, 121.
Iain McLean, Adam Smith: Radical and Egalitarian (Edinburgh University Press, 2006), pp. 91, 120, passim, and Samuel Fleischacker, On Adam Smith’s Wealth of Nations: A Philosophical Companion (Princeton University Press, 2005).
See especially Spencer J. Pack, Capitalism as a Moral System: Adam Smith’s Critique of Free Market Economy (Edward Elgar, 1991).
William J. Baumol and Alan S. Blinder, Economics: Principles and Policies, 8th ed. (Harcourt College Publishers, 2001), p. 214.
Frank Hahn, “Reflections on the Invisible Hand,” Lloyds Bank Review (April, 1982), pp. 1, 4, 8.
See Emma Rothschild, Economic Sentiments: Adam Smith, Condorcet, and the Enlightenment (Harvard University Press, 2001), p. 119; John Roemer, Free to Lose (Harvard University Press, 1988), p. 2-3; and Frank Hahn, “Reflections on the Invisible Hand,” Lloyds Bank Review (April, 1982).
Alan B. Krueger, “Introduction,” The Wealth of Nations (Bantam, 2003), p. xxiii. Krueger’s recommended reading list includes works of Robert Heilbroner and Emma Rothschild, and a brief reference to an article by George Stigler.
Murray N. Rothbard, Economic Thought Before Adam Smith (Edward Elgar, 1995), pp. 435-436, 448, 451, 452, and 458. Even radical economist Spencer Pack considers his attack on Smith “unduly severe” and “one of the harshest attacks ever made upon Smith’s work by a non-Marxist (or indeed any) economist.” See “Murray Rothbard’s Adam Smith,” Quarterly Journal of Austrian Economics 1:1 (1998), pp. 73-79.
Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations (Liberty Fund, 1981), p. 456.
Iain McLean, Adam Smith: Radical and Egalitarian, pp. 53, 82.
Milton Friedman, “Adam Smith’s Relevance for 1976,” in Fred R. Glahe, ed., Adam Smith and the Wealth of Nations: 1776-1976 Bicentennial Essays (Colorado Associated University Press, 1978), p. 17.
George Stigler, “Successes and Failures of Professor Smith,” p. 1201.
Mark Skousen, The Making of Modern Economics, 2nd ed. (ME Sharpe, 2009), p. 219.
Gavin Kennedy, “Adam Smith and the Invisible Hand: From Metaphor to Myth,” Econ Journal Watch 6:2 (2009), p. 240.
Emma Rothschild, Economic Sentiments: Adam Smith, Condorcet, and the Enlightenment (Harvard University Press, 2001), pp. 116, 137.
Peter Minowitz, “Adam Smith’s Invisible Hands,” Econ Journal Watch 1:3 (2004), p. 404.
Daniel B. Klein, “In Adam Smith’s Invisible Hands: Comment on Gavin Kennedy,” Econ Journal Watch 6:2 (May 2009), pp. 264-279.
Daniel B. Klein and Brandon Lucas, “In a Word or Two, Placed in the Middle: The Invisible Hand in Smith’s Tomes,” Economic Affairs (Institute of Economic Affairs, March 2011), pp. 43, 50.
The modern Glasgow edition published by Oxford University Press and reprinted by Liberty Fund does not include the language essay, so “led by an invisible hand” is not dead-center. However, The Theory of Moral Sentiments published by Richard Griffin & Co. in 1854, and reprinted by Prometheus Books in 2000, does contain the language essay, and “invisible hand” appears on page 264, within five pages of the center (269).
Adam Smith, Lectures on Rhetoric and Belles Lettres (Liberty Fund, 1985), p. 95.
Gavin Kennedy, “Adam Smith and the Role of the Metaphor of an Invisible Hand,” Economic Affairs (March 2011), p. 53. See also Gavin Kennedy, “Adam Smith and the Invisible Hand: From Metaphor to Myth,” Econ Journal Watch 6:2 (May 2009), pp. 239-263.
Gavin Kennedy, “Adam Smith and the Role of the Metaphor of an Invisible Hand,” op. cit., p. 54.
Craig Smith, “A Comment on the Centrality of the Invisible Hand,” Economic Affairs (March 2011), p. 58.
Craig Smith, op. cit., p. 59. Ryan Hanley (Marquette University) expresses “considerable uneasiness” about Klein’s thesis and is “not yet convinced.” See “Another Comment on the Centrality of the Invisible Hand,” Economic Affairs (March 2011), pp. 60-61.
Adam Smith, The Wealth of Nations, p. 341.
Adam Smith, The Theory of Moral Sentiments (Liberty Fund, 1982), p. 36. For a discussion of the invisible hand as a religious symbol of the “invisible God,” and the four levels of faith in capitalism, see chapter 9 in Mark Skousen, Vienna and Chicago, Friends or Foes? (Capital Press, 2005).
Adam Smith, The Theory of Moral Sentiments, p. 234.
Daniel B. Klein, “In Adam Smith’s Invisible Hands: Comment on Gavin Kennedy,” Econ Journal Watch 6:2 (May 2009), p. 275.
Ibid., p. 277.
Robert Nozick, Anarchy, State, and Utopia (Basil Blackwell, 1974), p. 18.
Dugald Stewart, Biographical Memoirs of Adam Smith (1793).
During the Cold War, Massachusetts Institute of Technology scientists produced ideas and inventions, such as distant early-warning radar and satellite-tracking systems, to help the United States prevail over the Soviet Union. Today, MIT is working with the Russians, not against them.
Just 12 miles from the Kremlin, rising from a field once used for agricultural experiments, the Skolkovo Institute of Science and Technology will have a curriculum designed by MIT and financial backing from Russia’s government.
The school — nicknamed Skoltech — will offer graduate degrees only and teach in English, serving as the centerpiece of a $2.7 billion innovation hub. Russian officials say they aim to create tech start-ups and lure corporate research laboratories with tax breaks and relaxed visas and customs regulations. IBM Corp., Microsoft Corp., and Siemens AG have already agreed to locate there.
‘‘Russia has beautiful ideas but very poor commercialization,’’ said Viktor Vekselberg, the billionaire president of the Skolkovo Foundation, which is developing Skoltech.
Vekselberg earned a PhD in mathematics at the USSR Academy of Sciences before amassing a fortune in the oil and energy sectors that the Bloomberg Billionaires Index valued at $14.6 billion on April 15. ‘‘We are very concerned that Russia today is not able to create a serious pipeline of innovative projects,’’ he said.
The foundation says it has recruited 52 venture capital firms to the Skolkovo Innovation Centre, founded in 2010.
MIT, which already has programs in Abu Dhabi, China, Portugal, and Singapore, sees advantages as well. Skolkovo will give it access to the most promising scientists in a country where it has had little contact, said Leo Rafael Reif, MIT’s president.
‘‘There is a tremendous amount of talent there,’’ Reif said. ‘‘It is really an incubator.’’
MIT is one of scores of US schools expanding around the globe. There were 83 international branch campuses of US universities as of March, not including partnerships such as MIT and Skolkovo’s, according to GlobalHigherEd.org, a website run by researchers at the State University of New York. That number has climbed from 10 in 1990, said Jason Lane, a SUNY Albany professor.
The result may be a higher-education bubble — with too few qualified or interested students — in regions such as the Middle East and China, said Philip G. Altbach, director of the Center for International Higher Education at Boston College. He cited Michigan State University, which closed its undergraduate campus in Dubai in 2010 after failing to attract enough students. (It reopened with graduate programs in 2011.)
‘‘The US universities involved have come out with significant egg on their face,’’ he said.
There can also be pitfalls in countries that have different concepts of political and academic freedom. Yale University’s joint venture with the National University of Singapore, where classes are scheduled to start in August, led angry Yale faculty to pass a resolution urging the school to respect civil liberties.
Singapore’s government censors the media and uses the courts to silence criticism of the regime, according to Human Rights Watch. Yale-NUS has adopted policies of non-discrimination consistent with Yale’s and will protect freedom of expression, the college said in an e-mailed statement.
At Johns Hopkins University’s 27-year-old venture with Nanjing University, police monitor Internet use, said Jan Kiely, a former co-director of the campus.
Under President Vladimir Putin, Russia’s government has cracked down on critics, ranging from newspaper owner Alexander Lebedev to the punk rock group Pussy Riot.
Professors are also in jeopardy. Since 1998, more than a dozen Russian scientists have been arrested, most of them engaged in collaborations with foreign academics, said Igor Sutyagin, a London-based defense analyst and former researcher at Moscow’s Institute for the USA and Canada Studies who himself was jailed for 11 years. That should give MIT pause, he said.
‘‘They should know they risk their own people and they put in danger the Russians who work with them,’’ said Sutyagin, who said he was arrested for passing material about the Russian military that was in the public domain to a British firm that was accused of being a cover for US intelligence services.
He eventually signed a confession in order to be included in an exchange of spies and was released in 2010.
Vladimir Kara-Murza, a member of the Coordinating Council of the Russian Opposition and a Putin critic, said Skoltech merely serves as propaganda.
‘‘MIT is lending legitimacy and a cloud of respectability to an undemocratic regime,’’ Kara-Murza said. ‘‘They should fully understand what they are supporting and what they are doing.’’
While Russia still produces skilled graduates in math and science, its reputation for world-class research is poor. Until recently, faculty were rewarded for publishing in journals sponsored by their universities instead of international peer-reviewed publications, said Harley Balzer, a Georgetown University professor.
MIT’s involvement helped convince Yuri Shprits, a Russian-born geophysics researcher, to leave UCLA for a job at Skoltech.
‘‘The fact that MIT was behind this, that I found MIT faculty actively involved, is what gave me confidence,’’ said Shprits, a naturalized US citizen who studies the effects of the earth’s radiation belts on satellites. ‘‘It’s clear that it will be a top-ranked graduate school in Russia and we will be able to select the best graduates from Russian universities.’’
Since its founding in 1861, MIT faculty, staff, and alumni have won 78 Nobel Prizes. It’s one of the world’s richest universities, with an endowment worth more than $10 billion as of June 30.
When fully staffed, the school in Russia will teach more than 1,200 graduate students. A dozen of Skoltech’s first 20 students are spending a year at MIT in Cambridge while they wait for the Moscow campus to be completed. | <urn:uuid:0e23e569-9e79-4534-ab87-2eddb667b1b4> | CC-MAIN-2015-18 | http://www.bostonglobe.com/business/2013/05/05/mit-moscow-creates-sputnik-moments-with-putin/itaVybGIH2Cf8l3dOmaSAL/story.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246661916.33/warc/CC-MAIN-20150417045741-00300-ip-10-235-10-82.ec2.internal.warc.gz | en | 0.960057 | 1,344 | 2.65625 | 3 |
CamelCase (or camel case), frequently applied to the term itself and written CamelCase, is the capitalization of more than one word within a compound word or multi-word symbolic name. This is also known as BumpyCase or WikiWord.
Originating with a naming convention in the C programming language, it spread via hacker culture into mainstream use and became fashionable for corporate trade names during the popularization of the personal computer in the 1980s and 1990s. It is the file-naming convention on Amiga computers. In the original version of the WikiWiki software, CamelCase is used for automatically making links.
The following do not strictly qualify as bicapitalization, but are CamelCase for the purposes of the original version of the WikiWiki software:
- AlabamA (CamelCased words need at least two components)
- aNaRcHy cAsE
C and many later programming languages are case-sensitive and allow symbolic names of arbitrary length, while not allowing those names to contain whitespace (spaces, tabs, etc.), which can make longer symbols harder to read. One solution to this problem is to replace spaces with underscores, but these are difficult to type because of their location on the keyboard. The easier-to-type hyphen is already an operator in most programming languages.
Another solution is to use lower-case letters for most of the symbol, with upper-case letters at the start of each separate word. For instance, a symbol named "the colour of the bar" would typically be camel-cased as "TheColourOfTheBar"; in lower camel case it would be "theColourOfTheBar". Coding standards, which many software developers adhere to, outline preferred uses of CamelCase for method names, fields, and properties.
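As an illustration of the convention described above (the code is mine, not the article's; the function name `to_camel` is hypothetical), a phrase can be joined into upper or lower camel case:

```python
import re

def to_camel(phrase: str, upper_first: bool = True) -> str:
    """Join the words of a phrase into a camel-cased identifier."""
    words = re.split(r"\s+", phrase.strip())
    head, *tail = words
    # Upper camel case capitalizes the first word; lower camel case does not.
    head = head.capitalize() if upper_first else head.lower()
    return head + "".join(w.capitalize() for w in tail)

print(to_camel("the colour of the bar"))         # TheColourOfTheBar
print(to_camel("the colour of the bar", False))  # theColourOfTheBar
```

The two variants differ only in whether the first word is capitalized.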
A list of CamelCase synonyms is maintained at http://c2.com/cgi/wiki?CamelCase.
CamelCase and Wiki
CamelCase is also the original wiki convention for creating hyperlinks, with the additional requirement that each capital is followed by a lower-case letter; hence AlabamA and ABc will not be links. See http://c2.com/cgi/wiki?WikiCase.
CamelCasedTerms are not useful for search-engine spidering and indexing, as search engines cannot rank links based on the individual words in the URL describing that link. Having a word in the URL generally rates a page as related to that word. Separating words out individually (by placing hyphens between words in local paths or in DNS names; the underscore is not a valid character for DNS names) addresses this. Removing case sensitivity from links also allows the use of tools such as Apache's mod_speling, making it easier for humans to guess URLs.
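The hyphen-separated form mentioned above can be derived mechanically from a CamelCased term. The following sketch (mine, not the article's) inserts a hyphen at each lower-to-upper case boundary and lower-cases the result:

```python
import re

def camel_to_slug(name: str) -> str:
    """Split a CamelCased term into hyphen-separated lower-case words."""
    # Insert a hyphen wherever a lower-case letter is followed by a capital.
    return re.sub(r"(?<=[a-z])(?=[A-Z])", "-", name).lower()

print(camel_to_slug("CamelCase"))          # camel-case
print(camel_to_slug("TheColourOfTheBar"))  # the-colour-of-the-bar
```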
CamelCase and Wikipedia
Wikipedia started out requiring CamelCase for links, but soon enabled and recommended free links, created by putting [[square brackets]] around phrases to be linked; a year later the automatic linking of CamelCase was disabled:
On January 27, 2001, Clifford Adams, author of the original UseMod software used for Wikipedia, posted the following to the Wikipedia mailing list:
- I've done a lot of thinking about WikiLinking recently, and I'm not sure that the WikiName (capital letters) convention is a good fit for the encyclopedia. The AccidentalLinking is a nice feature, but it has a price in harder-to-read links and confusing conventions.
- For instance, when I recently wanted to link to "democracy", I first did a search to see if someone else had linked the name (I thought someone might have already used "DemoCracy"). I found that nobody else had linked that name, so I made the link "DemocracY" (to follow the new convention of last-letter-capitalized). In short, it took me far more time to make that link than it would have to just type [[democracy]]. Someone unfamiliar with the local wiki conventions might guess otherwise on another page and link to a separate "DemoCracy" or even "DeMocracy". Ick.
- To make a longish story short, I added code (about 150 new lines of Perl) to my development copy to allow (site-optional) "Free" linking within [[double brackets]]. You can use spaces, numbers, commas, dashes, and the period character in these kinds of links. Valid link names include [[George W. Bush]], [[China-Soviet Relations]], [[Physics]], [[music]], and [[Year 2000 bug]]. User names can also use these new links. Internally and within URLs the spaces are replaced with _ (underline) characters, which are translated back to spaces for display purposes.
Later, with the introduction of the new Wikipedia software in January 2002, support for CamelCase links was dropped altogether. By this time almost all CamelCase links in articles had been removed anyway. CamelCase can still be found in the non-encyclopedia parts of Wikipedia, such as Talk pages, where the links have not been updated. Many Wikipedians have CamelCased user names, either as a leftover from the early days, or carried over from other wikis. | <urn:uuid:4786df3c-0192-4b74-9fe7-7b2ab6d61be3> | CC-MAIN-2018-47 | http://www.fact-index.com/c/ca/camelcase.html | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741219.9/warc/CC-MAIN-20181113041552-20181113062854-00005.warc.gz | en | 0.927535 | 1,104 | 3.484375 | 3 |
Technology, career pathways and the gender pay gap
Women in Science, Technology, Engineering and Mathematics (STEM)
Although the gender pay gap is closing incrementally, pay parity between men and women in the UK is not forecast to be achieved until 2069. Significantly, the gap in starting salary between men and women who have studied STEM subjects and go on to take jobs in those spheres is smaller than in any other subjects studied.
Technology and the changing jobs market
Our analysis of employment data from the last 15 years, alongside nearly three million university records, finds that women make up just 14.4 per cent of individuals working in STEM occupations in the UK, and that as many as 70 per cent of women with STEM qualifications are not working in relevant industries. Women are more likely than men to pursue studies - and subsequently take up employment - in caring or teaching roles.
Although these roles are less well paid than technical and commercial roles, they do place greater importance on cognitive and social skills, which we know from other Deloitte research, are essential for workers to remain adaptable and employable in the future.
Our key findings
- Although the gender pay gap is closing steadily, we forecast that at the current rate of convergence, pay parity will not be achieved until 2069
- Overall, almost as many girls as boys sat GCSEs in STEM subjects this year, outperforming them in every subject except maths
- At A-Level in 2016, 40 per cent more boys than girls took STEM subjects. However, girls continued to outperform boys in every STEM subject
- Many top-paid jobs increasingly call for ability in STEM subjects
- Research shows that in the past 15 years, both men and women have benefited from technology-driven changes in the labour market. Moreover, the impact of technology on jobs undertaken by men and women is fairly balanced.
This clear divide in skills between the genders needs to be addressed so that all students - whether male or female, and at all stages of their education - are provided with an equal foundation upon which they can build the career of their choice.
Key actions for businesses
Tackling the gender pay gap, and its root causes, depends upon strengthening the engagement that already exists between businesses, educators and policymakers. In particular, businesses have to take a greater role in helping to reduce the engrained differences in the skills that women gain and develop.
Our recommendations for businesses:
- Provide educators and policymakers with practical careers insight
- Provide more support for women returning to work
- Publish detailed information on the gender pay gap. | <urn:uuid:4e21cfd8-18a1-401d-91ac-99b917c33494> | CC-MAIN-2017-13 | https://www2.deloitte.com/uk/en/pages/growth/articles/technology-career-pathways-gender-pay-gap.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190234.0/warc/CC-MAIN-20170322212950-00152-ip-10-233-31-227.ec2.internal.warc.gz | en | 0.96064 | 519 | 3.03125 | 3 |
1. An overview of the dental pulp - Australian Dental Association
consequence of pulp disease is oral sepsis, which can be ..... of the pulp space since bacteria will have a pathway by which they ..... In: Cohen S, Burns RC, eds.
2. Identify and Define All Diagnostic Terms for Pulpal - American ...
Hamilton, Ontario, Canada, 2008; Pathways of the Pulp, 9th ed, Cohen S, Hargreaves ..... establishes a pathway for drainage of pulpal inflammatory exudates.
3. The C-shaped Root Canal Configuration: A Review - College of ...
Instead of having several discrete orifices, the pulp chamber of the. C-shaped canal is a single .... appeared to continue on its own pathway to the apex (Fig. 4C ).
4. Root canal morphology and its relationship to endodontic procedures
shaping and cleaning of all pulp spaces and its complete obturation with an inert ..... pathway of the MB-2 canal in maxillary first and second molars using the ...
5. Endodontic access preparation: the tools for success - Advanced ...
Clinicians typically access the pulp chamber through an existing restoration if it is judged ... Note the pathway of the access through the crown to the underlying root canal system. ..... Pathways of the Pulp, 8th ed., Cohen S, Burns RC, eds., St.
6. The ProTaper advantage - Advanced Endodontics
so that there is a straightline pathway to the orifice(s). The pulp chamber should be ..... In Cohen S, Burns RC, editors: Pathways of the Pulp, pp. 231-291, 8th ed., ...
7. Journal of International Dental and Medical Research
Dec 5, 2011 ... conditions of the pulp. Dental Pract Dent Rec 1970; 20: 333-36. 8. Cohen S, Hagreaves KM. Pathway of the pulp, 9th Ed. St Louis;. Mosby 2006 ...
8. Molecular Markers of Dental Pulp Tissue during Orthodontic Tooth ...
Jan 30, 2012 ... appliance) dental pulp tissue by using GeneFishing technique as compared to lower first ... KEGG pathway database homepage (http://www.genome.jp/ .... S. Cohen and K. M. Hagreaves, Pathways of the Pulp, Mosby, ... | <urn:uuid:bb1cd98a-1871-4e27-94f3-cb07f7272ae1> | CC-MAIN-2014-15 | http://findpdf.net/documents/cohens-pathway-of-the-pulp-pathway-of-the-pulp.html | s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00398-ip-10-147-4-33.ec2.internal.warc.gz | en | 0.798579 | 542 | 2.65625 | 3 |
Write an 8-page paper (see order instructions). Every day, police officers face challenges and situations that require them to decide individually how to respond, without additional advice or immediate supervision; this is the heart of police discretion. In law enforcement, Hassell and Archbold (2010) argue that the police officer has the mandate to make judgments or reasonable decisions within certain legal bounds. Police officers face a wide range of options, especially when confronted by dangerous situations. Some of their decisions have been misconstrued as misconduct; a good example is the use of excessive force. One factor complicating police discretion is the lack of agreement on exactly which criminal behaviors call for it, so there is no clear legal definition of the actions requiring discretion. However, several control mechanisms exist: internal and external departmental controls, control by citizens, legislative controls, and control by the courts.
A study by Palmiotto and Unnithan (2011) posits that increasing attention is being paid to preparing police officers for the appropriate use of discretion. This preparation begins with training at the academy and continues into field practice. According to the trainings, the use of discretion is critical both after an event and on a regular basis.
We are an art-obsessed species: It exists in every human culture, and most people devote some part of their lives to making or experiencing some form of art. But why do pictures, music, stories, and the like matter so much to us? Why should we put so much effort into pursuits that often have few if any obvious material rewards? Why are we able to become emotionally involved with narratives about imaginary people, and why do we seek out this form of emotional engagement?
One way to try to answer such questions is to provide an evolutionary account, in which facts about our art-related behavior are explained by the work of Darwin and his later followers. Such behaviors might be held to have provided our ancestors with reproductive advantages, or to be related in some way or other to behaviors that did.
In “The Artful Species,’’ philosopher Stephen Davies surveys and assesses a wide range of possible evolutionary accounts. One of his goals is to caution us against accepting such theories too quickly. It might seem obvious, for instance, that displays of musical skill would be advantageous in evolutionary terms, as they can help performers attract potential mates. (Why else would so many teenage males spend money on guitars?) But as Davies points out, the empirical data do not clearly support such an effect. (In the Western world, major composers have actually tended to have fewer offspring rather than more.)
Moreover, the way music has functioned in many societies bears little resemblance to the pop-star model we are familiar with. Even if, in today’s world, musicians have a reproductive advantage, it’s likely that this has not been true widely or long enough to have substantially affected human evolution.
Davies’s general tendency, then, is to rein in the more ambitious claims of theorists who think that art can be straightforwardly explained as an evolutionary adaptation: “Some, but not all, aesthetic interests and responses have biological underpinnings. To that extent those responses reflect our shared human nature. But when it comes to arguments claiming that art is an evolutionary adaptation, we should be more cautious . . . I recognize the tantalizing appeal and plausibility of claiming art as a central aspect of our common biological inheritance. But making the connection depends ultimately on a leap of faith, rather than on appeal to incontrovertible scientific fact.”
This is not to say that those who hold some other theory — that art-related behaviors are by-products, say, or that they have no evolutionary significance at all — are necessarily correct. Rather, Davies argues, we don’t yet know who is right. There just isn’t enough evidence available to settle the question.
“The Artful Species’’ is comprehensive, well-organized, and cogently argued, and, if it comes across as slightly unexciting, its caution and intellectual modesty are welcome in a body of literature that all too often encourages and rewards unrestrained speculation. Also, Davies has interesting things to say about topics that have been largely neglected (the aesthetic interest humans take in animals) and is capable of taking a fresh approach to topics that have already been discussed at great length (the nature of human beauty and attractiveness, for instance).
Michael Trimble’s “Why Humans Like to Cry’’ takes a narrower focus. Trimble is interested, specifically, in the question of emotional crying, especially displays provoked by exposure to works of art. Some other animals shed tears when physically injured, but emotional tears seem to be unique to us, and it is not immediately clear what, if any, biological function they might serve.
The early chapters of the book, in which Trimble lays out the issues, are fairly interesting. He draws on a number of studies regarding the circumstances under which people cry, and briefly explores a few philosophical accounts of tragedy and related matters.
From Nietzsche he borrows the notion of the interplay of the artistic duality represented by the Greek gods Apollo (reason, epic poetry, sculpture) and Dionysus (dance, lyric poetry, melody): “With the added twist of neuroscience, these much discussed images may be seen in something of a new light, as metaphors for psychological processes based in neuroanatomical and evolutionary principles.”
He also agrees with Nietzsche on the special significance of music (“One conclusion,” he writes, “is that music, above all the arts, is simply special, with effects on us above and beyond the other arts”), while taking issue with Aristotle’s idea that the effect of tragedy is to produce a catharsis in the audience, a kind of purging of the emotions.
Not all of these suggestions, however, are entirely clear; and unfortunately, the nature of Trimble’s account becomes more obscure, not less, as the book progresses. In particular, it never becomes clear what the “added twist of neuroscience” amounts to. Neuroscientists can identify which particular parts of the brain are involved in which particular processes or reactions, but while this information may provide an immediate, mechanical account of what happens when someone cries, it generally leaves the truly interesting questions — why did humans evolve to have emotional reactions in these circumstances, and why this particular emotional reaction? — mostly untouched.
Trimble does make a few gestures in the direction of explanation, involving altruism, mirror neurons, and the like. But in the end it is not clear how to fit these puzzle pieces together, or why the human ability to feel compassion for other (real) people should extend to fictional characters — let alone why music should provoke tears so effectively — or why such tears should be pleasurable. “Why Humans Like To Cry” raises a fascinating question, but Trimble’s book might lead many readers to wonder whether contemporary science is anywhere near being able to answer it. | <urn:uuid:098af314-e9fd-4a67-838c-437f5c96a258> | CC-MAIN-2014-52 | http://www.boston.com/ae/books/2013/01/19/review-why-humans-love-cry-michael-trimble-and-the-artful-species-stephen-davies/1wE2yu0QDGcNScocfQhZmO/story.html | s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768441.42/warc/CC-MAIN-20141217075248-00143-ip-10-231-17-201.ec2.internal.warc.gz | en | 0.954436 | 1,213 | 2.671875 | 3 |
10 Crazy Google X Projects That Could Revolutionize The WorldEntertainment, Lists, Other, Science, Shocking, Technology, Weird
Often compared to the infamous fictional company Skynet, Google is currently in control of our digital lives due to its numerous free products and services, which enrich our online experience. However, what most don’t know is that Google has a rather secretive division called Google X, where they invest in strange and bizarre concepts, which are a true hell for investors, but are able to revolutionize the world we currently live in. From simple notions such as drone delivery and self-driving cars to serious scientific breakthroughs and innovations, including miniature chips in contact lenses, here are the 10 most crazy Google X Projects that could eventually revolutionize the world.
Similar to the drone delivery service offered by Amazon, Google has its own drone delivery system developed through Google X and the so-called Project Wing. While there are still a number of issues with regard to safety and feasibility, Google could revolutionize the eCommerce world once again.
In the next five years Google could present the world with free internet all over the globe via their Google X project Loon. The plan is to put hundreds upon hundreds of balloons in the stratosphere, which will circulate the globe via natural air currents. With testing already underway the perspectives of Project Loon are quite high.
Google Contact Lenses
Google is aiming to make a true revolution in the health industry as well. One of their most famous projects through Google X with regard to health is their contact lenses, which feature miniature chips inside that are able to measure glucose and could prove extremely helpful to diabetes patients.
Lift Labs are most well-known for their innovative product called Liftware, which is basically a spoon that counteracts the effects of Parkinson’s disease and thus helps patients eat more easily. Google acquired the company through Google X and we can only imagine what their combined powers and minds can achieve.
Google X Nanoparticles
Google also makes significant progress in the field of nanoparticles. While there isn’t a lot of public information with regard to what they are actually trying to invent, one of the uses they have in mind for the nanoparticles is to predict disease and improve the human biology and health.
The Google Neural Network
One of the most ambitious projects of Google X is the creation of a computer neural network that is able to act in a similar way to a brain neural network. Currently through machine learning Google has managed to reach a number of milestones and the project is already underway to a point that it is integrated in Google Translate, Google Speech Recognition and Google Image Search.
The Google X Tech Division
You might consider Google Glass to be a total failure, but the truth is that the innovative project was part of Google X and their tech division. Who knows what crazy concept they are currently trying out!
The Self-Driving Car
The self-driving cars currently being tested on the streets are yet another project which is a part of the long list of Google X project. Made public and being showcased as the technology of the future, self-driving cars may be a true revolution, which started in the secretive labs of Google.
Google has the tendency to acquire a lot of small, but really innovative companies and take them under the wing of Google X. Makani Power is one of them. The company used to build wind turbines and integrate them with kites in order to make wind turbines airborne.
The Smartphone Personal Doctor
Making innovations in health analyzing, Google is trying to turn your smartphone into a true personal doctor. While little is known with regards to the project, it is called Baseline Study and it is under the wing of Google X. | <urn:uuid:23e40ad8-075a-4e54-9097-ab0cbb79535f> | CC-MAIN-2022-27 | https://www.lolwot.com/10-crazy-google-x-projects-that-could-revolutionize-the-world/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103331729.20/warc/CC-MAIN-20220627103810-20220627133810-00379.warc.gz | en | 0.954833 | 779 | 2.546875 | 3 |
If you have loose, watery stools more than three times a day, you have diarrhea, as defined by MedlinePlus, a service of the U.S. National Library of Medicine and National Institutes of Health. Treatment of diarrhea varies and depends on the cause. You can alleviate some of your symptoms, and help prevent dehydration, by altering your diet while you have diarrhea.
Fibrous foods may irritate your bowel and aggravate diarrhea. The Academy of Nutrition and Dietetics advises grain products with less than 2 grams of fiber per serving. Choose bread, bagels, crackers and pasta made from white flour. Cook vegetables thoroughly and eat no-sugar-added canned varieties of fruits for smoother digestion.
High-fat, greasy food can make diarrhea worse. Limit fats like oil, butter, cream and mayonnaise, to 8 teaspoons daily. Avoid nuts and nut butters, hot dogs, sausage, bacon, and fried chicken or fish while you have diarrhea. Choose low-fat or nonfat yogurt, plain breads and crackers, and well-cooked protein foods without added fat.
If you are lactose intolerant, limit milk products as they can exacerbate your diarrhea. Otherwise, enjoy foods with probiotics, which are beneficial bacteria that can shorten the duration and lessen the symptoms of diarrhea. Choose yogurt or kefir with live active cultures, or take a supplement.
You may need to drink more fluids to stay hydrated during bouts of diarrhea. The Academy of Nutrition and Dietetics recommends drinking at least eight to 10 8-ounce cups of caffeine-free, alcohol-free liquid, such as broth, fruit juice and water, daily. Limit beverages sweetened with high-fructose corn syrup or sorbitol. Drink electrolyte solutions to replace important nutrients lost with diarrhea, such as potassium and sodium.
Consume a variety of foods to provide adequate nutrition during mild diarrhea. Choose a low-fiber cereal with soy milk and half a banana for breakfast. Consume a light snack of decaffeinated tea and graham crackers. For lunch, eat chicken and rice soup with cooked carrots, white toast with thinly-spread jelly and applesauce. Another snack could consist of crackers and fruit juice. Have a light dinner of baked chicken, mashed potatoes without skin and cooked green beans. For an evening snack, enjoy a cup of flavored yogurt. Throughout the day, consume allowed beverages to maintain hydration. | <urn:uuid:5f299346-2123-4b40-9c73-10db0d641b40> | CC-MAIN-2017-39 | http://www.livestrong.com/article/275211-food-to-calm-diarrhea/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687711.44/warc/CC-MAIN-20170921082205-20170921102205-00237.warc.gz | en | 0.915586 | 499 | 2.984375 | 3 |
For millennia, mirrors have not only served as objects for personal grooming, but also for magic, light, weapons, and even prototype cameras. We’ve come a long way from that first mirror, a still puddle or pool with a dark bottom. Today’s man-made mirrors maintain reflective properties but add more consistent clarity and, unlike a pool of water, they are portable. Over the years, mirrors became prized objects for refreshing our personal appearance.
In the 3rd century BCE, Archimedes of Syracuse weaponized the mirror. He is said to have turned mirrors into heat rays, redirecting the sun to the tarred, flammable wood of Carthaginian ships.
Mirrors are also an important part of Greek and Roman literature. Greek mythology tells of Nemeses luring a proud Narcissus to a pool, knowing the latter would fall in love with his own reflection. The story gave posterity two new words. Later, the Roman writer Ovid added to the story, placing Narcissus at the same pool, “well-deep and silver clear,” disregarding the calls of the beautiful nymph, Echo, standing nearby, unable to turn away from his own reflection1.
Carrying the association between pride and mirrors into the Middle Ages, the Church looked down upon this all-too human activity. Mirrors encouraged self-admiration, luring the most pious into committing one of the seven deadly sins.
This possibility became graver when someone discovered the idea of pouring a molten metal alloy onto the back of glass. First, this looking glass was convex, resulting in distorted images. But when this was done to flat glass, the proportions were right, and the blurry reflection from stone mirrors was wiped into focus.
The inspiration for this piece came after Poor Yorick staff viewed the short film, More Than Just a Mirror, a documentary about the Didcot Mirror, by filmmaker Sharon Woodward. This story is in fact inspired by some recent discoveries made in England. The Didcot Mirror precedes the Middle Ages, and comes long before mirrors of glass. As Dr. Peter Northover points out in the video, the Didcot mirror is made of bronze, decorated on one side and polished on the other to render a shiny, reflective surface2. Although still a luxury item, these types of bronze mirrors were the most common of the day. It would take 1,500 years for the clear glass mirrors to become every day articles, hung on our bathroom walls and stuffed into our purses.
By the time of the Romans, there were organized foundries making bronze objects, including mirrors. According to the British Museum, there were many mirrors of this type at this time in England. The Celts who inhabited the area had developed a particular style of decoration. “Decorated mirrors of this type are uniquely British, very few are made on the continent3.” They are commonly associated with the La Tene, or Celts who, before and during the Roman occupation, created beautiful pieces of art.
Poor Yorick reached out to the filmmaker of More Than Just a Mirror, Ms. Woodward, and the Curator of Archeology at Oxfordshire Museums Services, Mr. David Moon. That interview is coming soon!
1. Horace Gregory, trans., Ovid: The Metamorphoses. (New York: Viking, 1958), 94.
2. Wooodward, Sharon. More Than Just A Mirror. Video. Woodward Media. https://vimeo.com/142651625
3. “Desborough Mirror,” The British Museum, accessed 19 November 2015. http://www.britishmuseum.org/research/collection_online/collection_object_details.aspx?objectId=828309&partId=1&searchText=bronze+mirror+celtic&page=1 | <urn:uuid:08af5358-f59b-437a-994e-765de71c899f> | CC-MAIN-2022-27 | http://pooryorickjournal.com/the-mirrors-eccentric-history/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103821173.44/warc/CC-MAIN-20220630122857-20220630152857-00548.warc.gz | en | 0.947487 | 814 | 3.1875 | 3 |
Statistics for Criminology and Criminal Justice
The topics and presentation style of Statistics for Criminology and Criminal Justice are targeted to students who have a basic background in algebra but who have little or no exposure to the study of statistics. The content is presented in a sequential fashion. It begins with descriptive statistics, moves into probability and distributions, and ends with bivariate hypothesis testing and an introduction to regression. Emphasis is placed on balancing thoroughness with ease of understanding and applications in order to show students the importance of statistics in the practice and study of criminal justice and criminology.
- Paperback | 344 pages
- 185.42 x 228.6 x 22.86mm | 566.99g
- 01 May 2012
- SAGE Publications Inc
- Thousand Oaks, United States
About Jacinta M. Gau
Jacinta M. Gau received her Ph.D. from Washington State University in 2008 and is currently Associate Professor in the Department of Criminal Justice at the University of Central Florida, where she teaches, among other topics, doctoral-level quantitative methods and undergraduate research methods. Her research is primarily in policing, with an emphasis on police-community relations, race issues, and procedural justice and police legitimacy. Her published articles have appeared in multiple journals. She is co-author of the book Key Ideas in Criminology and Criminal Justice, published by SAGE, and co-editor of Race and Justice: An International Journal, also published by SAGE.
Table of contents
About the Author Preface Acknowledgements PART I: DESCRIPTIVE STATISTICS Chapter 1. Introduction to the use of Statistics in Criminal Justice and Criminology Chapter 2. Types of Variables and Levels of Measurement Chapter 3. Organizing, Displaying, and Presenting Data Chapter 4. Measures of Central Tendency Chapter 5. Measures of Dispersion PART II. PROBABILITY AND DISTRIBUTIONS Chapter 6. Probability Chapter 7. Population, Sample, and Sampling Distributions Chapter 8. Point Estimates and Confidence Intervals PART III: HYPOTHESIS TESTING Chapter 9. Hypothesis Testing: A Conceptual Introduction Chapter 10. Hypothesis Testing with Two Categorical Variables: Chi-Square Chapter 11. Hypothesis Testing with Two Population Means or Proportions Chapter 12. Hypothesis Testing with Three or More Population Means: Analysis of Variance Chapter 13. Hypothesis Testing with Two Continuous Variables: Correlation Chapter 14. Introduction to Regression Analysis | <urn:uuid:7ef04915-4837-4203-9685-d1b5c9cba358> | CC-MAIN-2018-13 | https://www.bookdepository.com/Statistics-for-Criminology-Criminal-Justice-Jacinta-M-Gau/9781412991278 | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647498.68/warc/CC-MAIN-20180320150533-20180320170533-00224.warc.gz | en | 0.876909 | 515 | 3.09375 | 3 |
In a GPS world, do children really need to learn how to read a map?
Nothing is more refreshing than going for a walk in the great outdoors.
However, nothing can be more depressing than seeing people use their phones to ‘guide’ them.
It was recently reported that children are unable to read maps, you know the actual physical ones that blow about in the wind.
It shouldn’t come as any surprise to us that children find maps are real challenge. Map-reading is a challenge especially when combined with a compass. Get the hang of it though and using a map and compass is great fun and very rewarding.
The real issue here is the fact that phones have made life too easy.
Or have they?
They are not exactly reliable for getting us to where we need to go (sat-nav is rubbish on occasions) and besides which, phones are useless in mountainous areas – mobile coverage is patchy or non-existent (e.g. in Snowdonia).
In fact, some remote areas are a welcome relief from ‘modern life’ because they have no Wi-Fi, mobile phone signal or TV reception – great for a detox!
The point is many young people won’t be heading for the hills and mountains anyway so this won’t be a problem.
It’s doubtful that children will have even heard of Ordnance Survey and as for reading a six figure grid reference – forget it!
The problem is that loads of us have become so reliant on our phones that we use them as navigation guides to find a coffee shop or for getting around a shopping centre. Yes, smart phones are making fools of us.
Theme parks still give out maps so you can find your way around but take a look at how many people use them. There’s an app for that!
Of course children need map-reading skills because they involve a variety of skills and when your phone dies and you are miles away from civilisation then reading a map will save your life! Having a map and the skill to read it is a safety essential.
Learning how to read a map helps in critical thinking, analysis and orientation, problem-solving and fuels your spatial memory. It is an important tool for building children’s spatial reasoning skills and helping them make sense of our world. It gives them a mental map of their world too.
According to Sir Anthony Seldon, the vice-chancellor of Buckingham University, there is a risk that modern technology is “infantilising” people. During a speech at the Headmasters’ and Headmistresses Conference (HMC) he said, “Maps are not only a joy in life, but they’re also important to understand how space relates to other space.”
A survey conducted by Telenav, developers of Scout (a free satnav app) have produced an interesting infographic to highlight how digital navigation has taken over the lives of the under 25s.
This much we know – children aren’t good with maps. Many have never even held one so schools have to teach basic navigation as a way to develop character, independence and an appreciation of maths and science. Reading and drawing maps mustn’t become a lost art!
Maps don’t need batteries or a signal, they encourage engagement with our surroundings and they tell us what is around us.
Technology isn’t going to help us find the buried treasure, maps are!
Come on Geographical Association, isn’t it time you made some more noise about this issue?
The Ordnance Survey is the place to go: | <urn:uuid:dedd3b6c-d04b-4896-945a-1ef497c6ea92> | CC-MAIN-2021-10 | https://johndabell.com/2018/05/22/children-cant-read-maps/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363072.47/warc/CC-MAIN-20210301212939-20210302002939-00106.warc.gz | en | 0.953761 | 754 | 2.78125 | 3 |
How do I get my toddler to eat by himself?
How Do I Teach My Baby To Feed Himself?
- you pinch the food and let baby bring your fingers to her mouth.
- you hold the piece of food and let baby grasp from your fingers – this often elicits a pincer grasp before baby uses this fine motor skill to get food from a flat surface.
At what age should a child start eating by themselves?
You can begin introducing your baby to solids at around six months of age. By about nine to 12 months of age, your baby will show signs that they are ready to feed themselves.
What do you do when your toddler won’t eat?
If your child hasn’t eaten the food, take it away and don’t offer an alternative snack or meal. Avoid punishing your child for refusing to try new foods. This can turn tasting new foods into a negative thing. Avoid bribing your child with treats just so they’ll eat some healthy food.
Will a toddler starve themselves?
Kids won’t starve, but they will learn to be more flexible rather than go hungry. Present a variety of healthy foods — including established favorites and some new foods — to make up the menu. Your toddler may surprise you one day by eating all of them.
Should I force my toddler eat?
You can’t force your child to eat. However, you can provide nutritious foods, demonstrate healthy eating habits, and set the stage for pleasant mealtimes.
How do you make 3 year old eat by himself?
These tips may help.
- Keep expectations in check. Stasenko recommends parents remember this rule of thumb: Your toddler should eat a tablespoon of food per year of age. …
- Get the timing right. …
- Tap into that independent spirit. …
- Mix familiar with new. …
- Set the stage. …
- Be both consistent and flexible.
Should you spoon feed a 2 year old?
When will your toddler eat with a spoon? We look for toddlers to be feeding themselves with a spoon, completely independently by the age of 2. However, most kids are capable of learning much younger than that if they are given the opportunity. By one year of age, they can be proficiently and messily feeding themselves.
What age is considered a toddler?
According to the Centers for Disease Control (CDC) , kids between the ages of 1 and 3 are considered toddlers. If your baby has celebrated their first birthday, they’ve automatically been promoted to toddlerhood, according to some.
Can a toddler survive on just milk?
A serving of milk for a toddler is ½ cup. Try serving a small portion of milk just at meals and offering water in between. But do keep portion size in mind, especially if your toddler won’t eat anything but milk.
Is Picky Eating a sign of autism?
If you have a picky eater with autism, know that you’re not alone. A recent review of scientific studies found that children with autism are five times more likely to have mealtime challenges such as extremely narrow food selections, ritualistic eating behaviors (e.g. no foods can touch) and meal-related tantrums.
How long can a toddler go without eating?
It’s Normal for Toddlers Not to Eat
Yes, it really is normal for toddlers to have days when they are just not that hungry. Usually, a toddler’s appetite balances out over the course of a couple days. So, maybe one day they have a good appetite, but then the next 2-3 days they don’t want to eat much of anything.
How do you discipline a toddler?
- Show and tell. Teach children right from wrong with calm words and actions. …
- Set limits. Have clear and consistent rules your children can follow. …
- Give consequences. …
- Hear them out. …
- Give them your attention. …
- Catch them being good. …
- Know when not to respond. …
- Be prepared for trouble.
Is it normal for a 2 year old to stop eating?
Parents get very worried that their toddler isn’t eating and there’s something seriously wrong with them. Well, the truth is between the ages of 1 and 5 years old, it’s completely normal for a toddler’s appetite to slow down.
Why does my 3 year old not eat?
While picky eating is a normal phase for most toddlers, there’s definitely a time and place to call the doctor. Your pediatrician can rule out or diagnose possible underlying causes for your little one not eating, such as gastrointestinal disorders, swallowing problems, constipation, food sensitivities, or autism. | <urn:uuid:915916a7-e337-4a52-8cee-365b3df0f630> | CC-MAIN-2022-27 | https://thenewnormalpodcast.com/kids/best-answer-how-do-i-get-my-toddler-to-eat-on-his-own.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103037649.11/warc/CC-MAIN-20220626071255-20220626101255-00143.warc.gz | en | 0.95173 | 1,010 | 3.0625 | 3 |
If you have more than one feline in your household, there may come a time when your ears are assaulted with the awful screeching noise of two cats fighting. Most of the time, these are merely playful tussles that sound a lot worse than they actually are. The noise fighting cats make can seem like they are in a fight to the death, even if they’re really just engaged in a mock battle or trying to assert their place as Top Cat in your household. As a responsible pet owner, it’s important to be able to distinguish between a real cat fight and a “play” fight. Play fights don’t require human intervention, but all-out cat brawls do, lest one or both of your cats get injured in the fight. Learn about the body language of cats and the signals that indicate a fight is for real.
The best way to break up a cat fight is to not let one get started in the first place, and understanding a cat’s body language is a great help. The problem is that with some cats, there is a bit of a “gray area” between play and fighting. Generally speaking, growling, hissing, arched backs, flattened ears, puffed up fur and big fat tails are not good signs. Subtleties aside, if you really take the time to observe your cats’ posturing and sounds, you can usually distinguish between the mock battles and a serious fight. | <urn:uuid:bcb7b89f-5dfa-48b2-bcbc-e879be3b0e1e> | CC-MAIN-2015-35 | http://www.canidae.com/blog/category/cat-fight | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644062327.4/warc/CC-MAIN-20150827025422-00226-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.96447 | 305 | 2.96875 | 3 |
Tu Bishvat, the new year for trees in the Jewish calendar, has gained significantly in popularity over the generations. From a date of mainly fiscal significance for calculating agricultural tithes, it was expanded by the kabbalists of the early middle ages to include a fruit fest with deeply spiritual ramifications. With the advent of Zionism and establishment of Jewish agricultural settlements in Palestine, the holiday morphed yet again, becoming a day celebrating the Jewish nation’s efforts to restore forests on the denuded slopes of the Holy Land.
It seems that the first person to organize celebrations that included tree planting and communal ceremonies for students was the noted Jerusalem educator, Chaim Aryeh Zuta. Zuta’s memoirs describe his efforts to promote tree-planting on Tu Bishvat. A tree planting ceremony that he organized on Tu Bishvat 5673 (1913), in Motza in the Jerusalem corridor received extensive coverage in Ben-Yehuda’s newspaper, HaTzvi. The procession in Motza snaked “up to the vineyard of Cohen, the farmer,” where a celebratory arch had been erected to mark the site of the event.
Marching to the Music
Verbal reports aver that the scouts’ youth movement held a tree planting ceremony in the courtyard of the residence of the High Commissioner, Herbert Samuel, at Augusta Victoria, in the early 1920s. The Jewish High Commissioner was highly respected by the residents of Jerusalem, and so it is not unlikely that they would have wanted to honor him by planting trees in his courtyard, but neither Samuel’s memoirs nor those of his son mention any such an event.
In the late1920s, Tu Bishvat processions in Jerusalem became a regular institution. All decked out in greenery, elementary school children would assemble in the Lemel School yard, and march to the accompaniment of the Tachkemoni School orchestra. Sometimes the procession was led by mounted police, along with representatives of the police band. Each group marched behind its school flag, which was decorated with illustrations of flowers and baskets of fruit.
Photo: Shimon Rudy WeissensteinTree-planting procession in Tel Aviv, 15 Shevat, 1937
To the Ceremonial Arch
The connection between the Jewish National Fund (JNF) and Tu Bishvat only really began at the end of the 1920s. In 1930, the JNF distributed a booklet encouraging tree planting on the new year for trees. It included a description of the children’s procession from the Lemel school to the new neighborhood of Beit Hakerem, where that year’s planting ceremony was to take place:
The yard of the Lemel School is thronged with children, their faces lit with joy. They are crowned with garlands of spring flowers, and flags flutter in the light breeze. Suddenly a shrill whistle pierces the air. The murmurs die slowly away, and all eyes turn to the speaker, as the teacher lectures over the bobbing heads on the significance of the. Half an hour later the whole crowd begins to move – elbowing their way slowly, step by step, between the two walls of spectators lining the street. The orchestra at the head of the procession are playing a lively march, but bringing up the rear, we can hear nothing but the drum beats. All the houses are decorated with branches and flowers and greenery is everywhere, making it abundantly clear that today is a celebration of nature and of growth. The trees wave their foliage, as if to cry, “Welcome, children, young and old. Welcome! Go out and multiply the trees in the land, for salvation lies in its trees and forests.”
Finally, we leave the bustling, dust-filled city behind, and head for the hills. Sharp blasts of wind greet us, and the thousands of children burst into song. The procession stretches out, winding its way along the turns of the road like an enormous snake. And here is the new neighborhood, its red roofs jutting up like flowers from among the rocks [apparently a reference to Beit Hakerem or Kiryat Moshe]. A ceremonial arch, entwined with branches and flowers, waits silently for our arrival. We reach the street where the trees will be planted. A tender seedling stands by each shallow hole, tremblingly gently as it awaits its future. Its fate hangs in the balance – it will live, if handled with appropriate care. The hoes attack the vital clods of earth, and our hearts beat to the rhythm of the blows, as we pray: May this tree take root and flourish. The trees are planted, and the students scatter in all directions. One enjoys the treats he has brought with him, another plays, and a third runs aimlessly about, just having fun. | <urn:uuid:d38cabaa-dc51-4c32-98c7-b7d5a7722daf> | CC-MAIN-2019-26 | https://segulamag.com/en/articles/planting-seedlings/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998755.95/warc/CC-MAIN-20190618143417-20190618165417-00559.warc.gz | en | 0.963884 | 1,001 | 3 | 3 |
From 1 July 2015, Section 26 of the Counter-Terrorism and Security Act 2015 places a duty on schools to have "due regard to the need to prevent people from being drawn into terrorism”. This duty is known as the Prevent duty.
Children and young people can suffer harm when exposed to an extremist ideology which may be social, political or religious in presentation. This harm can range from a child adopting or complying with extreme views which limits their social interaction and full engagement with their education, to children being groomed for involvement in violent actions.
At Mulgrave, we recognise that extremism and radicalisation should be viewed as safeguardign concerns. We value and celebrate individual differences and believe in the freedom of speech and expression of beliefs. We create an environment that ensures both pupils and adults have the right and feel confident to speak freely and voice their opinions, in a respectful and thoughful manner.
Click on the links below for more information for parents/carers regarding radicalisation and the dangers associated with extremism: | <urn:uuid:9c36c246-d9f3-40f9-b404-453625169a47> | CC-MAIN-2021-31 | http://www.mulgraveprimary.org.uk/1108/prevent | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046156141.29/warc/CC-MAIN-20210805161906-20210805191906-00347.warc.gz | en | 0.947348 | 205 | 2.90625 | 3 |
LIMA, Feb 20 (Reuters Life!) - Diana Rivas says it only takes her a few seconds to look at a brain to know what afflicted its owner.
“This one belonged to an alcoholic...This one belonged to somebody who had Alzheimer’s disease,” Rivas said as she passed row after row of brains suspended in preserving liquid and stacked on shelves in a tiny room in central Lima.
Rivas is a neuropathologist who runs a little-known brain museum in the Peruvian capital. She claims it is the only public display of human brains in the world.
The museum has an inventory of 2,998 specimens and is still growing. Rivas studies neurological diseases and psychiatric disorders but, unlike prestigious brain banks around the world, she also lets the public in to wander around.
It is not a tour for the queasy. On display in one room are several human fetuses with neurological disorders. Another exhibit shows a brain afflicted by the human form of mad cow disease which is known as variant Creutzfeldt-Jakob disease.
About 4,000 people, most of them children from local schools, paid the 30 cents entrance fee to trundle past Rivas’ brains last year.
Foreign doctors from Germany, Japan and France also visited the museum which sits at the end of a decrepit street where many taxis fear to go.
The museum started collecting samples of diseased brains in 1942. Rivas said she works with researchers on the effects of cysticercosis, an infection caused by pork tapeworm, on the human brain.
But her primary goal is to educate the public.
“The main purpose is for people to see what brain sicknesses look like, and realize that many of them can be healed or prevented,” she said showing how a healthy brain differs from one that has been damaged by drug abuse.
“It’s true. Alcohol and drugs kill brain cells.”
The museum is tucked in behind a 300-year-old building that is now the National Neurological Science Institute hospital. It operates on a shoe-string budget.
The world’s largest brain bank, the U.S.-based Harvard Brain Tissue Resource Center at McLean Hospital (http://www.brainbank.mclean.org) has nearly 7,000 specimens. Its brains are not open to public viewing but researchers can obtain tissue samples from the federally-funded institution as long as they are qualified scientists.
George Tejada, assistant director for the Harvard brain bank who was born in Peru, said he had heard of the Lima brain museum but had never visited it.
But he said it is not the same as brain banks which specialize in the study of specific neurological disorders and diseases.
“I’ll be sure to go and visit when I’m next in Lima,” said Tejada.
Perennial plants regrow every year, unlike annuals, which grow, bloom for a season, and then die. Perennials faithfully bring a bright spot to a garden each year on a regular schedule. Perennials require care to ensure they produce the best blooms. Matching the right fertiliser, correct soil pH balance, and the necessary amount of water and sun is important to having a colourful, show-off garden.
Perennials For Sunny Locations
Yarrow are well-liked garden flowers, with colours ranging from yellows, reds and whites. Columbines have large blooms and come with blue, white, red, pink or yellow blossoms. The Michaelmas daisy features violet, white and blue flowers and has the bonus of being dividable every other year, giving the gardener plants to locate elsewhere or to give away. Garden chrysanthemums bloom in the fall after many other flowers have died away, giving a garden yellow, bronze, lavender or white flowers.
Perennials For Shady Locations
It is more difficult for plants to thrive in the shade, but some have adapted to doing so. Almost all perennials will do fine in light shade, but dense shade requires careful planting. The fringed bleeding heart gives colour from spring to fall, and black snakeroot provides summer colour. Many shade-tolerant perennials also double as ground cover along with providing flowers that blossom for a short period. These include wild violets, lilies of the valley, goutweed and wild ginger.
Growing native plants is becoming popular with many gardeners. They are already well-adapted to the climate and soil and often require much less care and water than plants imported from other areas. Examples of flowering perennials native to North America are the deep blue flowers of the pickerelweed, the yellow flowers in May from the marigold, and the fluffy, bright-pink flowers of queen-of-the-prairie. The black-eyed Susan is well-known for its golden yellow blossom with a dark-black centre on a tall stem. The wild bergamot has lavender flowers that attract bees and butterflies, and the leaves can be used to make tea.
Some flowering perennials are hardier than others. A few that need extra care to survive a cold winter are sweet william, silver king artemisia, feverfew and larkspur. Early spring-blooming flowers include winter aconite, leopard's bane, columbine, pasque flower and basket of gold. Fall flowers include asters, summer lilacs, tussock bellflower and common rosemallow.
Did you know that antibiotics are overused and this increases the development of drug-resistant germs? Antibiotics are used to treat BACTERIAL infections, NOT VIRAL. Treating viruses with antibiotics does not work, and it increases the likelihood that you will become ill with an antibiotic-resistant bacterial infection in the future. Antibiotic resistance is not just a problem for the infected person, but some resistant bacteria have the potential to spread to others, promoting antibiotic resistant illnesses.
Make sure that if an antibiotic is recommended by a medical provider, it is taken exactly as prescribed, and complete the course until it’s gone, even if you start feeling better. You can help prevent the spread of infections by practicing good hand hygiene and getting recommended vaccines. Remember, if your provider does not think you need an antibiotic, do not ask for one. Antibiotics often have side effects, and they may do more harm than good if you don’t really need them. In children, reactions to antibiotics are the most common cause of emergency department visits for adverse drug events.
A vulnerability assessment is a necessary step that gives the customer or user a full picture of the situation. It lets you know how exposed your systems are to vulnerabilities. Several automated tools make this possible. These tools run deep checks on each system or application and identify vulnerabilities. Another important aspect to consider is speed: it makes it possible to scan a wide perimeter in a short time while still providing a good level of detail.
Vulnerability Assessment: what is it?
After this short introduction, we can give a brief definition of a VA. It is a security analysis whose goal is to identify all the potential vulnerabilities of systems and applications. How? By spotting and evaluating the potential damage an attacker could inflict on the production environment. Highly qualified personnel then integrate and verify the results through manual activities. These activities refine the research and highlight any errors made during the automated process. Promptly isolating real vulnerabilities is one of the key aspects of this kind of assessment. A good Vulnerability Assessment service allows the user to keep an updated overview of the security level of his assets and IT systems. Obviously, this is the starting point for optimizing all security management efforts.
Vulnerability Assessment: the must haves
In order to have a tool that can respond to a company needs in a complete way, there are a few must-haves:
- It must recognize and spot a large number of different vulnerabilities such as SQL injection, Cross-site Scripting, and much more…
- Compliance. This is a key factor ( GDPR Infographic ) in order to avoid penalties and loss of reputation.
- Great understandability. Results must be clear and easy to access. Put simply, even the deepest and most detailed results are almost useless if they are presented in a confusing way. Basically, you need a clear presentation combined with a good level of depth.
- Related to the previous point, two different reports are ideal. Data must be clear both for top management, so they can take the right decisions, and for IT technicians, who can focus their attention on the right subjects.
Vulnerability Assessment: detailed insights
Insights provided by the assessment should be – as stated before – the more detailed. There are some key points and key areas that need to be highlighted in order to have a full understanding of the matter. Moreover, these themes are extremely important and need a special analysis effort. We can summarize some of these categories:
- Division of vulnerabilities according to their risk rank (High, Medium, Low for example)
- Strategic areas hit by these vulnerabilities (Confidentiality, Access Control,…)
- Likelihood of Exploits. How high is the possibility that someone could take advantage of this vulnerability and damage my company? A priority list is something necessary for every company in order to better understand which aspect needs more attention.
- History of previous assessments in order to have a results continuity.
- Detailed information
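As a minimal sketch of the priority-list idea above (grouping findings by risk rank so the highest-risk items surface first), here is some illustrative Python. The finding fields ("title", "rank") and the rank labels are assumptions made for the example, not any particular scanner's output format.

```python
# Minimal sketch: turn a flat list of scan findings into a priority list,
# grouped by risk rank with the highest risk first. Field names and rank
# labels are illustrative assumptions.
from collections import defaultdict

RANK_ORDER = {"High": 0, "Medium": 1, "Low": 2}

def priority_list(findings):
    """Group finding titles by risk rank, ordered highest risk first."""
    groups = defaultdict(list)
    for finding in findings:
        groups[finding["rank"]].append(finding["title"])
    return {rank: groups[rank] for rank in sorted(groups, key=RANK_ORDER.get)}

findings = [
    {"title": "Missing security headers", "rank": "Low"},
    {"title": "SQL injection in login form", "rank": "High"},
    {"title": "Outdated TLS configuration", "rank": "Medium"},
]
ranked = priority_list(findings)
```

A real report would of course carry more per-finding detail (affected asset, likelihood of exploit, remediation advice), but the grouping step is the same.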
Vulnerability Assessment: how can I protect my business?
In order to give your business the best tool available, Swascan developed a special cybersecurity platform. It is completely Cloud-based, Pay-per-Use and SaaS. You can see for yourself in our brochure: Cybersecurity platform, and have an in-depth look at our services. Our four services cover all governance needs in terms of risk management and periodic assessment. Basically, if you need to understand the areas where your efforts should focus, GDPR Self-Assessment, Vulnerability Assessment, Network Scan and Code Review are the right tools for you. Last but not least, don't forget GDPR: our platform is 100% GDPR compliant.
An astronaut orbiting Earth in the International Space Station has remotely directed a NASA rover in California to unfurl an “antenna film” that CU-Boulder scientists are developing for use on the unexplored far side of the moon.
When astronaut Chris Cassidy used a Space Station computer to pilot the robot across a mock lunar surface at NASA’s Ames Research Center on June 17, he demonstrated for the first time that an astronaut in an orbiting spacecraft could successfully control a robot in real time on a planetary surface. The technique could have future applications for humans visiting Mars, an asteroid or the moon.
Jack Burns, director of CU-Boulder’s Lunar University Network for Astrophysics Research, or LUNAR, hopes it will not be long before the latter happens. Burns has long advocated for placing a radio telescope on the far side of the moon that would be able to pick up “faint whispers” from distant regions of space that would tell the tale of a time when the universe was quite young — 100 million years after the Big Bang — and the first stars and galaxies were being born.
“It would open up a time period in the universe that we are not able to explore with any other technique or technology,” said Burns, also a professor in CU-Boulder’s Department of Astrophysical and Planetary Sciences. “This would be an absolutely unique telescope that will allow us to address fundamental questions about the very early universe.”
Placing the radio telescope on the far side of the moon is critical because it would shield the receivers from the radio cacophony emanating from Earth and it would raise the telescope above Earth’s charged ionosphere, which can distort and refract incoming radio signals from space.
With the development of NASA’s Orion spacecraft, it will soon be feasible to send astronauts to a location 60,000 kilometers above the far side of the moon known as the L2 Earth-moon Lagrange point. At that spot, the combined gravity of the Earth and the moon would allow for the spacecraft to easily maintain a stationary orbit. From there, Burns and his colleagues believe a rover could be sent to the moon’s surface and manipulated to roll out a “Kapton film” that would contain the radio antennas.
To test their idea, the CU-Boulder researchers, including graduate student Laura Kruger, partnered with NASA’s Human Exploration Telerobotics project, which was already working on the technology that would allow robots on a planetary surface to be controlled from orbit. NASA agreed to use Burns’ vision as a test scenario. June’s successful trial, during which Cassidy piloted a K10 robot for three hours in an area the size of two football fields, is the first of three planned this summer.
The K10 robot is a four-wheel-drive rover that stands about 4-and-a-half-feet tall, weighs about 220 pounds and can travel about 3 feet per second — a little slower than the average person’s walking pace. For the Surface Telerobotics tests, K10 is equipped with multiple cameras and a 3D scanning laser system to perform survey work, as well as a mechanism to deploy the simulated radio antenna.
“During future missions beyond low-Earth orbit, some work will not be feasible for humans to do manually,” said Terry Fong, director of NASA’s Intelligent Robotics Group. “Robots will complement human explorers, allowing astronauts to perform work via remote control from a space station, spacecraft or other habitat.”
Burns hopes that the success of NASA’s Surface Telerobotics test will help bolster the far side of the moon telescope project and generate interest in exploring a mysterious region of our nearest celestial neighbor.
“The land area at the far side of the moon is twice as large as the United States; it’s a big piece of property,” Burns said. “We haven’t set foot there, either robotically or with humans, and it’s right in our backyard — just three days away.”
The photo editor Curve Tool's actions are similar to, but in some ways more versatile and more powerful than, those of the Levels Tool; but Levels is of course quite important too. The intent on this page is to explain the action of both tools (Levels and Curves, CTRL L and CTRL M in Adobe), and specifically also to show HOW the two tools do many of the same jobs. Understanding the concepts of how they are similar helps in understanding both.
Histograms: Our starting point is to assume you know that the histogram shows the distribution of the actual tonal values in the image. Specifically, it is a bar chart of the relative count of pixels with each tone, from black at 0 to white at 255. IMO, the most important thing about a histogram is how closely the data fills the entire width, and, of course, making sure there is no tall spike of clipping at either end. However, the histogram's maximum height is not important numerically, because it is always scaled so that the maximum does reach the top (which aids seeing the tiny shortest bars). Taller histogram spikes just mean relatively more pixels exist with that tone value. There is no "correct" value. Black scenes will have many dark pixels, and white scenes will have many bright pixels. The histogram is not a light meter; it only shows what did happen (it has no clue what the scene is, or how it ought to be). There is rarely any image detail we can identify in the histogram, but it does show the overall extent from dark to light tones (which depends on our exposure).
Both the Curve and Levels tools show the image histogram data. To be able to adjust the data, we have to see it to know what it needs. We do think in terms of adjusting exposure to align the data toward the right edge, to be "bright enough", but the main point of looking is specifically to ensure there is no clipping at 255 (a tall thin spike right at 255 indicates clipping), which would mean detail lost (and colors changed) due to overexposure. A second useful idea is that any empty areas at either end are "wasted" capability, and it is often better if the data is adjusted to span the full available range (meaning the whites will actually be white, and the blacks will actually be black, which is more contrast). But it depends: any far-right alignment necessarily assumes our scene actually does have bright data which ought to be far right (not always true of every image; probably true of a picture of a polar bear in the sun on the snow, but not true of a black cat in a coal mine). So the histogram shows what is (resulting from the exposure), but it cannot recognize the scene to know how it ought to be. However, the human photographer's brain should have a good idea how it ought to be (so do use your brain). We do like dark things dark, and bright things bright (contrast). There are two types of common histograms, very different and important; see this page. The histogram data numerical values are also gamma encoded; see this page.
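To make the histogram idea concrete, here is a minimal sketch in Python, assuming a single-channel image flattened to a list of 8-bit tone values (an assumption made for illustration; real editors build one histogram per RGB channel):

```python
def histogram(pixels):
    """Bar-chart data: count of pixels at each tone value 0..255."""
    counts = [0] * 256
    for tone in pixels:
        counts[tone] += 1
    return counts

def clipped_at_ends(counts):
    """True if any pixels sit exactly at 0 or 255 (a possible clipping spike)."""
    return counts[0] > 0 or counts[255] > 0

# A tiny "image": mostly midtones, with two pixels clipped at pure white.
pixels = [10, 64, 128, 128, 200, 255, 255]
counts = histogram(pixels)
```

Note that the counts are only relative: an editor scales the tallest bar to fit the box, which is why bar height has no absolute meaning.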
The Curve tool may actually seem simpler to understand than Levels, because it is graphic. It "maps" the data to a curve we draw on it. That curve is the same as any graph, associating points along the horizontal axis with points along the vertical axis. Basically, the curve tool is a graphic "response curve" of the response you want. It just maps tones: a conversion from "input" tone to "output" tone (a before-and-after concept of what this tool does). Input is the image data that you read in (before). Output is the image data that you will write out (after), with modified tonal values, which is the purpose of editing. The histogram is shown faintly in the background; it is the data that will be processed, and can serve as a guide. Shown next is a standard Curve Tool (from Photoshop):
Input is shown across the bottom, and Output is shown upward on the right. The gray scale gradients show the tone values of the scale, and they have numerical values too (you may or may not notice that these scales in a grayscale image's curve tool probably are reversed to have white at the origin, but everything is still all the same).
The key is that the default response curve is a straight line, from lower left to upper right, so that all tones map to their unmodified original values. On this default 45-degree line, input 255 is mapped to output 255, the midpoint is mapped to the midpoint, and input 0 is mapped to output 0: no change at all.
However, the shape of the curve is very fluid, and we can simply drag it around to change the response to any shape that we want. Above, in this case (a standard S-Curve for contrast), we did three things:

- Dragged a point in the upper (highlight) quarter of the curve upward, so bright tones output brighter.
- Dragged a point in the lower (shadow) quarter of the curve downward, so dark tones output darker.
- Left the two end points (0 and 255) anchored where they were, so nothing is clipped.
This new curve is the new response curve of the tool, mapping the tool's input to its output. The marked red arrows above (for example) show how we read the Curve: this new curve shape maps input 192 to output 222 (brighter, instead of the default unmodified 192 output). Said another way, tones at the 3/4 point are brightened to about 7/8 scale. If we imagined more of these red arrows in the bottom half of this Curve, those tones would be made darker (lower output, in this case). Then all the other tones follow the curve too, as shown. It is really this simple: just a response curve, showing what it does (whatever you want it to do). Often the exact numbers are not important to us, but we do work with the zones, like brightest brights, darkest darks, middle tones, etc. This example curve is called an S-Curve, which creates brighter brights and darker darks, giving greater image contrast (see more at Contrast at the bottom of the page here).
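The mapping just described can be sketched in a few lines of Python. The smoothstep formula below is only one convenient way to draw an S-curve (an assumption for illustration; the exact shape an editor produces when you drag points differs), but the principle, a 256-entry lookup table applied to every pixel, is the same:

```python
def s_curve(tone):
    """A smoothstep S-curve: output brighter above the midpoint, darker
    below it, with 0, the midpoint, and 255 left (nearly) unchanged."""
    t = tone / 255.0
    return round(255 * (3 * t * t - 2 * t * t * t))

# A curve is applied as a 256-entry lookup table: one output per input tone.
lut = [s_curve(v) for v in range(256)]

def apply_curve(pixels, lut):
    """Map every pixel through the lookup table."""
    return [lut[p] for p in pixels]
```

This is why curve adjustments are fast: once the table is built, each pixel is just one array lookup, no matter how elaborate the curve shape is.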
Both tools (Levels and Curves) provide the default RGB map, which affects all three RGB channels equally (changes tones). Or we can select and modify the three RGB channels individually (changes tone and colors).
The eyedropper tools are the same in Curves as in Levels: the black and white pointers can set the Black Point (a new point to become zero) or the White Point (a new point to become 255), and the middle pointer is about gamma, which is a great tool for the brightness of the image.
The two more common uses of the curve tool are shown near page bottom, but first, let's detail the way these two tools can do the same jobs.
The Original image. All images on this page are this one same exact image (repeated a few times below), then with different degrees of example processing. My settings are all overdone, without any image-related logic here, just to show a recognizable change by the tool. Obviously, we would normally use the tools to make a desirable change to the image. The histogram of this image shows a huge spike near the white end, which is the white background and the white cup. If we had used a black paper background, the histogram would look very different, with a spike near the black end.
The dramatics first, because it shows the idea really well about how the Curve Tool works. It simply maps input tone values against the Curve, to compute new output tone values.
For example, to invert the image (color or B&W), just drag the lower left end of curve up to the top, and drag the upper right end down to the bottom. Then the curve slope instead runs downward at 45 degrees to invert a negative image. Because then, input at zero (black) becomes output at 100% (white), and input at 100% (white) becomes output at zero (black). This is the standard inversion technique. The idea is that negatives become positive. Inverts colors to their complement too, orange becomes blue, red becomes green, etc (180 degrees across on the color wheel). The unmodified straight line 45 degree inverted curve maps 255 to 0, and 0 to 255, which numerically, all RGB components become (255-previous), or (255-255=0 and 255-0=255), which is inverted. As marked in red, a bright value at 3/4 scale outputs at 1/4 (darker), and at 1/4 outputs at 3/4 bright.
This inversion works fine for B&W negatives, and would work for color negatives, except they have the orange cast which is very significant, and quite difficult. The orange inverts to be a strong blue background mask. We could then drag this new slope to be other response shapes too (see this page for more of that, concerning color negatives).
The Levels tool cannot do inversion, but there should be a regular menu Image - Adjustments - Invert that does it.
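Numerically, inversion is as simple as the downward curve suggests; a sketch for single-channel 8-bit values (color images apply the same rule to each RGB channel):

```python
def invert(pixels):
    """The downward 45-degree curve: every tone v maps to 255 - v."""
    return [255 - v for v in pixels]
```

Inverting twice returns the original values, which matches the menu behavior of applying Invert a second time.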
Levels may not do inversion, or the special intricate curve shapes, but otherwise the images below are in pairs, showing the Levels and Curve tools doing the same thing, to demonstrate that in many cases they really can do the same job, just in slightly different ways. Realizing this similarity helps understanding. The Curve tool does anything Levels can do, and much more too.
I am not knocking Levels in any way; I use Levels far more than Curves, and it is very handy. That is because Levels is more special-purpose. Levels has only five adjustment sliders, and four of them mainly just specify end points. Organizing it here (relative to Curves), these four Levels sliders are the top-right corner of the Curve moved in or down, and the bottom-left corner of the Curve moved in or up. Moving either corner inward is Levels Input (the more common use), and moving either corner vertically is Levels Output. The fifth Levels slider can move one point to bow the curve up or down, but the Curve tool can do that too, or move and reshape the curve in about any other way. Otherwise, the four Levels controls simply limit the Curve in these four ways.
By definition, the White Point is the end point at 255 (right end of histogram). If you move it, the data is shifted so that this new point selected will become the new White Point at 255. Anything brighter will be clipped to stay at 255.
The Original image.
Levels, White Point 205 (205 becomes 255). Any value above 205 becomes 255 (clipping).
Whites got whiter, including the cup and the background (205 became 255, very white).
Curve, White Point 205 (205 becomes 255). In both tools, any value from 205 to 255 becomes 255 on output (clipping), but now the Curve explains graphically why; the reason seems clearer here. It is simply the shape of the response curve, which specifies how the tones are to be mapped. You can drag the Curve, or you can slide the White Point/Black Point markers on the bottom scale.
Some big things to know:
Both tools make the new marked White Point become 255, and any values above it are clipped (all those values above it become 255 too - a loss of detail). Changing the composite RGB slider (as shown) changes each of the three RGB channels in the same degree, which mostly increases overall brightness (towards white), instead of changing color balance. But changing the individual RGB channels differently skews and changes color, perhaps an intentional change or not. An original idea of White Balance was for the maximum extent of the data in the three RGB channels to be at the same White Point. Photoshop Auto Levels still has this action as default (Enhance per channel contrast) to align the individual end points of the three RGB channels while clipping very slightly (0.1% - see Options in Levels). Sometimes Auto Levels fixes terrible color (faded colors), sometimes it hurts good color. We can use Undo.
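Numerically, moving the input White Point to 205 stretches all tones by 255/205 and clips anything that lands above 255. A sketch for single-channel 8-bit values (the exact rounding details are an assumption; editors may differ slightly):

```python
def set_white_point(pixels, white=205):
    """Input White Point: tone `white` maps to 255; anything above clips."""
    return [min(255, round(v * 255 / white)) for v in pixels]

out = set_white_point([0, 102, 205, 230])
```

Applied per RGB channel with different White Points, the same operation shifts color balance rather than just brightness, which is what the text above describes.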
These White Point tools are the SAME action (same thing) as the Exposure slider in Adobe RAW - this is what it does. White Point leaves Black Point where it was, but stretches White to more "exposure". For "less" exposure, use the Output Levels below. The Black Point tools are the SAME action as the Blacks slider in Adobe RAW.
A BIG DEAL we should know: in Adobe, both the Levels and Curve tools, and also the Exposure and Blacks tools in Adobe RAW (ACR), have a default whereby if you hold down the ALT key (or the Mac Option key, I think) while dragging these sliders, the image preview goes blank (all white or all black) and then shows only the pixels that are clipped at the present setting (those pixels which now have value 255 or 0). The Curve tool also has a check box to show this, but it is not the default. This is quite powerful: it lets you see exactly what is being clipped, and thus losing detail. Sometimes the loss matters, sometimes it doesn't, but this is how you can know what is really happening. The same ALT key action works on the Black Point too.
The usual way this tool is used is to set the White Point to about the histogram point where the image data actually begins,
more as shown in the tiny picture here. This excludes the range of tones not present in the image, and maximizes the contrast of those present. The extent of the data actually present becomes White at 255. We might quibble where the data actually ends, but clipping away any trailing tail here is often a good thing to do (perhaps even more in grayscale images). Black point too, same thing. By eliminating the “blank” histogram ends, we expand the range of the existing data to fill the full 0..255 span.
The Yellow flower on the cup is the brightest thing in this image, but in images with a white background, that background is often the brightest thing, and makes the largest spike on the histogram. Sometimes placing the white point at that white spike is good, to make the background actually be white (something like that is done here, but this image is different: the yellow flower is the brightest thing, and holding ALT shows that instantly).
These tools (in Photoshop) have the three eyedropper icons in them, colored black, gray, and white.
Accurate white balance really makes a tremendous difference, and this is one tool we should learn. When your flash pictures seem OK, but they are just yukky anyway, it's probably white balance. We often intentionally include some neutral color object in the image, or in a test image, for this purpose - which could be a white balance card, or a gray card, etc. White paper is said to have brighteners which can skew its color, but most cheap copy paper or craft store thicker paper normally works well for me (vastly better than nothing). However I prefer to use this actual White Balance Card. This eyedropper tool makes White Balance life be fairly simple, especially with RAW images.
By definition, the Black Point is the end point at 0 (left end of histogram). If you move it, the data is shifted so that this new point selected will become the new Black Point at 0. Anything darker will be clipped to stay at 0.
The Original image.
Levels, Black Point 60 (60 becomes 0). Blacks became blacker.
Curve, Black Point 60 (60 becomes 0)
In both tools, any value 0 to 60 becomes 0 on output. This makes the new marked Black Point become 0, to now be as black as possible.
This is same as "Blacks" in Adobe RAW. Can hold ALT key to see clipped values in either one.
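The Black Point is the numerical mirror image of the White Point: tones are shifted down by the chosen point and stretched by 255/(255 - black), with anything below clipped to 0. A sketch under the same single-channel, illustrative-rounding assumptions:

```python
def set_black_point(pixels, black=60):
    """Input Black Point: tone `black` maps to 0; anything below clips."""
    return [max(0, round((v - black) * 255 / (255 - black))) for v in pixels]

out = set_black_point([30, 60, 157, 255])
```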
The Output White Point marks the tone that will become the brightest tone in the image. If you move it, any data at 255 becomes this lesser tone, and there will be nothing brighter.
The Original image.
Levels, Output 200 (maximum white level, 255 becomes 200, white becomes more gray)
Curve, Output 200 (maximum white level, 255 becomes 200, white becomes more gray)
This might be used to prevent a screened image to retain slight evidence of the screen dots, instead of printing nothing on blank white paper.
Both tools make the original value 255 move down to become the new marked White Output point, as bright as it gets, a maximum in the image, nothing brighter than this value. The curve shows there is no possibility of anything brighter than this point.
The Output Black Point marks the tone that will become the darkest tone in the image. If you move it, any data at 0 becomes this greater tone, towards gray from black, and there will be nothing darker. This is typically used in prepress printing, to prevent excessive build up of black ink in dark areas.
The Original image.
Levels, Output 60 (minimum black level, 0 becomes 60, black becomes more gray)
Curve, Output 60 (minimum black level, 0 becomes 60, black becomes more gray)
This is sometimes done in printing processes (like newspapers), raising black to be a few percent lighter, to prevent the ink from being a dense blob buildup problem. In more extreme degree, it could also be used to create faint "ghost" images.
Both tools make original value 0 move up lighter to become the new marked Black Output point, as dark as it gets, a minimum in the image, nothing darker than this value is possible.
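Both Output sliders together simply compress the full 0..255 range into the band between the two Output values; neither can cause clipping. A sketch under the same single-channel assumption, using the 60 and 200 values from the examples above:

```python
def output_levels(pixels, out_black=60, out_white=200):
    """Output sliders: compress 0..255 into [out_black, out_white]."""
    scale = (out_white - out_black) / 255
    return [round(out_black + v * scale) for v in pixels]

out = output_levels([0, 128, 255])
```

This is why the prepress use case works: no pixel can come out darker than out_black or brighter than out_white.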
The editor tools actually named Brightness and Contrast are definitely NOT the best tools in the bag. Levels and Curve do it better, are much more versatile, and offer more control.
The standard editor tools named Brightness and Contrast are about the oldest tools, dating back a few decades (well before “digital”). They are relatively poor tools today, but some editors retain them instead of adding Levels and Curve.
The separate tool named Brightness simply adds a constant to all pixel RGB values, to shift the data graph to the right, which is detrimental to contrast (whites are clipped, and blacks become gray). Those old tools do not watch and show the histogram (blind work in the dark, except by final appearance). That is really too dumb to consider today, but we see these versions. These better Curve and Levels tools below do not affect the end points (unless we so choose using additional operations).
The Level and Curve tools do something similar but better, to make the image brighter (or, they can lower it to darken the image too).
The Original image.
Levels, Brightness - the center slider raises the middle tone values (brighter). The end points and dynamic range are unchanged and unclipped. The center slider of Levels is called Gamma, said to be 1.0x relative to the image gamma value (2.2),. The center slider multiplied the value of gamma, boosting the center and dark values too. If you set the slider value to 1.15, then gamma 2.2 is considered to be changed to 2.2 x 1.15 = 2.53. This is a 1.15 multiplier, and any gamma works. Gamma is not necessarily 2.2, but the standard sRGB is 2.2. Watch the image preview brightness to judge the change. You likely don’t care about the numerical value, you do this to make the image brighter or darker, as desired. Moving the White Point in mostly affects the brightest tones, and this midpoint control mostly affects the middle and darker half. But changing the middle slider does not shift the end points, and cannot cause clipping.
Curve, Brightness - Raising the curve increases middle tone values (brighter). The end points and dynamic range are unchanged and unclipped. Just pull the curve up a bit. Watch the image preview to judge it.
These Levels and Curve tools are far better ways (better than the usual old tools named Brightness or Contrast) to make images be brighter (or darker), because they do not shift the end points, and do not cause clipping and do not change dynamic range or image contrast.
The other older separate tool named Contrast simply moves both ends inward (white point and black point), like Levels can do, but inward equally in that case, intentionally clipping both ends, but with less control and no visibility of image data (without seeing the histogram data as Levels and Curve shows). It is not as smart as being able to see the data graph, and to have individual control of the two end points. Ordinarily, in general, we normally position those points at about where the data actually starts (unlikely often at equal positions). But everything has exceptions, and sometimes we may choose to clip them a bit, usually unequally.
The Level and Curve tool action is different regarding not clipping the end points, but they do a somewhat similar thing to increase contrast.
The Original image.
Levels, Greater Contrast is Brighter whites and Blacker blacks, done by clipping both end points slightly and individually, by manual choice after seeing the histogram data. This is a very standard procedure for contrast. Watch the image preview to judge it, but again, realize you can hold the ALT key to see what detail you are clipping. Some things don't matter if clipped a bit, some things do matter. A little clipping can often be a great thing for grayscale contrast, if it does not cost important detail. A little clipping with White Point can affect colors in color photos, but grayscale thrives on contrast, and a little clipping is often done for that purpose. Just use the Preview to watch what you are clipping. To know what you are clipping, see the Big Deal above again that explains it.
Color can be the contrast for color work, but greater actual contrast is especially important for grayscale images... The best one trick for grayscale images is adequate contrast, by specifically being sure there is something that is really black, and something that is really white (which is a tip from Ansel Adams). He was speaking of contrast for Black & White work, and he was NOT speaking of portraits. A little clipping does this, but it should not be enough to lose any of the important detail in highlights or shadows. (See more). IMO,a little clipping really can help B&W photos, but too much contrast can make color photos look glary. And clipping can shift the colors a bit in color photos.
Curve, Greater Contrast - We could have moved the end points in like the contrast examples above (#2 and #3), to do the about same thing with Levels, but both Levels and Curve would have clipped the ends then.
But instead, this S-Curve shown here is specifically one of the great features of the Curve Tool. This S-Curve is brighter whites and darker blacks too, but the end points are unchanged and are not clipped, more suitable for color images. The end areas are not clipped, but still offer subtle graduation of tone. The central linear portion still has a steep slope - more values are either darker or lighter, with less middle range.
The center is clicked first to pin it, so it won't move with the curve. S-curve is a standard contrast procedure, maybe the best contrast method, especially for color work. Watch the image preview to judge it.
These two tools are the best way to address contrast, because of the individual control they offer (of the two ends of data, with respect to the data).
Here is another older page about the same Brightness and Contrast tools mentioned here. | <urn:uuid:a206b942-9911-43d4-a9ad-919963a5f05e> | CC-MAIN-2023-50 | https://scantips.com/lights/curve&levels.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679103810.88/warc/CC-MAIN-20231211080606-20231211110606-00372.warc.gz | en | 0.931486 | 4,930 | 2.53125 | 3 |
Noah - weekly Torah portion
"Noah walked with God”, we are told, and “Thus Noah did; according to all that God had commanded him, so he did.” Overall he was a good man, he did what he was told to do. He does not ask questions, he does not annoy. “God looked on the earth, and behold, it was corrupt”, the people are evil, and God decided, “The end of all flesh has come before Me; for the earth is filled with violence because of them; and behold, I am about to destroy them with the earth.” All but very few are going to perish and Noah, Noah he works on the ark. He does not ask questions, does not wonder the issue, and he certainly does not fight the decision to get rid of so many people. As we said, overall he was a good man, he did what he was told to do.
Abraham, the father of the nation, when he hears of a much small scale of disaster, two cities, Sodom and Gomora, and he speaks up, he is unwilling to accept the Godly verdict. The two cases, that of Noah and that of Abraham are similar, just that in Abraham’s case it is not practically all of humanity that is eradicated, only two cities, cities of corruption and evilness. But Abraham? Abraham argues, “Will You indeed sweep away the righteous with the wicked? Suppose there are fifty righteous within the city; will You indeed sweep it away and not spare the place for the sake of the fifty righteous who are in it? Far be it from You to do such a thing, to slay the righteous with the wicked, so that the righteous and the wicked are treated alike. Far be it from You! Shall not the Judge of all the earth deal justly?” Abraham really knows that there is no way that there are fifty righteous people there so when he gets some he continues, “Suppose the fifty righteous are lacking five, will You destroy the whole city because of five?” And He said, “I will not destroy it if I find forty-five there.” Can we find this kind of willingness to confront God? He just goes out and builds the ark, he takes care of those who are close and dear to him but fails to see the entire picture.
Moses, the greatest prophet of our nation, finds himself time and time again in a similar situation that Noah did many generations before, on each one of the many occasions when the Israelites come to him full of complaints. When God suggests to get rid of the People and leave only Moses and his family, Moses is there to defend the People. Not only his immediate family, he does not build and Ark for them or himself, one which will sail over troubled waters. Rather he confronts the issue and changes the verdict. While Abraham went out of his way to defend complete strangers, Moses defended his own People, but both of them, Moses and Abraham, looked at the broad picture that went way beyond the selfish view of their immediate relatives. Even though they know that they may be harmed by those that they save today, they still find ways to defend them.
It is told that when rabbi Levy Yitzchak of Berditchev was appointed the chief justice of the town the leader of the community resolved that they will not have him come to confirm laws and rules which are old and known. There is no need to have this important man deal with those, it was argued. One day they were busy putting in place a new rule that had to do with beggars. Instead of having them roam the streets and knocking the doors of houses around the town they would instead come once a month to receive a portion of Tsdaka from the synagogue coffers for Tsdaka. They of course asked the rabbi to join and he said that there was no need to do so as this was an well-known rule and that there was no need for him to talk about it. The leaders of the community, surprised, asked him to explain. He responded that this was already known from the times of Sodom and Gomorra, who’s intolerance towards Tsadaka was discussed in the Bavli. In this way the rabbi defended those weak people of society who do not have someone else to speak for them, and of course none of the leaders wanted anything to do in connection of the evil people of Sodom and Gomorra.
Noah was a good man, and he walked with God, but may be forget to spend some time with his human peers. Maybe he was the most worthy person to be saved in his generation but not necessarily the best to learn from when dealing with the relationships between humans. When trouble times hit us we should not learn from Noah but rather from Abraham and Moses. Take a stand that can save not only us but also others who may be otherwise doomed. It is not always possible to save all, but may be if we try we can save some. In some senses the way to save begins with our very own behavior, the things that we demand from ourselves and can provide as an example. A behavior that allows us to lead through paths that may be tough and that requires dealing with enormous forces. Sometimes they are there simply to challenge us and when we are willing to be challenged we also find a way around that results in some of the most magnificent solutions.
Reuven Marko, 12 October 2018, 4 MarChesvan, 5779 | <urn:uuid:8aa1cd2f-f90f-40fe-9ecf-9ecb098bff00> | CC-MAIN-2020-45 | https://www.domim-reform.org.il/single-post/2018/10/12/noah-weekly-torah-portion-1 | s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107910204.90/warc/CC-MAIN-20201030093118-20201030123118-00688.warc.gz | en | 0.985829 | 1,146 | 2.609375 | 3 |
- Open Access
Rapid changes in tree composition and biodiversity: consequences of dams on dry seasonal forests
Revista Chilena de Historia Natural volume 88, Article number: 13 (2015)
Plants in a seasonal environment that end up close to the artificial lake formed after dam construction may show enhanced growth or may die under the new conditions. Changes in mortality or growth rates lead to changes in community diversity, and it is not known whether community functions will change; our main hypothesis was that a few years after impoundment, species richness and diversity would increase because the increased supply of water would favor the establishment of water-associated species. Therefore, we evaluated the consequences of the proximity of three dry seasonal forests to the water table after damming, with a dynamic evaluation of the species studied, to understand changes in diversity in these areas. We sampled 60 plots of 20×10 m in each forest and measured all trees with a diameter equal to or greater than 4.77 cm before damming and again 2 and 4 years after damming. We calculated dynamic rates and compared species changes between these periods. We also compared diversity and richness using the Shannon index and rarefaction curves.
Many species had high dynamic rates: many trees of dry-forest specialists died, while others showed high growth rates. Some species typical of riparian forests were found only after damming, enhancing richness in the deciduous forests. In general, the deciduous forest communities appeared to be shifting toward a typical riparian forest, but many seasonal specialist species still had high recruitment and growth rates, maintaining seasonal traits such as wind dispersal and deciduousness where a complete transformation did not occur.
We conclude that even with the increment in basal area and the recruitment of many new species, the impacts of damming and the consequent changes will never lead to the same functions as those of a riparian forest.
Dams have historically facilitated human life, initially in farming, transport, and domestic services, and are currently mainly built for energy generation (Baxter 1977). At least 45,000 dams over 15 m high obstruct 60% of fresh water that flows to the oceans (Nilsson et al. 2005). Dam construction increased because hydroelectric power was considered a clean and inexpensive alternative for energy production (Kaygusuz 2004), responsible for 16% of worldwide electricity generation in 2005 (Evans et al. 2009). Despite the spread of this “clean and inexpensive” idea, several problems are actually known, such as entire watershed modification (Nilsson and Berggren 2000), sediment retention (Manyari and Carvalho 2007; Vorosmarty et al. 2003), biochemical alterations (Humborg et al. 1997), and emission of greenhouse gases (Fearnside 2002; St Louis et al. 2000).
Water-dwelling organisms (fish, amphibians, plankton, benthos, and macrophytes) are directly affected, causing drastic changes in food webs (Brandao and Araujo 2008; Moura Júnior et al. 2011). The artificial lake created also interferes with terrestrial organisms. Wildlife can move to habitats outside the flooded area, but sessile organisms such as plants are drowned (Fearnside 2002; White 2007). Plant decomposition releases organic matter and depletes oxygen in the water (Barth et al. 2003; St Louis et al. 2000), also releasing carbon dioxide (CO2) and methane (CH4) to the atmosphere (St Louis et al. 2000). However, organisms living in the directly flooded area are not the only ones affected. Vegetation that was distant from any water source before damming afterwards lies near the margin of the lake created by the dam, and the long-term consequences are difficult to predict because this new "riparian vegetation" is completely different in species and characteristics from an original riparian environment.
Riparian vegetation includes species adapted to water saturation and species adapted to low water patches, and thus commonly shows high diversity (Naiman and Decamps 1997). These environments are associated with many ecological services, such as connecting aquatic and terrestrial habitats (Dynesius and Nilsson 1994), providing resources for fish (Jansson et al. 2005) and other dispersers such as birds and mammals (Gundersen et al. 2010; Naiman and Decamps 1997), and promoting refuge for these animals (Palmer and Bennett 2006), thus playing a key role in diversity maintenance.
However, the vegetation that comes to lie near the new margins created by dams is located on hillsides (Truffer et al. 2003; Vale et al. 2013), without species associated with high water saturation; in other words, it has a different species composition (Acker et al. 2003) and different traits compared to typical riparian vegetation. Terrain with steep slopes facilitates water runoff and reduces water infiltration into the soil (Sidle et al. 2006). Moreover, hills have rocky soils that make water retention even more difficult. Because of these conditions, species of these environments show adaptations that reduce water loss under water-stressed conditions, such as leaf loss during the dry season and fruits and seeds with low water content (Murphy and Lugo 1986), and they tend to have higher wood density to prevent drought-induced embolism (Choat et al. 2003). Thus, it is not only difficult to predict the consequences of proximity to the water line for these drought-adapted species, but it is also uncertain whether the "new riparian vegetation" will provide the ecological functions of typical riparian vegetation.
Many dams have been built and more will continue to be built; therefore, understanding vegetation changes after damming is crucial for better conservation and future management actions. Thus, we monitored three seasonal forests that were subjected to the impact of a hydroelectric dam to answer the following questions: Which were the species best adapted to the new conditions imposed by the dam? Which were negatively affected? Which new species became established? Were there any local extinctions? Finally, would the "new riparian vegetation" maintain the ecological roles performed by typical riparian vegetation? Our hypothesis was that a few years after impoundment, species richness and diversity would increase because the increased supply of water would favor the establishment of water-associated species. On the other hand, damming would cause the mortality of many tree species commonly found in forests with a well-established dry season.
This study was conducted in three dry forests (18°47′40″ S, 48°08′57″ W; 18°40′31″ S, 42°24′30″ W; and 18°39′13″ S, 48°25′04″ W; Fig. 1) located in the area of the Amador Aguiar Complex Dam (two dams on the Araguari River, with reservoir depths of 52 and 55 m). All areas had sloped terrain, but the inclinations of the deciduous forests were much more pronounced than in the semideciduous forest (over 30° in some plots). The first dam (Amador Aguiar Dam I, henceforth AD1) finished flooding in 2005 and has an elevation of 624 m above sea level, and the second (Amador Aguiar Dam II, henceforth AD2) finished flooding in 2006 and has an elevation of 565 m above sea level (more information in Vale et al. 2013).
The three dry seasonal forests (two deciduous and one semideciduous), which before damming were at least 200 m from any water source, have had the riverbank at their edge since damming in 2005 (AD1) and 2006 (AD2). The dam water flow was constant and thus did not vary across seasons or years. Earlier analyses in the three areas confirmed that damming increased soil moisture up to at least 15 m from the margin of the artificial lake (Vale et al. 2013). This impact clearly affects the entire community (Vale et al. 2013), and the responses of the tree species to river damming are analyzed here. The climate of the study area is Aw according to the Köppen-Geiger classification (Kottek et al. 2006), with a dry winter (April to September) and a rainy summer (October to March), an average annual temperature of 22 °C, and an average rainfall of around 1595 mm (Santos and Assunção 2006).
The first inventory (T0) was carried out in 2005 (AD1) and 2006 (AD2). In each forest, 60 permanent plots of 20 × 10 m were marked, totaling 1.2 ha per area (3.6 ha sampled in total). Ten plots (200 m wide) were established where the river reached the maximum flood level after damming, and the remaining plots were established perpendicular to the river margin. Thus, the plots were distributed every 10 m of distance perpendicular to the river (0–10 m, 10–20 m, 20–30 m, 30–40 m, 40–50 m, and 50–60 m). All trees with a diameter at breast height (DBH) of at least 4.77 cm were tagged with aluminum labels. Stem diameter was measured at 1.30 m from the ground; in the case of multiple stems, all live tillers were measured at 1.30 m.
The first inventory was conducted at T0, that is, before damming. The second inventory was made 2 years (T2) and the third 4 years (T4) after damming. All inventories were carried out at the end of the rainy season (March–April) to standardize the sampling and to avoid the influence of the dry season on stem diameter due to dehydration. All samplings followed the same procedure as the first inventory (more information in Vale et al. 2013). New individuals that met the inclusion criterion (recruits) were measured and identified. Mortality referred to standing dead trees or fallen trees.
We calculated the Shannon-Weaver diversity index (Shannon 1948) to measure changes in diversity over the three measurement periods (T0, T2, T4). We applied the Hutcheson t test (Hutcheson 1970) to compare diversity between the T0–T2, T2–T4, and T0–T4 periods in all forests. Moreover, we estimated richness with the second-order Jackknife estimator (Colwell 2005), considered by Colwell and Coddington (1994) to be one of the best predictors of richness.
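These diversity and richness measures rest on standard formulas; a minimal sketch of how they might be computed is shown below. The abundance vectors are hypothetical, and the variance and degrees-of-freedom expressions follow the usual Hutcheson (1970) formulation rather than any code used by the authors.

```python
import math

def shannon_index(counts):
    """Shannon-Weaver index H' = -sum(p_i * ln p_i) over species abundances."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def shannon_variance(counts):
    """Approximate variance of H' used in the Hutcheson t test."""
    n = sum(counts)
    p = [c / n for c in counts if c > 0]
    s1 = sum(pi * math.log(pi) ** 2 for pi in p)
    s2 = sum(pi * math.log(pi) for pi in p) ** 2
    return (s1 - s2) / n + (len(p) - 1) / (2 * n * n)

def hutcheson_t(counts_a, counts_b):
    """t statistic and degrees of freedom for comparing two Shannon indices."""
    va, vb = shannon_variance(counts_a), shannon_variance(counts_b)
    t = (shannon_index(counts_a) - shannon_index(counts_b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / sum(counts_a) + vb ** 2 / sum(counts_b))
    return t, df

def jackknife2(s_obs, q1, q2, m):
    """Second-order jackknife richness estimate from m samples (plots):
    q1 = species occurring in exactly one plot, q2 = in exactly two."""
    return s_obs + q1 * (2 * m - 3) / m - q2 * (m - 2) ** 2 / (m * (m - 1))
```

A p-value for the Hutcheson t statistic would then be read from a t distribution with the computed degrees of freedom; the jackknife estimate exceeds the observed richness whenever rare (one- or two-plot) species are present, which is what drives the estimator upward as new species appear near the shore.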
Dynamic rates analysis
Each species was evaluated for its dynamic rates in the T0–T2, T2–T4, and T0–T4 periods: mortality, recruitment, outgrowth, and ingrowth (we focused on species with at least 20 individuals, but all results are given in Additional file 1). Mortality (M) and recruitment (R) were calculated as annual exponential rates (formulas in Sheil et al. 1995; Sheil et al. 2000). The annual outgrowth rate (O) refers to the basal area of dead trees plus dead branches and basal-area losses of living trees (decrement), and the annual ingrowth rate (I) refers to the basal area of recruits plus the growth in basal area of surviving trees (increment). To evaluate changes in the forest, we determined turnover rates for individuals and basal area from the mortality-recruitment and outgrowth-ingrowth rates (Oliveira-Filho et al. 2007).
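The annual exponential rates can be sketched as follows. This is a minimal illustration of the exponential-rate form cited from Sheil et al.; the tree counts are hypothetical, not data from the study.

```python
def annual_mortality(n0, n_dead, years):
    """Annual exponential mortality rate (% per year):
    the fraction of the initial population lost per year,
    from survivors = n0 - n_dead over the census interval."""
    survivors = n0 - n_dead
    return (1 - (survivors / n0) ** (1 / years)) * 100

def annual_recruitment(nt, n_recruits, years):
    """Annual exponential recruitment rate (% per year):
    recruits relative to the final population count nt."""
    return (1 - (1 - n_recruits / nt) ** (1 / years)) * 100

# Hypothetical example: of 100 trees at T0, 19 die over a 2-year interval,
# and 19 recruits are present among the 100 trees counted at T2.
m = annual_mortality(100, 19, 2)    # about 10% per year
r = annual_recruitment(100, 19, 2)  # about 10% per year
```

Rates near or above 10% per year, as reported for many species in the first 2 years after damming, are therefore equivalent to losing (or gaining) roughly a tenth of a species' population annually.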
Floristic and dynamic—general changes in species
In all three dry forests, the diversity index increased both in the first 2 years and from the second to the fourth year of measurement (Table 1). In DF1, the T2–T4 period showed greater diversity changes than T0–T2 (significant; see Table 1). In DF2, by contrast, the greatest diversity increase occurred in the T0–T2 period (significant; see Table 1). In SF, the same difference between T0–T2 and T2–T4 was noted, but it was not significant in either period. The 4-year effect of damming on the forests was the most notable: comparing T0 with T4, the Shannon diversity index increased significantly in all the dry forests investigated (Table 1), confirming the positive influence of soil moisture on richness and diversity.
The second-order Jackknife richness estimator predicted 62 species per hectare in deciduous forest 1, 81 in deciduous forest 2, and 131 in the semideciduous forest at T0 (Fig. 2a-c). However, the estimated richness increased in both deciduous forests after only 2 years of impact (from 62 to 74 in deciduous 1 and from 81 to 91 in deciduous 2). In deciduous 1, richness continued to increase and reached 85 species per hectare (Fig. 2a), whereas in deciduous 2 it stabilized at 91 (Fig. 2b). In contrast, there were no strong variations in the semideciduous forest after dam construction, and the richness values for T2 and T4 were similar (181 and 182, respectively; Fig. 2c).
In deciduous forest 1 (DF1), the new species found in T2 were Aspidosperma subincanum Mart. ex A. DC., Guapira areolata (Heimerl) Lundell, Guarea guidonia (L.) Sleumer, Luehea grandiflora Mart., Siparuna guianensis Aubl., Trema micrantha (L.) Blume, and Xylopia aromatica (Lam.) Mart., and in the T4, they were Inga vera Kunth, Jacaranda caroba, Margaritaria nobilis (Vell.) A. DC., Myrsine umbellata Mart., Trichilia elegans A. Juss., Xylopia brasiliensis Spreng., and Tocoyena formosa (Cham. & Schltdl.) K. Schum. This last species was found in T0, but only as one tree, which died. However, two recruits were sampled in T4. Another species, Sterculia striata, was not found in T2 and T4.
In deciduous forest 2 (DF2), the new species found in T2 were Cedrela fissilis Vell., Eugenia florida DC., Genipa americana L., G. guidonia, L. grandiflora, Nectandra cissiflora Nees, Terminalia glabrescens Mart., Trichilia catigua A. Juss., T. elegans, Trichilia pallida Sw., and Zanthoxylum rhoifolium Lam. In T4, only two new species were found, i.e., Ceiba speciosa (A. St.-Hil.) Ravenna and Matayba guianensis Aubl. Otherwise, two species were not found, one in T2 (Aegiphila sellowiana Cham.) and another in the T4 period (Hymenaea courbaril L.).
The semideciduous forest (SF), however, changed little in richness. The new species sampled in T2 were Albizia niopoides (Spruce ex Benth.) Burkart, Heteropterys byrsonimifolia A. Juss., Machaerium hirtum (Vell.) Stellfeld, Psidium rufum DC., and Terminalia phaeocarpa Eichler, and in T4, they were Hirtella gracilipes (Hook. f.) Prance and Cecropia pachystachya Trécul. Otherwise, three species were not found in the T2 period, namely Dilodendron bipinnatum Radlk., Bauhinia rufa (Bong.) Steud., and Byrsonima laxiflora Griseb.
When we considered the occurrence of new species according to distance from the reservoir, it was notable that damming caused a rise in richness. Of the 34 new species (across all forests), 28 were collected near the shore (0–30 m) and only 6 were not found in these patches, indicating an influence of the dam on the establishment of new species. Furthermore, these 6 species were located only far from the shore (30–60 m).
The dynamic rates confirmed the effects of damming on the tree community, especially in the first 2 years, as exemplified by the species with 20 or more individuals. In this period in DF1, 7 of 10 such species showed dynamic rates above 10% per year (Table 2), values considered extremely high. In DF2, the same was observed for 17 species with 20 or more individuals, of which 15 had dynamic rates above 10% per year (Table 3). The semideciduous species, however, were more stable in the T0–T2 period: only 5 of 20 species with more than 20 trees had dynamic rates greater than 10% per year (Table 4).
These high dynamic rates in the first 2 years were not sustained in subsequent years. Of the same species analyzed for the T0–T2 period, only one in DF1, four in DF2, and three in SF had dynamic rates over 10% per year in T2–T4. This contrast between the T0–T2 and T2–T4 rates illustrates the effects of damming on the entire community. Many species showed rates of mortality, recruitment, outgrowth, and ingrowth greater than those of their communities (Fig. 3), and thus the impact of dam construction was substantially more intense in the first 2 years. These effects were more severe in the two deciduous forests (Fig. 3), where more species displayed rates higher than the community's (and the community rates were themselves very high; Tables 2, 3, and 4). If we analyzed only the entire period (T0–T4), the results would not seem as striking: only five species in DF1, nine in DF2, and four in SF had dynamic rates greater than 10% per year. This masks the true, marked changes that occurred in all forests, especially the two deciduous forests; therefore, monitoring every 2 years was essential for understanding the effects of damming (and the consequent increase in soil moisture) on dry seasonal forests.
Richness and diversity increase
A surprising finding of this study was the rapid change in richness and diversity in the three dry seasonal forest communities, mainly in the deciduous forests. According to the richness estimator, forest richness increased by 10 species per hectare after only 2 years of damming, a large increase considering that we only included trees of at least 5 cm in diameter. Many studies on impacted forests have demonstrated structural changes a few years after great disturbances such as storms (Laurance et al. 2006; Pascarella et al. 2004), edge effects from fragmentation (Laurance et al. 2006), logging (Guariguata et al. 2008), and severe dry periods (Chazdon et al. 2005), but with recovery of forest structure and composition over the years (Chazdon et al. 2007). In general, only long-term studies have shown changes in tree species and their probable consequences for the community (Laurance et al. 2006), because trees can be long-lived and changes resulting from disturbances tend to be gradual. The rapid increase in richness and diversity found in all the dry forests analyzed supports the hypothesis of great changes caused by dam construction, even in the tree community. The main factor was the increase in the amount of water available; before damming, water scarcity in the dry season was a barrier to growth for many species (Vale et al. 2013). With water available in dry periods after impoundment (Vale et al. 2013), there was no water restriction and more plant species could grow enough to meet the inclusion criteria.
Most of the newly recruited species were probably already present in the community as small individuals or saplings whose growth was limited by water stress. Of these new species, at least 20 are water-associated, being found in non-Amazonian riparian forests (Rodrigues and Nave 2000), in the humid Atlantic Forest (Oliveira and Fontes 2000), or in wet environments of riparian (gallery) forests (Oliveira-Filho and Ratter 2002) or flooded forests (Silva et al. 2007). Hence, prolonged dry periods could have acted as a negative filter for these species under the original dry conditions, killing them or at least hindering their establishment. The rise in soil moisture after dam construction (Gusson et al. 2011) breaks the marked seasonality of soil moisture in these forests, favoring the establishment of water-associated species.
It is important to note that the new conditions created by damming are not transitory. Thus, other tree species can become established in this community over the years, and the community will never return to its original state. Germination is influenced by water (Breshears et al. 1998), and some species would have better conditions in which to establish. Fruits and seeds dispersed from other areas should also increase species richness. The short monitoring period and the inclusion criteria (only trees of five or more centimeters in diameter were sampled) make it difficult to draw firm conclusions about the influence of germination and dispersal on richness. However, a regeneration study in these areas showed distinct seedling and sapling responses in the two most important species in these forests (Anadenanthera colubrina and Myracrodruon urundeuva): M. urundeuva responded more negatively than A. colubrina to increased soil water (Gusson et al. 2011), confirming effects on germination. Moreover, other dam studies comparing free-flowing with regulated rivers have shown some positive effects of damming on plant richness through dispersal (Jansson et al. 2000) and germination (Andersson et al. 2000). The rise in richness and diversity should nevertheless be treated with caution: it will never compensate for the loss of the species drowned by the damming, and it may itself be regarded as one more impact of dams on the flora.
Studies in temperate environments affected by dams have found species changes (Jansson et al. 2000; Nilsson et al. 2002) but have concluded that richness and diversity are not the most sensitive indicators of the effects of flow regulation (Dynesius et al. 2004). Our results, however, suggest strong modification of both richness and diversity after only 4 years of impact. The impacts on the species pool were probably high because of the high biodiversity of tropical environments, and several shifts in species composition should be expected in any impoundment in the tropics. This is a key problem because the most diverse tropical systems are affected by dams (Nilsson et al. 2005), representing a high risk to biodiversity: all tropical forests subjected to similar flooding after damming tend to show strong species changes.
It is difficult to imagine how damming affects forest communities all over the world, but the changes shown here point to a dramatic scenario with huge modifications. Moreover, the damming influence on recruitment of water-associated species was strongest in patches near the river (0–30 m from the shore), where it was twice that of patches sampled farther from the shore (30–60 m). Thus, damming effects on the community, and on some species especially, have been concentrated near the reservoir (Vale et al. 2013), precisely the main area for conservation efforts through ecological services such as soil protection against erosion and siltation (Guo et al. 2007; Hubble et al. 2010), habitat for aquatic fauna and corridors for fauna movement (Gundersen et al. 2010), and pathways for plant dispersion (Naiman and Decamps 1997; Nilsson and Berggren 2000). The areas situated close to the artificial lakeshore showed high impact and should be monitored for several years, so that we learn more about the many implications for these ecosystems.
Water restriction is a common event in seasonal environments, but it is harsher in deciduous than in semideciduous forests. The mountainous terrain with high slopes and rocky soils in deciduous forests (Oliveira-Filho and Ratter 2002) facilitates water flow in rainy periods and hinders water infiltration (Baker et al. 2002). In the semideciduous forest, water stress is less intense due to more clayey soils and less sloping terrain, and hence fewer new species were found.
Because they occupy a more water-stressed environment, deciduous forests show stronger deciduousness during the dry season than semideciduous forests. With dam construction, the proximity of the forest to the water table increases subsoil water reserves, which is the ecophysiological basis for evergreen maintenance (Borchert 1998; Nepstad et al. 1994). Therefore, in the deciduous forest the environment during the dry season becomes milder, facilitating the growth of evergreen species (most new species collected were evergreen). Evergreen species have an advantage when the environment is not water deficient. Deciduous species have higher photosynthetic capacity (Reich et al. 2003) but lose part of the carbon acquired through leaf fall. Evergreen species, on the other hand, do not lose much carbon during the dry season and remain photosynthetically active throughout it (Chabot and Hicks 1982). In general, evergreen species have deep roots with more secondary and lateral roots (Markesteijn et al. 2010), and thus it would be difficult to maintain this root biomass with less carbon gain during dry seasons (Wright and Vanschaik 1994).
However, with a water supply throughout the year, photosynthesis is no longer limited and evergreen plants can attain high growth rates. Thus, in the long term we expect a conversion of physiognomies near the riverbed, from the original deciduous forest to a more evergreen environment (a semideciduous forest, though still with marked deciduousness due to long-lived deciduous trees). What about the new deciduous species found? Of all these "new" deciduous species, only three showed intermediate- to high-density wood (greater than 0.65 g cm−3). Deciduous trees with lower wood density are more vulnerable to drought-induced embolism and cavitation (Choat et al. 2003; Choat et al. 2005), and thus intense dry periods tend to be more harmful to deciduous plants with low-density wood (Markesteijn et al. 2010). With the rise in soil moisture, the risk of water shortage impairing sap transport is reduced, favoring the fitness and survival of these plants under the new conditions. Thus, low-density wood was favored.
The rainfall regime and groundwater depth strongly influence species composition, community structure, and biological diversity (Ehleringer and Dawson 1992; Munoz-Reinoso 2001; Naiman and Decamps 1997), and a water-stressed environment can gain tree richness and diversity after water availability changes (Xu et al. 2009). On a global scale, humid forests hold more biodiversity (Gaston 2000) in places without energy restriction, such as the tropics (O'Brien et al. 2000). Considering that the energy input did not vary among the forests studied, the clear factor that enhanced richness was the change from a "common dry forest" to an "artificial riparian dry forest" due to increased soil moisture (the so-called "Riparian Effect").
Riparian forests are a transition zone between terrestrial and aquatic systems and support more plant richness than the surrounding areas (Naiman and Decamps 1997; Nilsson and Berggren 2000) because they harbor flora associated with both humid and drier patches. The increase in richness and diversity, however, does not mean a "total" conversion of these dry forests into a typical riparian forest, because most of the species in the community are maintained and only a few are lost. Riparian forests are species-rich systems (Rodrigues and Nave 2000) due to their high heterogeneity, produced by floods (Lopes and Schiavini 2007), distinct water flows (Jansson et al. 2000), and great soil moisture variations (Rodrigues et al. 2010); despite the new species that appeared, some characteristics of the original forest remained constant.
Some of these "heterogeneity creators" of a natural riparian forest did not occur in the three forests analyzed here. First, floods did not occur because dam flow was controlled by an upstream dam; thus the water table did not vary, and soil moisture near the stream would change little over subsequent years. Flood frequency and variations in water table depth increase habitat complexity (Naiman and Decamps 1997), creating conditions for the growth of different species (Lopes and Schiavini 2007). Second, the new artificial lake has no water current, and thus sediment and seed deposition from upstream plants does not occur. The flow regime influences species composition and distribution on a small scale (Bendix and Hupp 2000; Hughes and Rood 2003), because many seeds are dispersed by hydrochory (Jansson et al. 2000) and because soil deposition creates patches with distinct infiltration and nutrient conditions (Rodrigues et al. 2010), increasing the environmental heterogeneity available for the establishment of different species. Therefore, free-flowing rivers are more species-rich than regulated ones after long periods (Dynesius et al. 2004; Nilsson et al. 1997).
This "Riparian Effect" occurred in the three dry forests studied, not only enhancing richness and diversity but also leading to a marked exchange of individuals through high mortality and recruitment. There was little change in the total number of individuals because recruitment and death of trees were balanced overall; after damming, however, this balance broke down at the species level: some species showed high recruitment rates and low mortality, while others were very negatively affected, with mortality rates higher than recruitment. Even a small change in soil moisture may induce vegetation changes (Nilsson and Svedmark 2002), and increased soil water therefore elicits different responses among species. On the one hand, water can kill roots through oxygen stress and consequent anoxia (Vartapetian and Jackson 1997; White 2007), and upland plants are usually intolerant of a riparian environment (Johnson 1994; Nilsson and Berggren 2000). On the other hand, it can break the intense seasonal dry period and enhance plant growth.
The scenario in the few years after damming was an unstable period with intense tree turnover and several consequences for species. Unstable periods follow strong perturbations, and some conclusions were difficult to draw, but it is clear that the most important species did not respond equally, and those very negatively affected should not be used in the management of areas with similar impacts.
Times of record assessment
The responses varied according to forest type (species in the semideciduous forest responded less intensely to the effects of the dam) and were concentrated in the first 2 years after impoundment: most species showed fewer changes in the T2–T4 period in all three forests, demonstrating that damming impacts tended to stabilize a few years after impoundment. Studies of other taxa after damming, such as macrophytes and insects (Fearnside 2005; Moura Júnior et al. 2011; Patz et al. 2000), have likewise demonstrated that intense changes occur after dam construction but tend to stabilize over the years (Lima et al. 2002). Even abiotic changes, such as carbon emissions, were concentrated in the first years after damming (Fearnside 2002).
Hence, the census every 2 years was necessary and satisfactory to represent the scenario after damming, with marked changes just after the impact followed by stabilization. For many species, the damming effect would have been masked if analyzed only over the T0–T4 period, because their rates in T2–T4 were three to more than ten times smaller. Moreover, we avoided the error associated with tree hydration that arises when measurements are made in different seasons of the year (Phillips et al. 2004), because all measurements (T0, T2, and T4) were carried out at the end of the rainy season (March–April), increasing the reliability of the results. The measurement interval also affects dynamic rates (Phillips et al. 2004), but the rate differences between T0–T2 and T0–T4 were too large (frequently more than 5% per year) to be attributed to the 2- or 4-year interval length alone. Finally, the rates could be influenced by the number of individuals of a species, so by our analysis criteria we considered only species with 20 or more individuals to minimize this problem, allowing general tendencies to emerge.
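The mortality and recruitment rates discussed above are typically annualized so that intervals of different lengths (T0–T2 vs. T0–T4) can be compared; one common exponential form is reviewed by Sheil et al. (1995), cited here. The sketch below uses hypothetical census numbers, not values from this study:

```python
def annual_mortality(n0, survivors, years):
    """m = 1 - (S/N0)^(1/t): annual mortality rate for a cohort of N0 trees
    of which S survive after t years (one form reviewed by Sheil et al. 1995)."""
    return 1 - (survivors / n0) ** (1 / years)

def annual_recruitment(nt, recruits, years):
    """Analogous annual recruitment rate, with recruits counted
    against the final population size Nt."""
    return 1 - (1 - recruits / nt) ** (1 / years)

# Hypothetical cohort: 200 trees at T0; 170 survive to T2; 150 survive to T4.
m_early = annual_mortality(200, 170, 2)  # rate over T0-T2 only
m_whole = annual_mortality(200, 150, 4)  # rate averaged over T0-T4
```

Here m_early exceeds m_whole (roughly 7.8% vs. 6.9% per year): an early mortality pulse is diluted when averaged over the whole period, which is how a single T0–T4 census can mask the post-damming peak.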
With dam construction, the proximity of the forest to the water table increases subsoil water reserves and breaks the intense seasonal dry period. This disturbance is permanent and is transforming the dry forests into an artificial riparian forest, although with fewer species and less heterogeneity than a natural one. We named this process the "Riparian Effect". It is reorganizing forest structure through the establishment of many water-associated species and an increase in forest basal area, because the improved water supply enhances tree growth and many trees become thicker. Dams create a permanent landscape alteration; changes in these forests will continue for several years and were most notable in the first two years after damming, with the changes in the deciduous forests more remarkable than in the semideciduous forest. This paper contributes to understanding the impacts of damming on seasonal forests. Undoubtedly, not all changes in these forests can be documented, but it is clear that the damming impacts are very significant and deserve further study.
Acker SA, Gregory S, Lienkaemper G, McKee WA, Swanson FJ, Miller SD (2003) Composition, complexity, and tree mortality in riparian forests in the central Western Cascades of Oregon. Forest Ecol Manag 173:293–308
Andersson E, Nilsson C, Johansson ME (2000) Effects of river fragmentation on plant dispersal and riparian flora. Regul Rivers 16:83–9
Baker TR, Affum-Baffoe K, Burslem D, Swaine MD (2002) Phenological differences in tree water use and the timing of tropical forest inventories: conclusions from patterns of dry season diameter change. Forest Ecol Manag 171:261–74
Barth JAC, Cronin AA, Dunlop J, Kalin RM (2003) Influence of carbonates on the riverine carbon cycle in an anthropogenically dominated catchment basin: evidence from major elements and stable carbon isotopes in the Lagan River (N. Ireland). Chem Geol 200:203–16
Baxter RM (1977) Environmental effects of dams and impoundments. Annu Rev Ecol Syst 8:255–83
Bendix J, Hupp CR (2000) Hydrological and geomorphological impacts on riparian plant communities. Hydrol Process 14:2977–90
Borchert R (1998) Responses of tropical trees to rainfall seasonality and its long-term changes. Clim Change 39:381–93
Brandao RA, Araujo AFB (2008) Changes in anuran species richness and abundance resulting from hydroelectric dam flooding in Central Brazil. Biotropica 40:263–6
Breshears DD, Nyhan JW, Heil CE, Wilcox BP (1998) Effects of woody plants on microclimate in a semiarid woodland: Soil temperature and evaporation in canopy and intercanopy patches. Int J Plant Sci 159:1010–7
Chabot BF, Hicks DJ (1982) The ecology of leaf spans. Annu Rev Ecol Syst 13:229–59
Chazdon RL, Brenes AR, Alvarado BV (2005) Effects of climate and stand age on annual tree dynamics in tropical second-growth rain forests. Ecology 86:1808–15
Chazdon RL, Letcher SG, van Breugel M, Martinez-Ramos M, Bongers F, Finegan B (2007) Rates of change in tree communities of secondary Neotropical forests following major disturbances. Philos T Roy Soc B 362:273–89
Choat B, Ball M, Luly J, Holtum J (2003) Pit membrane porosity and water stress-induced cavitation in four co-existing dry rainforest tree species. Plant Physiol 131:41–8
Choat B, Ball MC, Luly JG, Holtum JAM (2005) Hydraulic architecture of deciduous and evergreen dry rainforest tree species from north-eastern Australia. Trees-Struc Func 19:305–11
Colwell RK (2005) EstimateS: statistical estimation of species richness and shared species from samples. Version 7.5
Colwell RK, Coddington JA (1994) Estimating the extent of terrestrial biodiversity through extrapolation. Philos T Roy Soc B 345:101–18
Dynesius M, Nilsson C (1994) Fragmentation and flow regulation of river systems in the northern 3rd of the world. Science 266:753–62
Dynesius M, Jansson R, Johansson ME, Nilsson C (2004) Intercontinental similarities in riparian-plant diversity and sensitivity to river regulation. Ecol Appl 14:173–91
Ehleringer JR, Dawson TE (1992) Water-uptake by plants - perspectives from stable isotope composition. Plant, Cell Environ 15:1073–82
Evans A, Strezov V, Evans TJ (2009) Assessment of sustainability indicators for renewable energy technologies. Renew Sust Energ Rev 13:1082–8
Fearnside PM (2002) Greenhouse gas emissions from a hydroelectric reservoir (Brazil's Tucurui Dam) and the energy policy implications. Water Air Soil Poll 133:69–96
Fearnside PM (2005) Brazil's Samuel Dam: Lessons for hydroelectric development policy and the environment in Amazonia. Environ Manage 35:1–19
Gaston KJ (2000) Global patterns in biodiversity. Nature 405:220–227
Guariguata MR, Cronkleton P, Shanley P, Taylor PL (2008) The compatibility of timber and non-timber forest product extraction and management. Forest Ecol Manag 256:1477–81
Gundersen P, Lauren A, Finer L, Ring E, Koivusalo H, Saetersdal M, Weslien JO, Sigurdsson BD, Hogbom L, Laine J, Hansen K (2010) Environmental services provided from riparian forests in the Nordic countries. Ambio 39:555–66
Guo ZW, Li YM, Xiao XM, Zhang L, Gan YL (2007) Hydroelectricity production and forest conservation in watersheds. Ecol Appl 17:1557–62
Gusson AE, Vale VS, Oliveira AP, Lopes SF, Dias Neto OC, Araújo GM, Schiavini I (2011) Interferência do aumento de umidade do solo nas populações de Myracrodruon urundeuva Allemão e Anadenanthera colubrina (Vell.) Brenan em reservatórios artificiais de Usinas Hidrelétricas. Sci Florestalis 39:35–41
Hubble TCT, Docker BB, Rutherfurd ID (2010) The role of riparian trees in maintaining riverbank stability: a review of Australian experience and practice. Ecol Eng 36:292–304
Hughes FMR, Rood SB (2003) Allocation of river flows for restoration of floodplain forest ecosystems: a review of approaches and their applicability in Europe. Environ Manag 32:12–33
Humborg C, Ittekkot V, Cociasu A, VonBodungen B (1997) Effect of Danube River dam on Black Sea biogeochemistry and ecosystem structure. Nature 386:385–8
Hutcheson K (1970) A test for comparing diversities based on Shannon formula. J Theor Biol 29:151–4
Jansson R, Nilsson C, Dynesius M, Andersson E (2000) Effects of river regulation on river-margin vegetation: a comparison of eight boreal rivers. Ecol Appl 10:203–24
Jansson R, Zinko U, Merritt DM, Nilsson C (2005) Hydrochory increases riparian plant species richness: a comparison between a free-flowing and a regulated river. J Ecol 93:1094–103
Johnson WC (1994) Woodland expansion in the Platte River, Nebraska - patterns and causes. Ecol Monogr 64:45–84
Kaygusuz K (2004) Hydropower and the world's energy future. Energ Source 26:215–24
Kottek M, Grieser J, Beck C, Rudolf B, Rubel F (2006) World Map of the Köppen-Geiger climate classification updated. Meteorol Z 15(3):259–63
Laurance WF, Nascimento HEM, Laurance SG, Andrade A, Ribeiro JELS, Giraldo JP, Lovejoy TE, Condit R, Chave J, Harms KE, D'Angelo S (2006) Rapid decay of tree-community composition in Amazonian forest fragments. Proc Natl Acad Sci U S A 103:19010–4
Lima IBT, Victoria RL, Novo EMLM, Feigl BJ, Ballester MVR, Ometto JP (2002) Methane, carbon dioxide and nitrous oxide emissions from two Amazonian reservoirs during high water table. Verh Internat Verein Limnol 28:438–42
Lopes SF, Schiavini I (2007) Dinâmica da comunidade arbórea de mata de galeria da Estação Ecológica do Panga, Minas Gerais, Brasil. Acta Bot Bras 21:249–61
Manyari WV, Carvalho OA (2007) Environmental considerations in energy planning for the Amazon region: downstream effects of dams. Energ Policy 35:6526–34
Markesteijn L, Iraipi J, Bongers F, Poorter L (2010) Seasonal variation in soil and plant water potentials in a Bolivian tropical moist and dry forest. J Trop Ecol 26:497–508
Moura Júnior EG, Abreu MC, Severi W, Lira GAST (2011) O gradiente rio-barragem do reservatório de Sobradinho afeta a composição florística, riqueza e formas biológicas das macrófitas aquáticas? Rodriguésia 62:731–42
Munoz-Reinoso JC (2001) Vegetation changes and groundwater abstraction in SW Donana, Spain. J Hydrol 242:197–209
Murphy PG, Lugo AE (1986) Ecology of tropical dry forest. Annu Rev Ecol Syst 17:67–88
Naiman RJ, Decamps H (1997) The ecology of interfaces: riparian zones. Annu Rev Ecol Syst 28:621–58
Nepstad DC, Decarvalho CR, Davidson EA, Jipp PH, Lefebvre PA, Negreiros GH, Dasilva ED, Stone TA, Trumbore SE, Vieira S (1994) The role of deep roots in the hydrological and carbon cycles of Amazonian forests and pastures. Nature 372:666–9
Nilsson C, Berggren K (2000) Alterations of riparian ecosystems caused by river regulation. Bioscience 50:783–92
Nilsson C, Svedmark M (2002) Basic principles and ecological consequences of changing water regimes: riparian plant communities. Environ Manag 30:468–80
Nilsson C, Andersson E, Merritt DM, Johansson ME (2002) Differences in riparian flora between riverbanks and river lakeshores explained by dispersal traits. Ecology 83:2878–87
Nilsson C, Jansson R, Zinko U (1997) Long-term responses of river-margin vegetation to water-level regulation. Science 276:798–800
Nilsson C, Reidy CA, Dynesius M, Revenga C (2005) Fragmentation and flow regulation of the world's large river systems. Science 308:405–8
O'Brien EM, Field R, Whittaker RJ (2000) Climatic gradients in woody plant (tree and shrub) diversity: water-energy dynamics, residual variation, and topography. Oikos 89:588–600
Oliveira AT, Fontes MAL (2000) Patterns of floristic differentiation among Atlantic forests in southeastern Brazil and the influence of climate. Biotropica 32:793–810
Oliveira-Filho AT, Ratter JA (2002) Vegetation physiognomies and woody flora of the Cerrado Biome. In: Oliveira PS, Marquis RJ (eds) The Cerrados of Brazil. Columbia University Press, New York, pp 91–120
Oliveira-Filho AT, Carvalho WAC, Machado ELM, Higuchi P, Appolinário V, Castro GC, Silva AC, Santos RM, Borges LF, Corrêa BS, Alves JM (2007) Dinâmica da comunidade e populações arbóreas da borda e interior de um remanescente florestal na Serra da Mantiqueira, Minas Gerais, em um intervalo de cinco anos (1999–2004). Rev Bras Bot 30:149–61
Palmer GC, Bennett AF (2006) Riparian zones provide for distinct bird assemblages in forest mosaics of south-east Australia. Biol Conserv 130:447–57
Pascarella JB, Aide TM, Zimmerman JK (2004) Short-term response of secondary forests to hurricane disturbance in Puerto Rico, USA. Forest Ecol Manag 199:379–93
Patz JA, Graczyk TK, Geller N, Vittor AY (2000) Effects of environmental change on emerging parasitic diseases. Int J Parasitol 30:1395–405
Phillips OL, Baker TR, Arroyo L, Higuchi N, Killeen TJ, Laurance WF, Lewis SL, Lloyd J, Malhi Y, Monteagudo A, Neill DA, Vargas PN, Silva JNM, Terborgh J, Martinez RV, Alexiades M, Almeida S, Brown S, Chave J, Comiskey JA, Czimczik CI, Di Fiore A, Erwin T, Kuebler C, Laurance SG, Nascimento HEM, Olivier J, Palacios W, Patino S, Pitman NCA, Quesada CA, Salidas M, Lezama AT, Vinceti B (2004) Pattern and process in Amazon tree turnover, 1976–2001. Philos T Roy Soc B 359:381–407
Reich PB, Wright IJ, Cavender-Bares J, Craine JM, Oleksyn J, Westoby M, Walters MB (2003) The evolution of plant functional variation: Traits, spectra, and strategies. Int J Plant Sci 164:S143–64
Rodrigues RR, Nave AG (2000) Heterogeneidade florística das matas ciliares. In: Rodrigues RR, Leitão-Filho HF (eds) Matas ciliares: conservação e recuperação. São Paulo, pp 45–71
Rodrigues VHP, Lopes SF, Araújo GM, Schiavini I (2010) Composição, estrutura e aspéctos ecológicos da floresta ciliar do rio Araguari no Triângulo Mineiro. Hoehnea 37:87–105
Santos ER, Assunção WL (2006) Distribuição espacial das chuvas na microbacia do Córrego do Amanhece, Araguari - MG. Caminhos da Geografia 6:41–55
Shannon CE (1948) A mathematical theory of communication. AT&T Tech J 27:379–423
Sheil D, Burslem D, Alder D (1995) The interpretation and misinterpretation of mortality-rate measures. J Ecol 83:331–3
Sheil D, Jennings S, Savill P (2000) Long-term permanent plot observations of vegetation dynamics in Budongo, a Ugandan rain forest. J Trop Ecol 16:765–800
Sidle RC, Ziegler AD, Negishi JN, Nik AR, Siew R, Turkelboom F (2006) Erosion processes in steep terrain - Truths, myths, and uncertainties related to forest management in Southeast Asia. Forest Ecol Manag 224:199–225
Silva AC, Berg EVD, Higuchi P, Oliveira-Filho AT (2007) Comparação florística de florestas inundáveis das regiões Sudeste e Sul do Brasil. Rev Bras Bot 30:257–69
St Louis VL, Kelly CA, Duchemin E, Rudd JWM, Rosenberg DM (2000) Reservoir surfaces as sources of greenhouse gases to the atmosphere: a global estimate. Bioscience 50:766–75
Truffer B, Bratrich C, Markard J, Peter A, Wuest A, Wehrli B (2003) Green Hydropower: the contribution of aquatic science research to the promotion of sustainable electricity. Aquat Sci 65:99–110
Vale VS, Schiavini I, Araújo GM, Gusson AE, Lopes SF, Oliveira AP, Prado-Júnior JA, Arantes CS, Dias-Neto OC (2013) Fast changes in seasonal forest communities due to soil moisture increase after damming. International J Trop Biol 61:1901–17
Vartapetian BB, Jackson MB (1997) Plant adaptations to anaerobic stress. Ann Bot 79:3–20
Vorosmarty CJ, Meybeck M, Fekete B, Sharma K, Green P, Syvitski JPM (2003) Anthropogenic sediment retention: major global impact from registered river impoundments. Global Planet Change 39:169–90
White TCR (2007) Flooded forests: death by drowning, not herbivory. J Veg Sci 18:147–8
Wright SJ, Vanschaik CP (1994) Light and the phenology of tropical trees. Am Nat 143:192–9
Xu H, Ye M, Li J (2009) The ecological characteristics of the riparian vegetation affected by river overflowing disturbance in the lower Tarim River. Environ Geol 58:1749–55
The authors thank the Foundation for Research Support of the State of Minas Gerais (FAPEMIG), CAPES (Coordination for the Development of Higher Education Personnel, Process 2498/09-0), and PACCSS-FAPEMIG (Process CRA-30058-12) for financial support. Dr. A. Leyva helped with English editing of the manuscript.
The authors declare that they have no competing interests.
VSV wrote the manuscript. VSV and IS participated in the design of the study. VSV, IS, JAPJ, APO and AEG participated in the field work and revised the manuscript. JAPJ corrected the English language. VSV, JAPJ, APO and AEG performed the statistical analysis.
Tree species parameters and dynamic rates for three dry forests (Deciduous Forest 1 – DF1, Deciduous Forest 2 – DF2, and Semideciduous Forest – SF) in southeastern Brazil. T0 = before dam construction, T2 = two years after damming, T4 = four years after damming, M = mortality, R = recruitment, O = outgrowth, I = ingrowth. Only species with 20 or more individuals are shown.
Definition of neutral zone
: the portion of an ice hockey rink between the attacking and defensive zones
First Known Use of neutral zone
Learn More about neutral zone
Seen and Heard
What made you want to look up neutral zone? Please tell us where you read or heard it (including the quote, if possible). | <urn:uuid:15061569-974b-4246-981f-f27fa12ef553> | CC-MAIN-2017-13 | https://www.merriam-webster.com/dictionary/neutral%20zone | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188623.98/warc/CC-MAIN-20170322212948-00319-ip-10-233-31-227.ec2.internal.warc.gz | en | 0.905077 | 65 | 2.765625 | 3 |
Aristotle was a famous Greek philosopher who contributed his services to the field of politics, ethics, and psychology. This biography book tells the ancient wisdom, history, and legacy of the great Aristotle. Thus, people can take an insight into the life of a great philosopher. Students and researchers should consider reading Aristotle's biography to learn about his notable works. Our online books order website brings this notable Philosopher Biography to our place for Amazon books online shopping in Pakistan. Our online books bazaar makes it super reliable for bibliophiles to buy historical biographies books online in Pakistan.
· Aristotle's biography is launched by the united library, which provides only authentic information to give correct detail and information.
· It contains the entire history of Aristotle that where he studied, lived, researched, and taught.
· The mentioned works from his career include politics, metaphysics, poetics, Nicomachean ethics, and prior analytics.
· He was also a teacher of the Great Alexander, so his label makes him a more important person from history.
· The descriptive Aristotle biography is published on date January 22, 2021, and now it is available here.
Booksreading.pk, an online Pakistan bookstore, brings this concise biography of Aristotle to our online books site in Pakistan to let you buy it for your library. You only require placing your online order for this original book in Pakistan, and we will send it to your place. The facility for Amazon online books shopping in Pakistan will make you boost your collection of books by adding must-read books to it. | <urn:uuid:dc63edc5-f308-4318-9a95-d5824d989d73> | CC-MAIN-2022-05 | https://www.booksreading.pk/aristotle-biography-history-book-buy_604976.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304749.63/warc/CC-MAIN-20220125005757-20220125035757-00598.warc.gz | en | 0.954853 | 322 | 3.34375 | 3 |
The Truth About Domestic Abuse
- Domestic abuse includes physical, emotional, financial, psychological, and sexual abuse.
- Domestic abuse is totally unacceptable. Everybody has the right to live their life free of violence, abuse, intimidation and fear.
- Domestic abuse is very common. One in four women, and one in six men, experience domestic violence at some point in their life.
- Domestic abuse is very dangerous. Each week in the UK, two women are killed by a partner or ex-partner.
- Domestic abuse is about power and control. Abusive, violent and sexually abusive behaviour is wide-ranging and subtle in what it tries to achieve.
- Domestic abuse is intentional and instrumental behaviour. It is about scaring you into doing something that you don't want to do or out of doing something that you do want to do.
- The abuser is 100% responsible for their abuse. The abuse is their problem and their responsibility.
- It is not your fault. No person deserves to be abused, regardless of what they say or do.
- Perpetrators can change. Their behaviour is within their control and they can stop if they choose.
- You can't change them.
- We can't change them.
- You don't have to put up with it. Everybody has the right to safety and respect, to put themselves and their children first and to focus on their needs.
You can increase your safety. If they are intent on being violent, you will not be able to stop them, but there might be things you can do to increase your safety. | <urn:uuid:d9f69b6d-50b1-4ac0-93d6-58b8f6503272> | CC-MAIN-2018-09 | http://www.redcar-cleveland.gov.uk/information.nsf/Web?ReadForm&id=DA023ABB3A761404802577F2005C0728 | s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814857.77/warc/CC-MAIN-20180223213947-20180223233947-00595.warc.gz | en | 0.970961 | 327 | 2.78125 | 3 |
< In Lieu of Closing Remarks >
Since my childhood, I lived in constant touch with nature and described it through haiku to fill the daily pages of my life. My lengthy writing on haiku so far will be well rewarded if it persuaded even one of my readers to realize that haiku is a casual part of his or her routine life, and that he or she can make them.
I support that some of my readers have been under the impression that haiku is something beyond their reach, and that they my develop an allergy against it if pressed too much.
I trust that all of you now understand that haiku is nothing of the sort. Let's try to eliminate misunderstandings one by one and go back to the natural spirit of haiku in our heart. Then nature will have a greater presence in our life and initiate a friendly dialogue with us.
By way of concluding remarks on the historical background of the modern haiku, let me first touch upon Masaoka Shiki, a great figure in the Meiji era(1868〜1912), then Takahama Kyoshi and Kawahigashi Hekigodo(1873〜1937) who studied together under Shiki but later took entirely different paths.
An ardent admirer of Buson whose haiku was characterized by realistic description, Shiki took advantage of his knowledge on the Western painting and emphasized the importance of "sketching" in haiku.
His proposal gained much popularity among the young people who were not happy with the prevailing dull pattern of haiku, and led to the birth of a new haiku. Besides his contributions for haiku innovation, Shiki played equally impressive roles in the Waka field. Unfortunately illness took his life so young at 36 years of age in September, 1902.
Together with these and many other haiku he wrote, Shiki became an unforgettable figure in the development of haiku. Iris greatness is also found in the development of excellent successors. Among them were Kawahigashi Hekigodo and Takahama Kyoshi.
Born close in age in Matsuyama, the native place of Shiki, the two studied in the same class at a junior high school, and became Shiki's students at about the same time. They were so closely related that they worked together helping Shiki develop haiku, and sometimes competed with each other due to the difference in viewpoint.
After the death of Shiki, the two, among others, took major leadership for development of haiku. Hekigodo took the initial lead, promoted vigorously the teachings of Shiki and his own ideas, thus setting a new wave in the haiku world. He advocated his own theories in denial of the established principles on sketching, season words and 5-7-5 syllables. His activities were appreciated as "New Trend Afovement" and as such contribtlted to the modernization of haiku. However, he gradually indulged in irregular and self-righteous haiku and went off the center of the haiku society.
Kyoshi, on the other hand, revived the "Hototogisu" magazine, which had been discontinued after its first anniversary for financial reasons. He republished the magazine in Tokyo. In the days of Shiki, Hekigodo used to be active in the magazine, but in later days it became the stage mainly for Kyoshi. While Hekigodo pursued the new-trend movement, Kyoshi was bent on writing and introducing stories in the "Hototogisu" magazine. Making public a lot of stories, including "I Am a Cat" by Natsume Soseki (1867–1916), the magazine became a forum for stories rather than haiku.
Hekigodo's new trend movement was about to gain nationwide popularity, when Kyoshi made a comeback to the haiku world. Fe concluded that the movement attempted to ruin the traditional principles thus destroying the traditional haiku. Kyoshi returned to haiku to correct the trend with a most aggressive determination:
Kyoshi consistently emphasized the importance of traditions including the 5-7-5 syllables and the season word, and proposed that we ought to discover new provinces under these traditions. Finally he reached a conclusion that "the objective of haiku is to appreciate nature in the form of poetry."
These two contrasting successors caused the teachings of Shiki to be handed down through the generations to date.
The Hototogisu fostered prominent haiku poets including Shiki, Hekigodo and Kyoshi, and at the same time produced superb works of haiku. It has never missed a regular publication to date. The very history of the magazine, with its leadership in the haiku world from the Meiji to the Taisho to the Showa period, may well represent that of modern haiku.
I really enjoyed this study of haiku with my readers. Finally, let me briefly introduce several haiku events and rules, including kukai (haiku meetings).
Originating from the Latin root familia, meaning affiliation, a traditional family comprises a husband, wife and children. A family is the basic unit of a society. When it comes to knowing about families in their historical perspectives, there is a special branch of study known as genealogy, which traces family lineages throughout their histories. Lineages of famous families are well recorded in history and provide an opportunity to look into the past with a fair degree of clarity. It is not only royalty, or noble and elite families, that became famous in history; there have also been families throughout history that earned fame by virtue of excellence in sports, earned wealth and net worth, intelligence and numerous other parameters.
1. The Curie Family
The Curie family was one of the most educated and famous families in history, consisting of many distinguished scientists. Marie Curie, born in 1867, was a Polish chemist and physicist, and she is the only person in history to have won two Nobel Prizes in two different scientific disciplines. Marie Curie was the first female professor at the University of Paris, now known as 'Paris Six' or 'Marie Curie University'. She is best known for her work on radiation, and she in fact died of exposure to radiation in 1934. Marie's husband, Pierre Curie, was an accomplished French physicist, and he too won the Nobel Prize. Irene Curie, the daughter of Marie and Pierre Curie, was also a Nobel laureate in chemistry; she won the Nobel Prize for discovering that a radioactive element could be produced in a laboratory. Irene's husband, Frederic Joliot-Curie, was also a French physicist and a Nobel Prize winner. Pierre Curie's brother was likewise a well-known physicist. The second daughter of Marie and Pierre Curie, Eve Curie, was a famous French-American journalist and writer. She was a war correspondent and worked for UNICEF.
2. The Este, Vicars of Ferrara
The Este family originated from the Roman Atti family, who migrated from Rome to Este to defend Italy against the Goths. The earliest known member of this family was Margrave Adalbert of Mainz, known for his son Oberto I, Count Palatine of Italy. Oberto's grandson Albert Azzo II, Margrave of Milan, built a castle in Este. Albert Azzo II had three sons, two of whom led two branches of the family known as the House of Welf and the House of Este. The House of Welf produced the dukes of Bavaria, the dukes of Saxony, the dukes of Brunswick and Luneburg, and even a German king. Under Nicolo d'Este III, Ferrara was a cultural center, and Nicolo received many popes. Pope Eugene IV held a council of historic importance in Ferrara known as the Council of Florence.
3. The Rothschild Family
The Rothschild family originated in the late 18th century in Frankfurt, Germany. They were a famous banking family believed to possess the largest private fortune in the world in the 1800s, and they are still widely recognized as such today, even though the Rothschild family is now much smaller than at its peak in the 1800s and 1900s. The Rothschild family owns businesses in various fields including banking, mining, energy, farming and winemaking, and is also well known for its charitable generosity. The family won international fame under Mayer Amschel Rothschild, who was born in 1744 and died in 1812. He kept the fortune within the family through arranged marriages. In 1811 he extended a loan of £5M to the Prussian government, and in 1825 the family supplied enough cash to the Bank of England to save it from a market liquidity crisis.
4. The Coppola Family
Francis Ford Coppola was born to Carmine and Italia Coppola on April 7, 1939 in Detroit, Michigan, U.S.A. He was of Italian immigrant ancestry, as his paternal grandparents had immigrated to the USA from Bernalda, Basilicata. Francis Coppola directed great movies such as The Godfather and Apocalypse Now. His daughter Sofia is known for The Virgin Suicides and Lost in Translation. The Coppola family is sometimes referred to as cinema royalty on account of the family's contribution to the movie industry. Exactly how talented the Coppola family is, is evident through its cinematic lineage of achievement. Coppola's lineal descendants have together been nominated for a total of 23 Academy Awards, winning nine of them, including Best Picture, Best Actor, Best Adapted Screenplay, Best Original Screenplay, Best Director and Best Original Score.
5. The Kennedy Family
The Kennedy family comprises the descendants of Irish Americans Joseph P. Kennedy Sr. and Rose Fitzgerald. It is one of the most influential families in America, sometimes referred to as the royal family of America, and known for its contribution to politics as Democrats. Most of the family members are Harvard educated and have contributed notably to the university's John F. Kennedy School of Government. The family is also famous for its wealth and its more 'photogenic' members. John F. Kennedy was the 35th President of the United States of America and served in the White House until his assassination in 1963. His younger brothers, Robert Kennedy and Edward Kennedy, were also U.S. Senators. Alas, despite all their good fortune and fame, the Kennedy family has been plagued by fatal accidents, illness and misfortune throughout its entire reign.
6. The Bhutto family
The Bhutto family has played a very dominant role in Pakistani politics. Zulfikar Ali Bhutto served as the fourth President of Pakistan from 1971 to 1973 and as the ninth Prime Minister of Pakistan from 1973 to 1977. His daughter, Benazir Bhutto, served as the eleventh Prime Minister of Pakistan for two non-consecutive terms, from November 1988 until October 1990, and from 1993 until November 1996. Her husband, Asif Ali Zardari, is the current President of Pakistan, and their son Bilawal Bhutto is currently running against the Prime Minister of Pakistan in the coming elections to be held on May 11, 2013.
7. Nehru-Gandhi family
The Nehru-Gandhi family is an Indian political family that has dominated Indian politics since independence, but the family has no ancestral relationship with Mahatma Gandhi, the father of the Indian nation. Three family members — Pandit Jawaharlal Nehru, his daughter Indira Gandhi, and her son Rajiv Gandhi — have been Prime Ministers of India. Rajiv's wife, Sonia Gandhi, is currently the head of the Congress party. Rajiv and Sonia Gandhi's son, Rahul Gandhi, is the youngest member of the family and is likely to continue the family tradition.
8. Genghis Khan Family
Genghis Khan headed the Khan family. He was known as the 'Great Khan', or 'The Emperor of Mongolia', where the people still regard him as the founding father of their nation. His name is synonymous with tyranny in history because of the large-scale massacres he committed against civilian populations during his invasions, which expanded enormously beyond the boundaries of his original realm. He unified the nomadic tribes of northern Asia and invaded the Kara-Khitan Khanate, the Caucasus, the Khwarezmid Empire, Western Xia and the Jin dynasty before nominating Ogedei Khan as his successor.
9. Chauvelin Family
The Chauvelin family was a famous and influential French family. The earliest known member of the family was Toussaint Chauvelin, who was a public prosecutor. His eldest son, François Chauvelin, was an attorney general. Toussaint's great-grandson, Bernard Chauvelin, was born in 1662 and died in 1755; he was a counselor to the Parliament. Bernard's son was the abbot of Monieramey and a counselor to the Parliament. Bernard François, Marquis de Chauvelin, followed in his father's footsteps and became an attendant to Louis XVI.
10.The Hatoyama Family
The Hatoyama family is a famous Japanese political family, sometimes referred to as Japan's 'Kennedy family'. The family history is traceable to Kazuo Hatoyama, who was the Speaker of the House of Representatives from 1896 to 1897. His great-grandson, Yukio Hatoyama, was the Prime Minister of Japan from 2009 to 2010. Yukio's grandfather, Ichiro Hatoyama, was elected Prime Minister of Japan three times, serving from 1954 to 1956, and Yukio's father, Iichiro Hatoyama, was a former Foreign Minister of Japan. In 2012, Yukio Hatoyama retired from politics.
The Genome, or the human genetic map, is one of the greatest all-time discoveries. Never before, has the role of genes been so well documented and clearly defined as it is today. Genes can be transferred from one generation to the next and carry their inherent ancestral traits with them. This phenomenon seems to allow families to just keep on producing ‘living legends’. But on the flip side of that coin, this phenomenon also ensures that the poor souls at the bottom of the ‘gene pool’, never see a family member make it out of third grade or across the poverty line! | <urn:uuid:1024ca60-9fdf-4f3f-b0d7-98dc7957be1d> | CC-MAIN-2017-43 | http://infomory.com/famous/famous-families-in-history/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825900.44/warc/CC-MAIN-20171023111450-20171023131450-00102.warc.gz | en | 0.981962 | 1,988 | 3.015625 | 3 |
Application: The use of knowledge and skills to make connections within and between various contexts.
Communication: The conveying of meaning through various forms.
Knowledge and understanding: Subject-specific content acquired in the course (knowledge), and the comprehension of its meaning and significance (understanding).
Thinking: The use of critical and creative thinking skills and/or processes, as follows:
- planning skills (e.g., focusing research, gathering information, selecting strategies, organizing a project)
- processing skills (e.g., analyzing, interpreting, assessing, reasoning, generating ideas, evaluating, synthesizing, seeking a variety of perspectives)
- critical/creative thinking processes (e.g., problem solving, decision making, research)
Months before the Berlin Wall fell on Nov. 9, 1989, with the Soviet stranglehold over the Eastern Bloc crumbling, a young political scientist named Francis Fukuyama made a declaration that quickly became famous. It was, he declared, “the end of history.”
But the heralded defeat of Communism didn’t usher in a lasting golden age for Western, capitalist-driven liberalism. Far from it.
In the decades since, seismic events, movements and global patterns have shaped the 21st century into a splintered, perhaps more dangerous era than the Cold War.
The 9/11 attacks happened; the Iraq and Syria wars helped produce the bloody emergence of the Islamic State group and, later, a refugee crisis. The economy tanked in 2008. China became a superpower. Russia resurged. A new populism took root.
All have had a transcendent impact. History, it seemed, didn’t “end.”
Today, Fukuyama acknowledges that some developments over the decades have disappointed him. He says his book wasn’t a prediction, but an acknowledgement that many more democracies were coming into existence.
Now the world is in a phase he didn’t anticipate. In a recent interview with The Associated Press, Fukuyama took time to reflect on some of what he has seen — and what could still happen.
AFTER THE WALL: THE FIRST YEARS
With the passage of the decades, Fukuyama says, now “you have a whole generation of people who didn’t experience the Cold War or Communism.”
In those initial years after the wall came down, new countries were born and Germany reunified. But wars and conflicts also erupted after the Soviet Union collapsed and postcolonial debt-settling spiked.
Some of the 1990s’ bloodiest civil wars — Congo, Liberia — became footnotes to history. Rwanda endured a genocide that killed hundreds of thousands. Yugoslavia, ripped asunder by sustained violence, massacres and displacement, produced far more coverage and even new nations.
Western military intervention at the end of the 1990s blunted Serbia’s nationalism and unshackled Kosovo. A weakened Russia was in no position to help its traditional ally in Belgrade. But the global economy was generally strong.
Then came 9/11.
THE EARLY 21ST CENTURY: TECTONIC SHIFTS
Al-Qaida took terror to a never-before-seen level that was watched in real-time around the world. In response, the Bush administration invaded Afghanistan and ousted the Taliban, which had hosted Osama bin Laden as he plotted against the West. Eighteen years later, the United States is still there.
The Iraq War was based on false intelligence that Iraqi dictator Saddam Hussein, backed by the U.S. when he fought Iran, possessed weapons of mass destruction. Washington pushed a “you’re either with us or against us” global outreach that backfired in some places — most notably in Britain, where then-Prime Minister Tony Blair remains a political outcast to this day for following Bush.
Fukuyama was once aligned with neo-conservatives and supported the Iraq invasion, but later declared his opposition to the war. Now, he says the Iraq war undermined American policy around the world, while the 2008 financial crisis undercut the U.S. claim that it had established a good economic international order.
Says Fukuyama: “I think those two events paved the way for a lot of the populist backlash that we’re seeing now.”
POPULISM AND THE CULT OF PERSONALITY
Fukuyama says he’s dismayed so many voters could choose divisive populist leaders who lack a formula for governing democratically.
A marriage of populism and nationalism is a dominant dynamic now in many places — from Trump’s “America First” to Brexit, from Israel’s refusal to give up settlements in occupied Palestinian territory to India’s accelerated crackdown in disputed Kashmir and Turkey’s recent invasion of Syria.
Fukuyama says the populist leader’s playbook typically goes something like this: “I represent you, the people. You are pure and the elites are corrupt, and I need to eliminate them from our political system.”
But Fukuyama says he still believes that the checks and balances in democracies’ long-established institutions will continue to work.
Populism, he argues, isn’t conducive to good governance — or, necessarily, prosperity. “Launching a trade war … doesn’t seem like a very good idea for continued prosperity,” he says. “It could be that these types of movements will be self-limiting in the future.”
SYRIA, THE ISLAMIC STATE AND THE GLOBAL REFUGEE CRISIS
The ruinous civil war in Syria, in its ninth year, began with an uprising against President Bashar Assad as part of the ill-fated 2011 Arab Spring that deposed autocrats but replaced them with more dictatorship, war and chaos.
The Syrian conflict brought suffering of a monstrous magnitude: hundreds of thousands killed, millions displaced and the rise of the barbaric Islamic State group, which at one point controlled vast swaths of both Syria and Iraq and carried out terror attacks across Europe.
A byproduct of IS’ rise was the global refugee crisis and the flight of persecuted millions on a scale not seen since World War II.
To Fukuyama, the rapid rise in migration produced cultural backlash and an anti-immigrant feeling that was exploited by “a lot of pretty opportunistic politicians who saw this as a big opportunity to mobilize new sources of support for themselves.”
THE RESURGENCE OF RUSSIA
The road from the dissolution of the Soviet Union to today’s powerful Russia has been messy and not without its initial humiliations for Moscow.
Boris Yeltsin’s years in power after Mikhail Gorbachev’s ouster as the last Soviet leader were characterized by a freewheeling approach to the free market which introduced kleptocracy, the selling off of state industries and the era of oligarchs, mafia and defeat in the first Chechnya war.
Then, on the stroke of the new millennium, Vladimir Putin came to power as a counterbalance to the Western liberalism he so often rails against.
On his watch, a second war with Chechnya killed thousands. Russia invaded Georgia and annexed Crimea from Ukraine after backing Russian separatists.
With fresh dominance in its own backyard, Russia began to look further afield, most notably meddling in the U.S. election, which some say helped Trump reach the White House.
In 2018, Putin — still in power, still a risk-taker — boasted of the development of new nuclear weapons that have no equivalent in the West. They came, he said, in response to U.S. withdrawal from a Cold War-era treaty banning missile defenses and U.S. efforts to develop a missile defense system. “No one has listened to us,” he said. “You listen to us now.”
Fukuyama says of Putin: He “has created a form of Russian nationalism that is dependent on empire, (on) his control of all of the countries surrounding Russia. He feels that he is basically at war with the West. This is a hangover from Soviet times because that is the world he grew up in.”
CHINA THE SUPERPOWER
China’s authoritarian grip on anything it perceives as its internal affairs, from mass detentions and abuse of Muslims in Xinjiang Province to its no-patience approach to Hong Kong protesters, continues unabated.
Beijing’s rise in the last three decades has redrawn the geopolitical map. Its financial clout, its attempts to extend its footprint with its Belt and Road Initiative and unresolved trade issues with the United States make it a wildcard more than ever.
Fukuyama says China’s increased wealth and power is upending the international system — no matter how that power is used. But, he notes, since Xi Jinping came to power, China has moved in “a much more authoritarian direction.”
The new landscape, he says, “has led to the current deterioration of U.S.-China relations. And I’m afraid that’s a situation that is going to persist even if you had a different (U.S.) administration in power.”
FROM 1989 TO 2019: THE BIG SWEEP
Looking back from today, Fukuyama still thinks the Berlin Wall’s fall was, on balance, a huge gain for human freedom.
One of the darker historical ironies of the past 30 years — primarily in Europe — has been the shift by once-communist states to the far right, in some cases embracing ideologies not far from fascism. But despite “worries about countries like Hungary and Poland,” Fukuyama believes they are still much better off than under a communist dictatorship.
Many people don’t quite understand how being part of the European Union, for example, has afforded them peace and stability that didn’t exist before.
Today, Fukuyama looks to other uprisings — protest movements in Hong Kong, Algeria and Sudan, for example — and says he holds out hope for a new moment when history might encounter another crossroads.
He calls it the “spirit of 1989.”
Credit : Associated Press (AP) | Photo Credit : (AP) | <urn:uuid:5e985b1f-20c9-4dc3-86b9-48c3608eb8a7> | CC-MAIN-2021-39 | https://www.newdelhitimes.com/end-of-history-30-years-on-does-that-idea-still-hold-up/?shared=email&msg=fail | s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053657.29/warc/CC-MAIN-20210916145123-20210916175123-00081.warc.gz | en | 0.954855 | 2,011 | 2.59375 | 3 |
Volume 38, Number 2: ERA
In March 2015 actress Patricia Arquette gave an acceptance speech at the Academy Awards for her role as a supporting actress, her remarks designed to raise awareness of the unequal status of women in the United States. To a standing ovation of women, including actresses Meryl Streep and Jennifer Lopez, Arquette called upon “every woman who gave birth, to every taxpayer and citizen of this nation, we have fought for everybody else’s equal rights. It’s our time to have wage equality once and for all and equal rights for women in the United States of America.” Backstage, after the speech, she expanded on her comments, specifically targeting the shortcomings of the American Constitution, which she argued was written for men. She stated, “The truth is, even though we sort of feel like we have equal rights in America, right under the surface there are huge issues that really do affect women. And it’s time for all the men who love women, and gay people, and others, to fight for us now.”1 Her speech generated more public dialogue about the Equal Rights Amendment than had been seen in decades. | <urn:uuid:aaa87424-c1f9-4518-81ee-0d06ad0b4fba> | CC-MAIN-2023-50 | https://frontiers.utah.edu/issue/volume-38-no-2/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100651.34/warc/CC-MAIN-20231207090036-20231207120036-00789.warc.gz | en | 0.969661 | 248 | 2.828125 | 3 |
Blockchain technology and cryptocurrencies like Bitcoin that rely on it are becoming more prevalent in many fields. When you see a technology being used in diverse arenas like regional voting trials, major retail, NASA research projects and refugee ID initiatives, it’s not surprising to find that funders and nonprofits are also getting in on the game.
Here, we look at some of the interesting ways blockchains and cryptocurrencies are changing philanthropy, along with some of the challenges and pitfalls.
What Are Cryptocurrencies and Blockchains?
If you already know, skip ahead! Cryptocurrencies are digital monies secured through encryption, which are typically not controlled by central banks. Blockchain technology is diverse and quickly evolving, and this is just a very general overview. A blockchain is essentially a ledger that has records (like the details of a digital money transaction) locked in groups called blocks.
The blockchain is often called a distributed or decentralized system because it keeps copies of these blocks on a spread-out network of computers, rather than on a centralized server. Every computer in the network has a matching copy of all the blocks and is said to be “running the blockchain.” The blocks of records are verified, added to the chain, and secured through cryptography, the encrypting of information. “Crypto-mining” — a complex and energy-guzzling computer process that we won’t fully cover here — both verifies the encryption of many blockchains and mints new cryptocurrency.
Though not infallible, these systems are considered very difficult to tamper with, because that would require all the connected computers in the global blockchain network to be compromised at the same time. While blockchains can be now designed for many purposes and programmed for applications in almost any field, they are often used for secure, traceable record keeping and quick peer-to-peer virtual currency transactions.
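The hash-linking described above can be illustrated with a toy example. This is only a rough sketch of the record-keeping idea — real blockchains like Bitcoin's add proof-of-work mining, Merkle trees and a peer-to-peer network, none of which are modeled here, and all names below are illustrative:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically (sorted keys).
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(batches):
    # Each block stores its records plus the hash of the previous block,
    # so every block is cryptographically linked to the one before it.
    chain = []
    prev_hash = "0" * 64  # placeholder hash for the genesis block
    for records in batches:
        block = {"records": records, "prev_hash": prev_hash}
        chain.append(block)
        prev_hash = block_hash(block)
    return chain

def is_valid(chain):
    # Recompute every hash; any tampered block breaks the link that follows.
    prev_hash = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev_hash:
            return False
        prev_hash = block_hash(block)
    return True

chain = build_chain([["alice->bob: 5"], ["bob->carol: 2"]])
print(is_valid(chain))            # True
chain[0]["records"][0] = "alice->bob: 500"  # tamper with an old record
print(is_valid(chain))            # False: downstream hashes no longer match
```

Changing even one old record invalidates every later block's `prev_hash`, which is why rewriting history requires redoing the whole chain — and, on a distributed network, doing so on most of the participating computers at once.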
Many nonprofits now accept cryptocurrencies like Bitcoin or Ethereum, including the United Way, Red Cross, and Save the Children. All three receive digital monies through a Bitcoin payment processor called BitPay.
Save the Children, along with the Water Project and other groups, has also received donations through the BitGive Foundation, which in 2013 became the first nonprofit organization specializing in using bitcoin to fund charitable works. BitHope is another example of this type of foundation. BitGive’s campaigns have thus far raised modest thousands, making its programs similar to average crowdfunding endeavors. It also has a blockchain-based transparency initiative called GiveTrack that seeks to make donation processes clearer for all involved. (We’ll look at that in more detail below.) Citing the benefits of cryptocurrency fundraising, founder and Executive Director Connie Gallippi said, “When you don’t have to go through the traditional system of banks and governments, the money gets there a lot faster, it is much less expensive, [and] it is also cryptographically secure, so you know it is getting to who it was intended to get to.”
On another scale altogether, Fidelity Charitable, which holds the nation’s largest donor-advised fund, received nearly $70 million in cryptocurrency in 2017—10 times more than the year before.
“It is one of the fastest-growing assets that we are seeing wanting to be contributed to charity. Many people who own bitcoin or other forms of cryptocurrency do want to be philanthropic,” vice president Amy Pirozzolo said.
In 2014, for tax purposes, the IRS categorized digital monies not as currencies but as properties, similar to stocks or bonds. Donating these assets protects the giver from capital gain taxes (taxes on the profit of a sale) and gives them a tax deduction for the donation to boot. Just as many philanthropists get big tax benefits from donating stock, donors can now do the same with cryptocurrency holdings.
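A small back-of-the-envelope calculation shows why donating an appreciated asset directly can beat selling it first. The rates below are illustrative assumptions, not tax advice, and real deductions are subject to IRS limits not modeled here:

```python
# Toy comparison: donate appreciated crypto directly vs. sell, pay
# capital gains tax, and donate the remainder. All rates are assumptions.
COST_BASIS = 1_000       # what the donor originally paid
FAIR_VALUE = 10_000      # market value at donation time
CAP_GAINS_RATE = 0.20    # assumed long-term capital gains rate
INCOME_TAX_RATE = 0.35   # assumed marginal income tax rate

# Sell first: tax is owed on the profit, so the charity gets less.
gains_tax = (FAIR_VALUE - COST_BASIS) * CAP_GAINS_RATE
sell_then_donate = FAIR_VALUE - gains_tax

# Donate directly: no capital gains tax, and the full fair value
# is generally deductible against income.
donate_directly = FAIR_VALUE
deduction_value = donate_directly * INCOME_TAX_RATE

print(f"Charity receives if sold first:    ${sell_then_donate:,.0f}")
print(f"Charity receives if donated as-is: ${donate_directly:,.0f}")
print(f"Approx. tax saved by deduction:    ${deduction_value:,.0f}")
```

Under these assumed numbers the charity receives $8,200 after a sale but the full $10,000 from a direct gift, and the donor still deducts the full fair value — the double benefit described above.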
In the political realm, candidates can accept bitcoin as an “in-kind” donation, according to the Federal Election Commission (FEC). A handful of politicians are on board, but with some FEC guidelines creating potential gaps (like no reporting requirement for donations under $200), the totals are hard to track. Missouri Republican Austin Petersen is believed to have received the largest single digital currency donation in federal election history—0.284 bitcoin, or $4,500 at the time of donation. He used BitPay to receive the gift. Democrat Brian Forde's congressional campaign in California's 45th District received multiple bitcoin donations worth more than $66,000 in August and September 2017.
"A number of members of Congress have asked for my advice about how they can accept bitcoin, as well,” Forde said.
Several blockchain and crypto-based organizations and funders have made big payouts to nonprofit organizations. The Pineapple Fund, backed by an anonymous successful crypto-investor going by “Pine,” is a well-known, but now spent-out, star in the crypto-philanthropy sphere. The fund’s motto is, “Because once you have enough money, money doesn't matter.” It has committed 5,104 bitcoin and donated $55.7 million to 60 diverse charities including the Water Project, Give Directly, the ACLU, the Tennessee River Gorge Trust, and Women Who Tech.
In March 2018, San Francisco-based global blockchain banking platform Ripple gave $29 million to fund all the donation requests on the teacher fundraising site DonorsChoose.org—the biggest known single donation ever made in cryptocurrency. As we’ve reported, Ripple also recently committed $50 million to expand academic research on blockchain “with top universities around the world.”
Also in early 2018, decentralized payment provider OmiseGO and ethereum blockchain founder Vitalik Buterin partnered with the nonprofit GiveDirectly to send $1 million to refugee families in Uganda. GiveDirectly “allows donors to send money directly to the poor with no strings attached,” and its site shares extensive research showing the power of direct cash donations, which are usually well-spent by those in poverty. More than 12,000 households benefited from this crypto-donation, which was exchanged from the OmiseGO digital tokens into local currency.
The reduction in intermediaries that characterizes GiveDirectly’s funding structure makes it a great fit for the peer-to-peer, decentralized nature of blockchain transactions. And, blockchain and crypto-finance, which are often accessible with very low fees through a smartphone app, can empower people who don’t have established traditional banking systems in their communities. As we’ve covered, the Gates Foundation has been backing innovations using blockchain systems to boost financial inclusion for several years now.
In June 2018, Brian Armstrong, CEO of leading digital currency exchange Coinbase, started GiveCrypto.org, “a nonprofit that distributes cryptocurrency to people living in poverty.” Similar to the GiveDirectly model, this organization seeks to place funds right into the hands of people in struggling economies, where they can convert crypto into their local currency, carry out crypto-transactions, or hold it (“hodl” in crypto-slang) over the long term. Armstrong already donated $1 million himself to the organization, which has now raised $4 million in total. He hopes the fund will grow to $1 billion within two years.
Armstrong pointed out in a blog post that like many quick-rising tech entrepreneurs, crypto-investors amassed large amounts of wealth very quickly. He wrote :
[The] reputation of the crypto community has been dominated by images of ‘bros in Lambos,’ whose antics get a lot of attention. This doesn’t represent the best of our community. Most people I respect and know in the crypto-ecosystem believe we have a responsibility to help this technology reach a much wider audience.
Other Crypto-Giving Formats
Charity coins are cryptocurrencies that are usually created to fund specific causes. For example, the nonprofit Charity:Water is raising money through the sales and mining of its own digital currency called the Clean Water Coin. Similarly, the RootProject sells its Roots Tokens and uses them in crowdfunding campaigns to support various “social good” projects and nonprofits around the world, including those relating to homelessness, education and reforestation. And some charities like UNICEF Australia ask backers to donate extra computer power to mine various cryptocurrencies.
Then there is the new GiveTrack system from BitGive. It aims to use a public blockchain to offer supreme transparency, a trait often coveted within the philanthrosphere. “GiveTrack is a platform nonprofits use for taking donations and sharing with donors exactly how their contributions are used, while tying donations directly to a project result,” the site states. It is currently in beta and uses an “immutable and transparent” blockchain ledger to provide financial transaction information in real time. For example, a nonprofit would identify the specific costs for a program, and once funded in bitcoin, all of its purchases would then be viewable on GiveTrack.
“Project results are tied into GiveTrack through a reporting mechanism that provides notification of project milestones and written updates from the charity's representatives in the field,” the site states, which enables “donors to watch the progress of the project all the way to completion.”
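The transparency property GiveTrack describes — donations and purchases recorded on an immutable ledger that anyone can re-verify — can be illustrated with a minimal sketch. This is a hypothetical toy model, not GiveTrack's actual implementation: each entry is chained to the hash of the previous one, so rewriting history is detectable.

```python
import hashlib
import json

def add_record(ledger, record):
    """Append a record, chaining it to the previous entry's hash so past
    entries can't be altered without detection (the transparency property)."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    ledger.append({"record": record, "prev": prev_hash, "hash": entry_hash})

def verify(ledger):
    """Re-derive every hash from the public records to confirm integrity."""
    prev = "0" * 64
    for entry in ledger:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

# Hypothetical project activity: a donation, then a purchase against it.
ledger = []
add_record(ledger, {"type": "donation", "amount_btc": 1.0, "project": "vision screening"})
add_record(ledger, {"type": "purchase", "amount_btc": 0.4, "item": "screening kits"})
print(verify(ledger))  # → True
```

Any donor holding a copy of the records can run the verification step independently, which is the sense in which such a ledger ties contributions to visible project spending.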
While this initiative aims to be the epitome of precise and accountable programmatic funding, it may leave little room for nonprofits to be flexible or responsive during a project, and is a long way from the general operating support that many organizations crave. Still, the intention to provide transparent record keeping and tie donors more directly to a project’s progress is certainly interesting, and many types of work are now being funded on the platform, including wildlife crime scene training for African rangers, kids’ vision screening, and sustainable agricultural initiatives.
Downsides of Crypto-Giving
Each transaction on a public blockchain like Bitcoin’s can be viewed online, but the parties involved can remain largely incognito, transacting through numeric public addresses and private keys (some blockchains offer additional privacy measures, but we won’t explore those now). This pseudonymity is an interesting feature to consider in the realm of philanthropy, potentially adding another layer of shadow giving to a sphere where donation sources often remain hidden. That said, an individual’s transactions may be linked to a real identity through an online account, or traced to an IP address, so they are not considered 100 percent anonymous.
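The pseudonymity described above comes from the fact that a ledger entry names only hashed addresses, never people. The sketch below is a simplified illustration (real Bitcoin addresses also involve RIPEMD-160 hashing and Base58Check encoding, which are omitted here); the keys and amounts are hypothetical.

```python
import hashlib

def pseudonymous_address(public_key_hex: str) -> str:
    """Derive a simplified address by hashing a public key.
    Real Bitcoin addresses add RIPEMD-160 and Base58Check on top of this."""
    return hashlib.sha256(bytes.fromhex(public_key_hex)).hexdigest()[:40]

# What an observer of the public ledger actually sees: addresses and an
# amount, with no real-world identity attached to either party.
entry = {
    "from": pseudonymous_address("02a1" * 16),
    "to": pseudonymous_address("03b2" * 16),
    "amount_btc": 0.5,
}
print(entry)
```

Linking one of these addresses back to a person requires outside information, such as an exchange account or an IP address, which is why the anonymity is partial rather than absolute.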
Blockchain-based currencies can both cloak personal finance and be adopted for criminal uses like investment scams and money laundering. Regulations on cryptocurrency are complex and controversial, varying widely from country to country. For example, China banned cryptocurrency exchanges and initial coin offerings (ICOs). South Korea banned anonymous crypto-trading, and Japan blocked the trade of certain “privacy-rich coins.” Meanwhile, Venezuela launched a state cryptocurrency in October 2018, and central banks in Norway and Sweden are considering taking this step, as well.
In the E.U. and U.S., regulations are still in development. Branches of the U.S. government refer to digital currencies alternately as properties, securities, commodities and funds. New York opted to create its own BitLicense system to regulate in-state crypto-businesses.
In addition to unclear and developing regulations, market volatility can be a major deterrent to many who are considering transacting with cryptocurrencies. Bitcoin’s meteoric rise in 2017 and significant crash in 2018 are a prime example of the sharp value fluctuations these unregulated money markets can experience. A crypto-donation's value could change after being sent to a nonprofit. Of course, charities can choose to sell/convert the digital currency into fiat or government-issued money immediately upon receipt. Because tax-exempt charities don’t have to pay capital gains taxes, if they sell the crypto, the full value of the gift persists.
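The effect of volatility on a gift's realized value is simple arithmetic, sketched below with hypothetical prices: converting to fiat immediately locks in the receipt-time value, while holding exposes the charity to the market.

```python
def donation_value_usd(amount_btc: float, price_at_receipt: float,
                       price_at_sale: float) -> tuple[float, float]:
    """Compare a crypto gift's fiat value at receipt vs. when it is
    actually converted. All figures here are hypothetical."""
    return amount_btc * price_at_receipt, amount_btc * price_at_sale

# A 2 BTC gift received during a peak, converted after a downturn:
at_receipt, at_sale = donation_value_usd(2.0, 19_000.0, 6_500.0)
print(f"valued at receipt: ${at_receipt:,.0f}; realized if held: ${at_sale:,.0f}")
# → valued at receipt: $38,000; realized if held: $13,000
```

Selling immediately would have preserved the $38,000 value, and because a tax-exempt charity owes no capital gains tax on the sale, nothing is lost to taxation either way.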
And, like all technologies, blockchains and digital currencies are not perfect. For example, blockchains can become congested with traffic, and cryptocurrency exchanges and wallets can be hacked.
Despite complications, imperfections and growing pains, the diversity of applications for blockchains and cryptocurrencies speaks to the core technology’s usefulness, adaptability and endurance. Around the globe, innovations in this space continue to move forward as regulators play catch-up, and they don’t show any signs of stopping. Blockchain systems can move funds in a direct, secure, and egalitarian manner, and seem to many like a fertile area for philanthropists and nonprofits to keep (cautiously) exploring. | <urn:uuid:7d914d9d-9fd2-4ecb-bfa7-67c030d55e85> | CC-MAIN-2019-39 | https://david-callahan-hfwl.squarespace.com/home/2018/11/26/brave-new-world-how-cryptocurrencies-and-blockchain-are-changing-philanthropy | s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573053.13/warc/CC-MAIN-20190917061226-20190917083226-00175.warc.gz | en | 0.944595 | 2,616 | 3.0625 | 3 |