The Mohammed bin Rashid Space Centre (MBRSC) has hosted the United Nations/United Arab Emirates High Level Forum: "Space as a Driver for Socio-Economic Sustainable Development", which was organized by the United Nations Office for Outer Space Affairs (UNOOSA) in conjunction with the United Arab Emirates Space Agency.
Under UNOOSA’s leadership, the forum delivered concrete recommendations in the form of the “Dubai Declaration”, which will contribute to the United Nations event “UNISPACE+50”, to be held in Vienna in June 2018 to mark the 50th anniversary of the first United Nations Conference on the Exploration and Peaceful Uses of Outer Space.
The Dubai Declaration urges all parties to further utilize the space sector as a driver for economic and social development, emphasizing that strengthening socio-economic development will require an integrated approach among the space industry and other sectors to understand and meet the needs of users and society at large. The recommendations emphasize the need to build stronger international cooperation and coordination in the peaceful uses of outer space at all levels, and the need to broaden access to space. The declaration asserts that space exploration is a long-term driver for innovation, for strengthening international cooperation on an all-inclusive basis, and for creating new opportunities to address global challenges. It also affirms the need to strengthen youth and women's involvement in the space industry.
Participants of the Forum also declared that the space economy, space society, space accessibility and space diplomacy constitute the main pillars of Space2030, the global space agenda that will emerge from UNISPACE+50.
“This Declaration and what we have learned at this Forum have provided us with an understanding of how we should move forward in utilising space for development, on the need to get open access to space for an increasing number of countries and in assisting States to attain the sustainable development goals,” said Simonetta Di Pippo, Director of UNOOSA. Ms Di Pippo also thanked MBRSC for their excellent work in organizing the Forum.
Al Mansoori continued: “The hosting of the forum comes as part of the UAE Space Agency’s strategic goals of building and strengthening international relationships and partnerships in the field. This stems from a belief in the importance of international cooperation and of developing relationships with the most important stakeholders in the global space sector. These approaches are in line with the strategic plans and visions for the state to establish strong international cooperation and exchange knowledge with other nations around the world."
Al Mansoori added: “The UAE’s experience in space has been inspirational to many countries in the region. We are pursuing our ambition for space exploration, integrating space technology in our national development projects, developing a regulating legal framework for the space sector, establishing specialized research centers and developing knowledge transfer programs, hoping that this will contribute to our national goal in positioning the UAE as one of the best nations in the world”.
"The topics discussed at the forum are highly important and essential to achieving a sustainable future for the space sector; and the "Dubai Declaration" gave recommendations that enhance and activate the role of space science and technology in achieving comprehensive economic and social development," Al Mansoori concluded.
Yousuf Hamad Al Shaibani, Director General of MBRSC, said: "The Dubai Declaration is of historical significance for the global space sector as recommendations and trends delivered therein will contribute to a more flourishing future for the global space sector."
Al Shaibani added: "Hosting this forum in the UAE and the issuance of the Dubai Declaration affirms the UAE’s attractive positioning for key stakeholders in the global space industry, and the trust placed in the UAE by the global space community."
Al Shaibani stressed “the importance of continued compliance with these trends and future courses that aid harnessing space technology and applications to address any challenges for the greater good of humanity”. He added: “The UAE has always supported UN trends with respect to the optimal use of space as a core sector and a driver for socio-economic sustainable development.” |
Want to boost your odds of making a Hollywood blockbuster? Hire more minorities, according to a new study spearheaded by UCLA sociologist Darnell Hunt.
After crunching the numbers on hundreds of movies and TV shows, the fifth annual “Hollywood Diversity” report found people of color bought a majority of the tickets for five of the top 10 films in 2016. But representation for minorities is still lagging, according to the study, with less than 14 percent of lead actors being people of color. Women aren’t faring much better, either, with less than a third of leading roles. If movie moguls hire more women and minorities to fill these roles, according to the study, audiences will reward them at the box office.
“Consistent with the findings of earlier reports in this series, new evidence from 2015-16 suggests that America’s increasingly diverse audience prefer diverse film and television content,” the report said.
TV shows with at least 20 percent minority representation benefited as well, receiving higher social media engagement and better ratings, according to the study. Looking to this TV season, the study said results are “mixed” so far, with minorities increasing their share to 28 percent of lead roles, while women have lost ground compared to previous years. |
If the name Elisabeth de Waal sounds familiar, then you probably read her grandson Edmund de Waal’s book The Hare with Amber Eyes, a memoir of their family, the Ephrussis, wealthy Viennese Jews by way of Odessa.
The Exiles Return, Elisabeth’s posthumously published novel, is not as engaging as The Hare With Amber Eyes, but its portrayal of post World War II Vienna and Elisabeth’s unique perspective make it a worthwhile read.
Elisabeth de Waal, born Elisabeth von Ephrussi, was raised in the grand and gilded Palais Ephrussi on the Ringstrasse in Vienna. After studying law, philosophy, and economics at the University of Vienna, she moved abroad. She bravely returned to Austria shortly after the Anschluss (Germany's annexation of Austria) to retrieve her parents, who had lingered too long in the mistaken belief that their status as prominent Austrian citizens overrode their Judaism. After the war, Elisabeth devoted over a decade to attempting (with limited success) to reclaim her family’s looted art collection from the Austrian government.
In The Exiles Return, three characters return to 1954 Vienna. Although their circumstances are quite different, their lives ultimately intersect in (rather melodramatic) ways.
Kuno Adler, a Jewish research scientist, leaves his wife and daughters in Manhattan to return home as part of the repatriation program sponsored by the Austrian government. Theophil Kanakis, a wealthy member of Vienna’s Greek community, is looking for fun and bargain buys. And eighteen-year-old Marie-Theres Larsen is a bored American teenager on an attitude adjustment trip to her mother’s family.
Overall the novel is a bit stilted, but interestingly several “third rail” topics such as Nazi atrocities, homosexuality, abortion, and suicide are broached. While not shocking today, these would have been risqué in the 1950s when Elisabeth wrote the novel.
I appreciated this novel, but it is possibly more noteworthy for the writer than the writing.
Elisabeth de Waal was born in Vienna in 1899. She wrote five unpublished novels, two in German and three in English, including The Exiles Return. She was married to Dutchman Hendrik de Waal and lived in Tunbridge Wells. She died in 1991. |
Creating the basic structure of a Visual Studio project (inserting copyright info, adding commonly used assembly references, setting the default namespace/adding file system folders etc.) quickly becomes tiresome — this is particularly true when working as a web consultant, creating solutions for a wide range of customers.
Using project templates which provide the basic structure of e.g. a component can save a lot of time in the long run, as most of the trivial configuration tasks can be handled automatically. Project templates also offer an excellent opportunity to provide your colleagues with implementation examples and boilerplate code which follow company and industry best practices.
Create a project resembling the intended structure and content of the template.
Copy all the files in the project folder to a suitable subfolder in the Visual Studio project template root.
Add a basic vstemplate file to the template folder.
Add custom parameters to the vstemplate file and use them in the various “file blueprints” of the template.
Use the template to create a new project. Check to make sure all custom parameters and files are inserted as intended. Repeat steps 4 & 5 until satisfied with the result.
Make the template available to colleagues by e.g. deploying it to a network share.
Steps 1 – 4 are described in more detail below.
Simply create a project, add some files to it, set root namespace and similar assembly information, add company and copyright info etc.
The project template root is configured in Visual Studio via “Tools → Options… → Projects and Solutions → User project templates location”.
Put the files from step 1 into the subfolder matching the project language beneath the project template root, e.g. “[project template root]/Visual C#/[…]” for a C# project.
At this point the contents of e.g. “[project template root]/Visual C#/Reason→Code→Example/Component” simply consist of the project file (Component.csproj) and two class files (“/Properties/AssemblyInfo.cs” and “Constants.cs”).
Without a vstemplate file, Visual Studio will not recognize the files from step 2 as a template, and will hence not show it as an option in the “Add New Project” dialog. For an in-depth description of the vstemplate format, see MSDN.
An example of a vstemplate file (“template manifest”) is shown below, intended to be used in componentized solutions. The manifest makes it easy for me to tailor company specific information to clients, thus not having to modify other files in the project template. When giving lectures in Sitecore component architecture I usually hand out a project template based on the manifest shown below to developers at the start of the course.
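A minimal sketch of what such a manifest can look like is shown here; the template name, folder names and custom parameter values are placeholders for illustration, not the actual company-specific values from my course material:

```xml
<VSTemplate Version="3.0.0" Type="Project"
            xmlns="http://schemas.microsoft.com/developer/vstemplate/2005">
  <TemplateData>
    <!-- What shows up in the "Add New Project" dialog -->
    <Name>Component</Name>
    <Description>Standard component project with company defaults.</Description>
    <ProjectType>CSharp</ProjectType>
    <DefaultName>Component</DefaultName>
    <ProvideDefaultName>true</ProvideDefaultName>
    <CreateNewFolder>true</CreateNewFolder>
  </TemplateData>
  <TemplateContent>
    <!-- ReplaceParameters="true" enables $parameter$ substitution in a file -->
    <Project File="Component.csproj" TargetFileName="$fileinputname$.csproj"
             ReplaceParameters="true">
      <ProjectItem ReplaceParameters="true"
                   TargetFileName="Properties\AssemblyInfo.cs">Properties\AssemblyInfo.cs</ProjectItem>
      <ProjectItem ReplaceParameters="true">Constants.cs</ProjectItem>
      <!-- Layer folders created automatically in every new project -->
      <Folder Name="DAL" TargetFolderName="DAL" />
      <Folder Name="BL" TargetFolderName="BL" />
      <Folder Name="UI" TargetFolderName="UI" />
    </Project>
    <!-- Company-specific values live here, so tailoring the template to a
         new client only means editing the manifest, not the blueprint files -->
    <CustomParameters>
      <CustomParameter Name="$companyname$" Value="Example Corp" />
    </CustomParameters>
  </TemplateContent>
</VSTemplate>
```

Keeping the client-specific values in the CustomParameters section is what makes the tailoring described above possible without touching the rest of the template.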
To use parameters in a file template, open it in a text editor, insert the parameter and make sure the ReplaceParameters attribute is set to "true" for the appropriate node in the vstemplate file (e.g. <Project File="Component.csproj" TargetFileName="$fileinputname$.csproj" ReplaceParameters="true">).
Using parameters in other file types follows the same pattern.
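As an illustration, a blueprint AssemblyInfo.cs can combine reserved template parameters (such as $safeprojectname$ and $year$, which Visual Studio fills in automatically) with a custom parameter like the hypothetical $companyname$ declared in the manifest. Note that the file is not compilable in this form; the $…$ tokens are only replaced when a project is created from the template:

```csharp
// Properties/AssemblyInfo.cs (template blueprint)
// $safeprojectname$ and $year$ are reserved parameters provided by Visual Studio;
// $companyname$ is assumed to be declared as a CustomParameter in the vstemplate file.
using System.Reflection;

[assembly: AssemblyTitle("$safeprojectname$")]
[assembly: AssemblyProduct("$safeprojectname$")]
[assembly: AssemblyCompany("$companyname$")]
[assembly: AssemblyCopyright("Copyright © $companyname$ $year$")]
```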
After creating the project, the root namespace, framework etc. have been set to the values selected in the wizard.
The layers “DAL”, “BL” and “UI” have been created as folders. As mentioned earlier, this is a good way of conveying company best practices to colleagues, helping to enforce a uniform structure recognizable across all solutions your company is maintaining. |
I need help with viruses on my computer. I have Norton installed but I still get viruses. I can't prevent them.
In this excerpt from Answercast #89, I look at why anti-virus software always lags behind the malware designers and how you can keep yourself safe on the internet.
So... There are a couple of different issues that I want to talk about with this question.
One is that there is no single anti-virus or anti-malware product that will absolutely catch every single virus or Trojan or other form of malware that's out there. There just isn't.
I have an article on that: "I run anti-virus software. Why do I sometimes still get infected?"
The bottom line is that it's ultimately a race. It's a race between the different kinds of anti-malware technologies and the people who write malware. They come up with new techniques; new things to exploit and they push their viruses and malware out quickly, more quickly than software like anti-malware tools can be updated.
Imagine... it takes a little bit of time for something as complicated as an anti-malware tool to actually respond to tricky techniques with the complicated algorithms that they need to use to detect viruses - without detecting false positives.
So it's a race. Malware is being created every day and anti-malware software is being updated. But it always (almost by definition) lags behind by some amount of time - during which you and I, and everyone else, are vulnerable to these new and increasingly more destructive pieces of malware.
So, that's one of the reasons that I always, always so strongly recommend that you make sure your anti-malware software is up to date with the latest version.
Make sure that the latest version of that software is configured to get its updates automatically - particularly the updates to the database of malware that they use. That typically will get updated at least once a day and sometimes more often.
Make sure that your software is configured to automatically take those updates.
So, with that out of the way... "You can't prevent them?"
Unfortunately, I strongly disagree. If you're constantly getting malware; if you are constantly getting viruses on your computer, to put it bluntly, you're doing something wrong.
Now, something wrong may be as simple as not having a firewall installed or having other technical issues like: not keeping your anti-malware software up to date, and so forth. Do those.
But more importantly, nine times out of ten, viruses don't arrive on your computer because they tried to get in, they arrive on your computer because you invited them in. That typically takes one of two forms: either you downloaded and ran an attachment that came via email - or you downloaded a file of some sort and ran it from a site that is (to put it bluntly) less than reputable.
So, a lot of malware prevention is, in fact, in your own hands. It's part of your own habits and how you operate your computer on the internet safely.
So I'm going to point you at another article. It's actually what I consider to be my most important article - and that is "Internet Safety: Keeping Your Computer Safe on the Internet." It covers not only what I've talked about here: malware, malware tools, keeping things up to date and so forth but also talks a little bit about making sure that you are using the internet in a safe way.
That's what I would suggest you start with; that you review those things - and as I say, make sure that your software is being updated and updated regularly.
How did I get all these viruses if I have anti-virus software? Anti-malware tools need to be run frequently enough, and be kept updated to keep you safe. I'll look at what to consider when configuring protection.
Follow Leo's suggestions. A friend of mine uses and subscribes to Norton and for years would ask me to fix his computer. He finally got smart and contacted Norton support and hasn't called me. Did that work? Personally I ditched anything Norton 7 years ago. Tried 5 other pay AVs and then tried the FREE AVs. Never looked back. Nada, zilch, No problems for 3 years. Just my 2 cents. BTW, Got McAfee as a trial on a laptop free for 6 months 7 years ago and it seemed to work okay.
I used Norton for a number of years until I wised up & scanned my PC with several of the freebies which immediately detected a number of unopened, Trojan infected zipped files that my bloated Norton had ignored for many months.
Then to my dismay I discovered the infected files were much easier to remove from my system than the self-inflicted Norton virus itself!
Not to herald Norton Internet Security package, but it's been the best thing I've ever had. And Customer Support is now one of the best. I call and get a tech. Three times in the past two years I had a problem and blamed a virus. I sent msg and they remotely accessed my machine and fixed registry problems that were not even a virus.
Self inflicted viruses are not uncommon, but lately most of the clean up I've been doing for customers has been "drive by" installs. A recent example: the PC of an office admin for a manufacturers representative was infected by the order entry site she uses every day to do her work. The malware creators injected invisible code onto an innocent site, and that code silently and automatically infected viewers of the site. No action needed by the viewer, and nothing visible to alert that there was a problem. Her PC was fully patched, and had up to date anti virus. I could not tell her any way to avoid a similar future scenario. This kind of story is now the majority of what I deal with.
As a follow-up on Jim Murphy's comment, I do computer repair and cleanup and like Jim's customer, I'm getting a lot of infected computers coming in from drive-by infections. One was from a state-run job application site. No warning, no install prompt, just "Boom" and it's infected.
That is the majority of the cleanups I do now. I'd estimate half of the cleanups are from 'invitation' and the other half are from drive-by.
There was also that worm back in the mid 2000's where all you had to do was connect to the internet and you were infected.
So, while Leo's advice is sound and solid, he is doing his readers a disservice by saying one can be virus free by being careful where you go on the 'Net or that one invited these viruses.
I still believe that many viruses are "manufactured" by anti virus sellers. Who else makes profit out of a virus?
Much, if not most, malware is designed to make money through various means such as stealing passwords and bank account information, and turning computers into spam sending robots. It is highly unlikely that companies such as Norton would have to create viruses to stay in business. However there are some rogue antivirus programs that infect your computer with malware which holds your computer hostage in order to get you to buy their unlocking program. |
Richard Collins is the Chairman and Chief Executive Officer of Istation. With nearly 5 million students enrolled worldwide, Istation is impacting teachers and students globally through ground-breaking education technology that blends direct and systematic instruction with animation, game-like interactions, and unintimidating fun.
As an entrepreneur, Richard has invested in and/or operated businesses in real estate, energy, and media since 1972. He started his career in banking, becoming chairman of two commercial banks early on. He has also acquired and developed ranch, commercial, and residential investment property; participated in oil and gas drilling ventures and acquisitions; and served as a director of privately owned energy companies.
He has served as a principal in several media investments, including motion picture theaters, radio stations, and newspapers. Richard and several partners formed Istation in 1998.
In this interview, Richard tells us a bit about his entrepreneurial story and why he started Istation. He also shares the epic moment he had managing cash flows and what he and his team did to resolve it. Read on, enjoy!
Where you start is less important than that you start. Pick a good team and a good idea and see where it takes you. There are ups and downs in any business, but you will never know what they are until you start down that road.
What made you start Istation and make significant changes in the industry? How did the idea for your business come about?
My family has been interested in education, specifically kids who didn’t get every break in the world, for generations. In trying to make a difference, I’ve found that some issues are simply radioactive. For example, vouchers are radioactive on both sides, but everyone is in favor of the effective use of technology in school — that’s a universally positive issue.
In my personal experience, I’ve found that every child has a unique ability — it could be as a student, could be as an athlete — but in order to be successful in life, every child needs to learn how to read. It's the idea of merging technology with the fundamental need to read that ultimately led to the formation of Istation.
How do you find people that truly care about your business? How important is having good employees?
It’s simple: teamwork makes the dream work. Good employees are absolutely essential to our business. We're a mission-driven company, with our mission being to support educators, empower kids, and change lives. It's a job that energizes even the most cynical amongst us. You can’t look at a kid’s proud, smiling face and not realize that you're making a difference in the world by working for Istation.
A lot of people want to make money, but an equal number want to make enough to get by simply knowing they are making a positive impact on the world.
To turn mission-driven work into a profitable enterprise is the real challenge. In my world, patience plus persistence produces profits.
How did you get Istation off the ground? Did you bootstrap or pitch with local accelerators or VCs?
I’m a Texas entrepreneur, and by that I mean I’ve made and lost money in almost every way imaginable. I’m lucky that by the time I came across Istation, I had enough in the bank that we could self-fund a lot of the needs that we have.
At Istation we’ve taken on minimal debt, avoided outside investors, and have only 28 shareholders (all of whom work at the company today). A healthy balance sheet gives us a lot of flexibility that others don’t have. Too many companies take on too high a debt load or trade away equity for impatient investors; part of our success is that we don’t have to deal with those issues.
What would you say are the top 3 skills or traits needed to be a successful entrepreneur?
Patience: Success doesn't happen overnight.
Persistence: There are boundless setbacks. It’s not the falling down that matters, but the getting back up that counts.
Profits: The best way to ensure an enduring business is to focus on the bottom line. A healthy balance sheet makes for a healthy business. It's also important to gauge the long-term cost effectiveness of a strategy; don’t give up a longer-term opportunity simply because you're after a quick profit.
In 2008, we were a multimillion-dollar revenue business, but we were losing about a million dollars a year on a cash flow basis. We were about to run out of money, and I knew we needed another $1 million to break even.
I took it upon myself to solicit additional funding from investors and investment banks, but I didn’t like their terms. Outside investors seemed to bring with them a lot of trouble. So I figured out a way to bridge the cash gap — nothing fancy, and no layoffs. We managed our cash ruthlessly, deferred projects for as long as possible, and focused manically on revenue.
Within a year we had doubled revenue, and we were generating positive cash flow. It’s a fun story now, but at the time I felt we were at the brink of collapse.
Marketing in the field of education is difficult. Teachers don’t want to hear from a company about how amazing it thinks it is; teachers want to hear from other teachers about what really works.
At Istation, we have created a program that's made by teachers for teachers. Our goal from a marketing perspective is to get teachers so excited about our program (its ability to improve student outcomes and save teachers’ time) that they can’t help but tell their teacher friends about us. That kind of word of mouth can’t be bought, but it can be cultivated with the right program and the right people behind it.
My mother is one of my great inspirations. She was the first woman on the Dallas City Council, and in those days it took more than a little guts to debate with the 13 or so other men on the council. She taught me to believe in myself, stand up for the little guy, and never stop fighting for what's right. I feel very lucky to have had her as an influence in my life.
What's the biggest mistake you’ve made while running Istation?
We’ve been very lucky in that we haven’t made a whole lot of mistakes. When we do make a mistake, we own up to it quickly and fix it as soon as possible.
There's a saying that goes, “When you win, use ‘we.’ When you lose, use ‘me.’” I like that saying and think it fits the Istation culture well.
Do you think it’s important for an entrepreneur to have hobbies other than work? What do you do in your free time?
Absolutely. Some of my best ideas come from spending time outside of the education space. I'm a big history buff and love to explore the lessons of people and civilizations from ages ago. I was recently in Rome and remarking on both the building and the fall of a great empire. There are many lessons we can apply even today from the Romans.
I don’t think it's important what your hobbies are, just that you have and pursue them with passion.
What advice would you give to entrepreneurs starting a business in Texas? Where should they start? |
This interactive multimedia program examines animals and animal classification with tutorials, audio narration, and stunning photographs. The program teaches the characteristics which distinguish plant from animal, and one animal from another. Each program provides a framework to classify organisms by the hierarchy of classification. Detailed descriptions of the characteristics, lifestyles, and environments of simple animals, worms, molluscs, spiny-skinned animals, joint-legged animals, fish, amphibians, reptiles, birds, and mammals are provided. Students will learn about the interdependence of life forms, adaptation, migration, learned and instinctive behaviour, evolution, and social structure of animal groups. Students will study natural selection; animal self-protection by movement, teeth and tails; and the balance of nature. The programs offer a comprehensive basic vocabulary and taxonomy of biology, defining each word with precise, self-explanatory illustrations. Areas covered in this illustrated dictionary are: Protozoans, Sponges and Coelenterates, Flatworms and Echinoderms, Molluscs and Miscellaneous Groups, Arthropods (except Insects), Insects, Lower Chordates, Fishes, Amphibians, Birds, Reptiles and Mammals. |
Replacing plastic modular belt types with Flat-Flex stainless steel belting can increase food conveyor hygiene standards by at least 10 times, and in some cases by more than 100 times.
The openness of the Flat-Flex design results in less build-up of contaminants than plastic modular belts, as well as making cleaning easier and allowing visual inspection of drive shafts without the need for dismantling. The advantages of stainless steel over plastic for belting include easier and more effective cleaning, as well as greater resistance to the kind of damage that leaves scratches and crevices, which can create increased opportunities for attachment and growth of bacteria.
Research in the UK shows that with fish and meat Flat-Flex picks up fewer bacteria, maintains a lower level of contamination over time and is easier to sanitise, possibly because the gaps in plastic modular belting cannot be as readily cleaned as the stainless steel belting and harbour bacteria with quicker recontamination of the belt as a consequence. Drive shafts and the undersides of plastic modular belting are particularly difficult to clean in comparison with Flat-Flex belting.
Plastic modular belting was also found to contain trapped debris, even after thorough sanitising and rinsing. Experiments with carrots showed that Flat-Flex could usually be cleaned to a satisfactory level with just one clean, but plastic modular belting often required a second or even third clean to reach a standard acceptable for production to start.
The increasingly rapid growth in bacteria on plastic modular belting compared with Flat-Flex stainless steel belting, especially after two hours, is shown by results of the study with chicken meat (fig 1) after sanitisation with Multikleen.
In the USA, where Flat-Flex is approved by the US Department of Agriculture (USDA), research shows that, with proper cleaning and sanitising schedules, stainless steel belting reduces the problems of biofilms forming on product contact and non-contact surfaces. Consisting of microbes and substances that protect them from surrounding environments, biofilms can harbour potentially dangerous pathogens and create reservoirs of contaminants that are very difficult to eradicate completely. Once a biofilm is established, bacteria living within it can withstand stronger doses of sanitising agent – up to 3000 times stronger than unattached cells – and are more resistant to heat. Bacteria can also be loosened and contaminate product flowing over the biofilms.
Design features of Flat-Flex and Compact Grid help to eliminate the crevices and hard-to-reach places where biofilms form, and also help to improve hygiene levels generally, especially in high-usage and difficult-to-clean areas of conveyor belting. They have between 70 and 85 per cent open framework structure, are designed to reduce or eliminate areas where product or debris can become lodged and do not typically need to be removed from the conveyor system for cleaning. |
The British economy is expanding again at a respectable pace. George Osborne, the Chancellor, has sustained much criticism for policies of tight austerity to restore the public finances even while Britain was in recession. He is at least able to claim that the UK has the fastest-growing economy in the G8; ahead of his Autumn Statement he promised a “responsible recovery.” Living standards, however, are stubbornly not rising. This dichotomy is increasingly crowding out other subjects in economic debate. Ed Miliband, the Labour leader, insists that the party is “fighting for all the people of our country now facing the worst cost-of-living crisis in their lifetimes.” If Labour manages to frame the election argument so that it is about disposable incomes rather than output growth and the deficit, it will have a politically potent theme.
Yet stagnation in incomes and wages is an issue on which little serious political thinking has been done because it defies easy ideological rationalisation. It isn’t an issue specific to the management of the economy by the coalition government. It’s a phenomenon that stretches back many years.
Labour denounces wage stagnation and income inequality, but it is not the sole repository of that call. The appeal of populist movements on both wings of politics, but especially on the right (the Tea Party movement in the United States is an example), lies in large measure in the impression that households on middle incomes are losing out. While wage and income stagnation ostensibly offers an opportunity for Labour, it is at least as likely to encourage populist campaigns on the right that are driven by protectionism and hostility to immigration. (Labour’s own defensive and increasingly illiberal rhetoric on immigration suggests that this point is not lost on the party leadership.) There is a sensible and progressive argument, founded on economic efficiency as well as equity, for more direct government intervention to raise median living standards. Unfortunately, as is still not adequately recognised by Ed Miliband and his colleagues, the state of the public finances—a legacy of Labour’s mismanagement under Gordon Brown—makes this extremely difficult.
Since the bitter global recession of 2009, the national economic debate has focused on quarterly GDP figures as a measure of whether—and how confidently—the UK has emerged from this catastrophe. There are many things wrong both with the debate and the measure, and these deficiencies are unfortunately compounded by the media’s thirst for controversy. While politicians have argued about double- or triple-dip recessions, Britain’s economy remains smaller than its pre-crisis peak. If it had merely continued to grow at its long-term average rate since 2007, it would be some 30 per cent bigger than it is now. The political argument, moreover, takes a drearily predictable form. The government declares that its strategy of austerity has laid the foundations for recovery; Labour complains that economic growth is not being felt by households in higher standards of living.
Labour is not wrong on this, but the story is more intractable than it intimates. Economists are familiar with the flaws of judging welfare by GDP growth. GDP measures the market value of all goods and services produced in a year. GDP per capita divides this by the mid-year population. At the most basic level, neither measure takes account of broader measures of welfare such as happiness or health. More particularly, GDP per capita isn’t a measure of individual wellbeing. A better measure for this is median household incomes (not the arithmetical average, which can be skewed by outliers at either end of the income distribution). As this measures, by design, the income in the middle of the distribution, it is a reasonable proxy for living standards.
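The point about the mean being skewed by outliers can be made concrete with a small numerical sketch (the income figures below are invented for illustration, not taken from the ONS data discussed here):

```python
from statistics import mean, median

# Five hypothetical household incomes clustered around the middle.
incomes = [18_000, 21_000, 24_000, 26_000, 30_000]
print(mean(incomes), median(incomes))   # 23800 24000 - both near the middle

# One very high income at the top of the distribution drags the
# mean upward, while the median barely moves.
incomes.append(500_000)
print(mean(incomes), median(incomes))   # roughly 103167 vs 25000
```

This is why median household income, rather than the arithmetical average or GDP per capita, is the better proxy for the living standards of the typical household.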
Figures from the Office for National Statistics show that UK median household incomes over the past 35 years have generally tracked GDP. For the middle fifth of households, average disposable income in 2010/11 amounted to £24,400. In real terms this is almost 1.8 times the equivalent figure in 1977. The gross income of the middle fifth of non-retired households has increased from £20,300 in 1977 to £37,000 in 2010/11. This is, on the face of it, much better than the experience of the US, where median household incomes have expanded at a consistently slower rate than GDP. The reason for this difference may be related to the very rapid expansion of incomes at the top of the income distribution in the US—more so than in the middle—whereas in the UK the growth rates at the top and the middle have been more similar.
But in the UK there are variations around the economic cycle. From the mid-1990s to the middle of the last decade, when the economy was expanding briskly between recessions, growth in median household incomes outstripped GDP. But in the years immediately preceding the economic downturn, it slowed quite sharply. In the five years from the onset of the downturn, it fell by almost 9 per cent in real terms, though this effect was cushioned to some extent by cash benefits and a fall in the proportion of incomes eaten up by taxes. Taking account only of non-retired households, gross incomes of the middle fifth of households fell by slightly over 6 per cent but direct taxes have fallen by substantially more.
A progressive tax system can ease the stagnation of incomes up to a point but there are limits that politicians are loath to talk about. The most fundamental is the shift across the advanced industrial societies towards knowledge-based economies. Yet the commitment of the previous Labour government to get at least half of young people into university has had a countervailing effect. Figures from the Department for Business, Innovation and Skills show that the proportion of school-leavers entering higher education in 2011/12 was only just under 50 per cent. Employers are typically seeking graduates even for relatively low-skilled occupations. Yet the corollary of this is that degrees, being plentiful, no longer command such a premium in the marketplace.
Two longer-term trends thus appear quite stubborn. First, higher education is not translating into the boost to incomes that graduates may have expected. Part of the reason may be that students in higher education are not in large numbers taking degree courses that are particularly in demand from employers. Second, growing technological demand for labour is also leaving behind semi-skilled and unskilled workers.
Meanwhile, the general relationship between disposable incomes and output growth appears quite problematic. Retail sales picked up in 2013 but not because of a rise in disposable incomes. Whereas there has been cumulative real GDP growth of a little over 4 per cent since the recession of 2009, aggregate household disposable income has not recovered—it’s been broadly flat. So no one in particular is feeling better off even though the economy appears at last to be recovering.
Now, it is possible for standards of living to increase even as real incomes stagnate. This happens when goods and services improve in quality or decline in price, or both. It wouldn’t be hyperbole to say that the way households live has been revolutionised in the past 20 years by technology—principally consumer electronics and the emergence of the digital age. The extraordinary expansion of capacity and quality has been accompanied by a dramatic fall in absolute prices. This may now present a problem for politicians in that consumers do not regard these advances as compensation for stagnant incomes; rather, they are part of modernity itself. What is more, household incomes are being squeezed by rises in prices for more basic goods and services, demand for which does not change when prices do, especially gas and electricity.
Nobody knows what the political effect of these pressures will be. Labour believes that it has an election-winning theme in its call for an energy price freeze. This would make little economic sense unless it took account of industry costs, yet that does indeed appear to be the proposal. Populism is a potent message in these circumstances and the danger is that it might be expressed in peculiarly damaging ways.
During the long business expansion between the mid-1990s and the middle of the last decade, a sense of wellbeing was buttressed by the easy availability of consumer credit. As everyone knows, this was a mirage. The risks of lending had not, in fact, been mitigated by new financial instruments and securitisation: they had simply been passed around the financial system. That irresponsibility is not going to be repeated soon, especially given the regulators’ insistence that the banks build up their capital reserves. At some point interest rates will rise and in any event cheap borrowing to finance consumption will not be available.
In short, there has recently been a slight easing in credit conditions and households have responded to better economic news by increasing their net borrowing. But this is unsustainable and in any event reflects mainly a breathing space in the very painful process of deleveraging that households have had to engage in since 2008. Economic anxiety about stagnant incomes can’t be dissipated by the same means adopted in the business expansion. Households have too much debt; they aren’t in a position to drive a consumer boom even supposing that this were desirable.
There is a strong case, on grounds of efficiency and justice, to reduce inequality, and the least disruptive way of doing that is to raise median household incomes. This might be done by increasing cash benefits. The aggregate value of these benefits, such as tax credits and housing benefit, has roughly doubled for middle-income households over the past 35 years. As a proportion of gross income, cash benefits amount to a little over 10 per cent for the middle fifth of non-retired households. The scope for boosting incomes by increasing these cash benefits is severely limited by budgetary constraints, however.
Here’s the main political problem. As living standards stagnate and a direct measure to boost them isn’t available, there will be increasing political pressure to opt for bad solutions. In many respects it’s encouraging that the squeeze on living standards hasn’t so far given a boost to populist parties of the far left and far right (disregarding the exceptional by-election victory of George Galloway in Bradford West, one of just a handful of seats in the UK where he might be a credible candidate). But the temptation for bad policies intended to preserve living standards is constant. It’s observable particularly in the controversy over EU enlargement and the free movement of peoples across national borders. I recently debated with Nigel Farage, leader of Ukip, on the EU and even my debating partner, a Liberal Democrat MP, criticised the previous Labour government for opening Britain’s labour markets to Polish and other eastern European immigrants in 2004. No politician, apparently, is prepared to say that this was a good thing that boosted the economy by opening it up to new skilled workers. It’s widely recognised among labour market economists in Germany that it was a mistake for their country not to adopt the same policy—the skilled migrants generally came to the UK.
It makes eminent economic sense to boost demand and mitigate the stagnation of real incomes by encouraging more skilled immigration. The experience of eastern European immigration since 2004 has been twofold: far more people came than the Labour government expected, and I don’t deny that concentrations of new immigrants in particular localities can put a strain on services. But the economic effect was overwhelmingly positive. New immigrants, being generally of working age, contribute more to the Exchequer than they take out. This issue is one respect in which policies to mitigate the central economic problem in political debate dismayingly aren’t attractive to politicians.
Other ways of addressing wage and income stagnation may be costly and won’t be immediately effective, but can be justified as investment rather than current expenditure. There is a strong case for state investment in improving literacy and numeracy for young people entering the workforce with low skills, and for providing lifelong learning and education. There is also a stubborn market failure in that if companies invest in training they risk losing their more skilled workers to competitors. The only short-term course will have to be largely costless regulatory changes.
What needs to be avoided are populist measures that are held out as a palliative but which will damage living standards. Trade protection is an extreme example, but not a fanciful one. There is also a broader risk that the case for increasing median incomes and narrowing extremes of inequality is confused with a more thoroughgoing aversion to enterprise. Not all inequality is bad. Income inequality is in some respects a signalling device. It indicates to young workers where they should improve their own skills and allows a smoother allocation of capital. The problem comes when inequalities in income are not obviously justified by workers’ differing marginal contributions. The single greatest cost of banks’ behaviour during the boom years may not even be in huge sums of taxpayers’ bailout money; it may instead be in spreading the notion that high rewards are unrelated to effort and ability.
Finally, there needs to be a recognition of intergenerational equity. Younger workers face the prospect of supporting retirees who, in number, will come to swamp them. That needs to be addressed progressively by a steady rise in retirement age and slowing the rate of increase in pensioner benefits.
The rest of this decade will be a risky and disruptive time for the British economy. The problem of income stagnation and inequality does not look imminently resolvable. Median incomes are being squeezed over the long term; incomes at the lower end might be brought up progressively by investment in skills and some form of subsidy. But it is difficult to envisage cash benefits to middle income households rising significantly. Direct interventions are justifiable and necessary, but only where they are, in effect, capital investments to enhance people’s earning power. The risk is that politicians will instead intervene to protect markets and keep out skilled workers from overseas. Unfortunately, that course appears to be gaining popularity across the political divide.
Scientists have identified a protein that they believe may play a major role in the development of Alzheimer's disease. In a report in the journal Science Translational Medicine, the researchers said that brain autopsies of some Alzheimer's patients have shown high levels of a protein called GPR3.
Experiments that eliminated the protein in mice with the disease showed an improvement in the animals' condition. But doctors say more research is needed to see whether the same results can be achieved in humans.
The Extraordinary Synod of Bishops on the Family in October 2014 and the Ordinary Synod on the Family in October 2015 have made reflection on the vocation and mission of the family, both in the Church and in the modern world, very timely. So during 2015, Theology 101 will explore the Church’s teaching on many of the themes that are being considered by the two synods.
In order to provide the necessary context for a consideration of the single-parent family, we must first orient ourselves to our ultimate end. The Catechism of the Catholic Church states in the first paragraph of the first page that we were created freely and out of love by God for eternal life in communion with God. Communion with God, who has revealed himself as a loving Trinitarian communion of Father, Son and Holy Spirit, is then the destiny planned for humanity.
It follows that if we are to truly live and be most fully alive, it only makes sense that we need to live in harmony with that for which we are made. Communion with God, i.e., being of the same mind, with the same love, united in heart, thinking one thing with God (Ph 2:2), becomes the goal and foundation of Christian life.
The family is the original cell of social life. It is a community where one can learn moral values, begin to honor God and exercise freedom in a good way. The family offers opportunities to care and take responsibility for the young, the old, the sick, the handicapped and the poor. It is an initiation into life in society, as the family teaches us to see others as brothers and sisters of our one heavenly Father.
The final document of the Extraordinary Synod of Bishops on the Family in October 2014 (8) recognized that many “children are born outside of marriage, in great numbers in some countries, many of whom subsequently grow up with just one of their parents or in a blended or reconstituted family.” In addition to these “out of wedlock” births, divorce, separation, outright spousal abandonment and the death of a spouse all contribute to the increase in single-parent families we see today throughout the world.
The Church necessarily and rightly affirms the sanctity and indissolubility of marriage. The Church also asserts the right of every child to be born within the context of committed, marital love because it provides the best conditions for raising children.
The synod called for respect to be shown to those who suffer unjustly because of the actions or death of a spouse. Pastoral care, material assistance and guidance must be directed to single-parent families to help them bear the responsibility of providing a home and raising their children.
Single parents: to be faced with all the responsibilities of parenting by yourself is a challenge that touches the very core of your life. We bishops express our solidarity with you. We urge all parishes and Christian communities to welcome you, to help you find what you need for a good family life, and to offer the loving friendship that is a mark of our Christian tradition.
Wherever a family exists and love still moves through its members, grace is present. Nothing – not even divorce or death – can place limits upon God’s gracious love.
And so, we recognize the courage and determination of families with one parent raising the children. Somehow you fulfill your call to create a good home, care for your children, hold down a job, and undertake responsibilities in the neighborhood and church. You reflect the power of faith, the strength of love, and the certainty that God does not abandon you when circumstances leave you alone in parenting.
Three-dimensional digital models of the lower deciduous incisor from Riparo Bombrini (left) and the upper deciduous incisor from Grotta di Fumane (right).
A newly published study reveals that Homo sapiens belonging to the Protoaurignacian culture may have been the ultimate cause for the demise of Neanderthals.
Researchers from the University of Bologna, Italy, and the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, analyzed two deciduous teeth from the prehistoric sites of Grotta di Fumane and Riparo Bombrini in Northern Italy. The state-of-the-art methods adopted in this study attribute the teeth to anatomically modern humans. New AMS radiocarbon dates on bones and charcoal from the site of Riparo Bombrini, along with previously published dates for the Grotta di Fumane sequence, show that these teeth represent the oldest modern human remains in an Aurignacian-related archeological context, overlapping in time with the last Neanderthals. The results have strong implications for our understanding of the interaction between modern humans and Neanderthals, as well as for the debate on the extinction of the latter.
Stefano Benazzi from the University of Bologna and colleagues from the CNR Institute of Clinical Physiology (Pisa, Italy) compared digital models from CT scans of the human tooth from Riparo Bombrini with those of modern human and Neanderthal dental samples. Digital methods were used to compare the internal features of the dental crown, namely the thickness of the enamel. The results showed that the specimen from Riparo Bombrini belonged to a modern human.
Viviane Slon and colleagues from the Max Planck Institute for Evolutionary Anthropology were able to analyze the mitochondrial DNA from the Fumane 2 dental specimen, discovering that its mitochondrial genome falls within the variation of modern humans, at a basal position in haplogroup R, which is typical of pre-agricultural mtDNAs in Europe.
Sahra Talamo from the Max Planck Institute for Evolutionary Anthropology undertook a comprehensive program of radiocarbon dating to establish a firm chronology for the tooth from Riparo Bombrini, ascertaining that it is about 40,000 years old.
What Are The Best Snoring Cures?
Most people have had to put up with sharing a home with someone who snores from time to time, but if your sleep is regularly disrupted by your partner’s snoring, then that is something you really have to do something about.
There are many different types of snoring cures that you can try, some of which work and some of which don’t, that should be able to help you and your partner get the rest that you need.
If there is one area of snoring cures that rarely gets enough attention, it is that of lifestyle changes. There are many things that people do that make snoring much worse and that can be changed fairly easily.
For instance, if you are a smoker, there is a much greater likelihood that you will also be a heavy snorer. The same goes if you drink a lot of alcohol, especially close to bedtime. If you are able to control both your drinking and your smoking, you will find that your snoring is greatly diminished.
Studies have also shown a significant link between people who are overweight and those that snore. It seems that by losing weight, you will have less of a chance of snoring and you will sleep more comfortably at night, so you will win both ways.
Another of the more popular snoring cures is to change the position that you are sleeping in. If you are spending most of the time on your back every night, then you will be more likely to snore. That is one of the reasons why a spouse will jab you in the middle of the night, to get you to move over – by sleeping on your side, your snoring usually ceases.
There are many herbal and homeopathic remedies that will also allow you to sleep more soundly without being drugged out like you would be with alcohol or traditional sleeping medications. These allow you to get a sound sleep, but you will not feel “hung-over” or groggy in the morning.
There are also some snoring cures that work to prevent you from getting in the position that causes snoring. Most snoring is caused by your jaw opening and your tongue falling back into your airway, and this is why you snore more when you are on your back.
There are some snoring cures that prevent this from happening, such as mouthpieces that generally work very well. You can buy these from your dentist or doctor, but you can also buy very good ones online that are not only guaranteed to stop you from snoring, but guaranteed to give you a much better night of sleep.
If you think that snoring cures are only necessary because you are keeping someone else up at night, then think again. In fact, your own health is at risk every night that you spend snoring away. When you snore, you are not getting the oxygen into your body that you need, and this means that your organs, including your brain and your heart, are starved of oxygen.
Sleep apnoea, which can result from excessive snoring, is a serious health problem, but instead of buying an expensive and unwieldy sleep mask from your doctor, you should try some of the above snoring cures first. Either way, it is important to realize that chronic snoring is a sign that you are not sleeping well, so when you take care of your snoring, you will also be giving yourself the best opportunity to get a good night of sleep, too.
become the primary source for irrigation water for a group of pioneering families that would establish this Mother Colony.
Mission grape vines. Hansen’s main irrigation canal followed along a ridge through town, today marked by Sycamore St.
This gives Anaheim its curious orientation, with the east-west streets tilting to the southwest.
most of northern and central Orange County.
Hannum, reporting on current water well levels.
California development, was desperately needed. The Colorado River would be this savior.
construction of the largest public works project in the world.
community informed of the benefits that an unlimited supply of filtered and softened water would bring to the southland.
Colorado River from its red rock dash to the sea.
inspection trips sponsored by the member communities. Anaheim’s well known civic leader and first MWD Director, Mr. O.E.
Clerk and M.W. Martenet Jr., City Councilman. Mr. Hapgood would later accept Mr. Steward’s MWD Directorship.
much awaited Colorado River finally poured into Anaheim water mains from connection #A1 on July 25, 1941.
(growing to 166,801 by 1970). Funded by a supportive City Council, the Water Department, guided by Anaheim’s own Mr.
safe liquid from 28 wells and 8 MWD connections in this modern Mother Colony.
In China, Spring Festival is one of the most important festivals. It is also getting more and more popular in some foreign countries. When Spring Festival comes, it means that a new year comes, and people grow a year older. During the festival, it is very crowded throughout the country.
On the eve of Spring Festival, parents prepare food, clothes and other festival goods. People who work away from home come back, and the whole family gets together to have a meal, say goodbye to the old year and welcome the new one. After the meal, they wait until midnight comes, and then they set off fireworks.
On the first morning of the Spring Festival, everyone wears new clothes and then goes to others' homes to celebrate the New Year. Each family sets off fireworks when guests come, and they bring out sweets and peanuts to share. On the following days, they visit their relatives and friends. The Spring Festival has several meanings: people working away from home can come back and relax, and a new year begins. When spring comes, farmers begin to plant crops and people make plans for the New Year.
All the people throughout the world pay much attention to it. Our country of course holds some national celebrations to celebrate it. This most traditional festival in China will go on being celebrated in the future.
The Spring Festival is very important to Chinese people.
In the past, people could not often have meat, rice or other delicious food; they could only eat these during the Spring Festival, so every year they hoped that the festival would come soon. Now, although people's lives are much better and we can eat delicious food every day, people still like the festival, because most people have a long holiday and are free to go on a trip, visit friends or have parties with their families. In the evenings, we can have a big meal in a restaurant or stay at home with the family and watch TV programmes.
I like the Spring Festival very much. How wonderful the Spring Festival is!
Mindfulness, MD represents the natural progression of a mindfulness project a colleague and I created during medical school.
My colleague, Andy, and I were working with patients undergoing treatment for opiate addiction when we conceived of a mindfulness workshop.
We designed a brochure that summarized three simple mindfulness exercises based on work done by Jon Kabat-Zinn at the University of Massachusetts. The brochure was intended to serve as an introduction to mindfulness practice.
Andy and I took our patients through the three exercises in a workshop format that encouraged an open dialogue among participants.
Our mindfulness workshops were well received, and before we left the rotation we conducted a workshop for the clinic staff at their annual retreat. The mindfulness exercises were just as beneficial for the health professionals as they were for the patients dealing with the more concrete problem of addiction.
After the rotation ended I felt that my work was incomplete. From discussions with patients and medical professionals during our workshops, it was clear that there was both a place and a need for mindfulness in medicine. And despite the vast number of resources on the Internet, I couldn’t find a centralized site that provided all the necessary components for starting a mindfulness practice.
A friend recommended that I start a site of my own to fill the gap in digital materials. This same friend gave me the copy of Eckhart Tolle’s The Power of Now that initiated my own philosophical inquiry, so to say that I trusted her counsel is an understatement. I have now completed medical school and I am a psychiatric resident.
I have been lucky enough to run multiple seminars on the neuroscience of mindfulness and have included the informational handouts as a new tab on my homepage (see Neuroanatomy tab).
I have included the original brochure, prompts, and scientific research so that the reader may view the foundation that Mindfulness, MD was built on.
My posts are conceived as quarterly self-contained discussions of one aspect of mindfulness. I often reference my own life to provide examples for the reader to appreciate mindfulness (or mindlessness) in action.
My writing is my own. It is a personal interpretation of that which I have read, practiced, and been trained in.
I hope that the reader agrees that what began as a simple workshop necessitated this expansion into the digital realm. If even one reader is inspired to begin a mindfulness practice, then I have succeeded by my own standard.
The Crozet Islands are home to thousands of king penguins. Click to enlarge.
If you are not a marine biologist, the chances are that you have never heard of the Crozet Islands in the South Indian Ocean. These French administered islands, with a human population of only 40, are a unique nature reserve and one of the remaining 'untouched' places on earth. But it’s not only the unique flora and fauna that make these remote islands a very special place.
In February 2016, Jerry Stanley and Mario Zampolli, hydroacoustic experts from the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), went on a mission to add another important role for the archipelago: the installation of the CTBTO’s last hydroacoustic station, HA04.
The hydrophones float in a channel where sound travels very efficiently. The depth of this channel varies around the world's oceans, but at Crozet it is typically found at a depth of several hundred metres.
HA04 is one of eleven hydroacoustic stations that monitor the oceans for signs of nuclear explosions. Low frequency underwater sound, which can be produced by a nuclear test, propagates very efficiently through water. Consequently these underwater sounds can be detected at great distances, sometimes thousands of kilometers, from their source. This means that the International Monitoring System (IMS) requires only a few hydroacoustic stations to provide effective monitoring of the world’s oceans for signs of nuclear explosions.
The IMS also monitors the Earth’s crust and atmosphere: 170 seismic stations monitor the ground for shockwaves, 60 infrasound stations listen for atmospheric waves, while 80 radionuclide stations sniff the air for traces of radioactivity – see interactive map.
The Marion Dufresne II, one of the world’s biggest and most sophisticated oceanographic research and deep water survey vessels, supported the mission. It is specifically designed to withstand the extremely rough weather conditions, which are often found around the Crozet Islands.
The installation of hydrophones at one of the most remote places on earth is a challenging ocean engineering project and a complex logistical operation.
The station will use six hydrophones (underwater microphones) to monitor underwater sounds deep in the ocean. The HA04 hydrophones send their data via underwater cables, which are tens of kilometres long, to a receiving facility, the Central Recording Facility (CRF), on the island. From there the data is forwarded via satellite to the CTBTO in Vienna.
With no airstrip available on, or anywhere near, the Crozet Islands, the HA04 installation team and its equipment were transported to the island on the Marion Dufresne II, a French oceanographic research and deep water survey vessel, a round-trip journey of over 5,000 kilometres.
The experts’ first task was to install the upgraded CRF on the archipelago’s main island, Possession Island (Île de la Possession). The CRF includes communication equipment and equipment for processing the hydrophones' data. Within a few days of the team's arrival, all the equipment had been shipped to the island and installed, and the dedicated satellite data connection to Vienna commissioned and tested.
Possible north and south cable routes (black lines) superimposed upon the bathymetric survey data.
The second task was to pave the way for safely landing, routing and laying the underwater cables connecting the CRF to the hydrophones. Seabed surveys were conducted from a depth of 10 m down to depths of over 2,000 m so the team could map the bottom topography and detect any underwater objects that could interfere with the safe installation of the cables. In past centuries the island was used by the whaling industry, so objects hazardous to the installation, such as shipwrecks, machinery and anchors, were to be expected; and indeed shipwrecks were identified.
A recent recording of whale song by hydroacoustic station HA03, Juan Fernandez Island, Chile.
Thirdly, all objects that might compromise the installation were removed from the sea floor; this included over 50 m of antique anchor chain. These clearance operations were supported by divers, and wherever the divers went they were accompanied by curious penguins, tens of thousands of which live on the island. Together with rare migrating birds, sea elephants, orcas and endangered large whales, they are a part of Crozet's rich protected fauna. All activities of the team followed strict environmental standards, and operations were overseen by the nature reserve.
Once completely established, hydroacoustic station HA04 can help monitor, and thus protect, the local fauna. For example, hydroacoustic data can be used to track the migration of whales. Find here an example of a whale song recorded by a CTBTO hydroacoustic station.
Miller, Pulaski, and Yahola soils occur on similar areas. Miller soils have a fine control section. Pulaski and Yahola soils have a coarse-loamy control section and lack a mollic epipedon.
Soil testing is the best guide to the wise and efficient use of fertilizer and soil amendments, said Manjula Nathan, director of the University of Missouri Extension Soil Testing and Plant Diagnostic Services.
But because magnesium is so easily used and quickly depleted from soil, my plants went from stupendous to magnesium-deficient pretty quickly, since all the new growth couldn't keep up with the shifts in magnesium bioavailability.
2018 High Efficiency Ball Miller With 42 Years Experience, Find Complete Details about 2018 High Efficiency Ball Miller With 42 Years Experience,Ball Miller,Ball Miller,Flour Miller from Sand Making Machinery Supplier or Manufacturer-Zhengzhou Mining Machinery Co., Ltd.
We evaluated effects of natural and man-induced wetland management techniques in moist-soil wetlands and hardwood bottomlands via 3 experiments at Noxubee National Wildlife Refuge, Mississippi. Specifically, we tested effects of autumn mowing, disking, and tilling on aquatic invertebrate and moist-soil plant responses.
How to get away with a math assignment?
Homework Buddies: Two students sitting at one desk for a term become homework buddies. There are a few math assignments that we have to solve together. I need to discuss the math problem with my homework buddy and then we have to solve it together. Rather than solving the problem, we end up debating our views on it and thus waste time.
Calling out names: The teacher calls on a student and asks him/her to give the answer to a homework problem, moving quickly through the rows. Thus my math teacher ensures that every student in the class completes the math assignment.
Math Presentation: The math teacher assigns one problem to each student. The student comes to the board, solves the problem, and explains his/her method. Points are assigned based on the method used, the accuracy of the answer and the way the student has presented it.
Math test: Frequent math tests are conducted in class. We have to solve the test in the first half of the class and later self-correct our work in a different colored pen. These tests make me realize how well I understand the math topic.
These new ways of giving math assignments and homework consume most of my time, and I am always on the lookout for math assignment help. I do take help from math Assignment Experts and then spend my time understanding the solution they have provided. Thus I end up answering all math problems in the classroom, scoring good marks in presentations and tests, and also enjoying my free time pursuing my hobbies.
The exclusion of countries, peoples or individuals from high-profile summits and conferences often says much about the events themselves. As the Mideast conference in Poland convenes with U.S. Vice President Mike Pence, Secretary of State Mike Pompeo and Israeli Prime Minister Benjamin Netanyahu shaping the narrative in Warsaw as part of the Trump administration's global push to isolate Iran and promote Israeli interests, it's no surprise that Tehran is not invited.
President Clinton presides over White House ceremonies marking the signing of the peace accord between Israel and the Palestinians with Israeli Prime Minister Yitzhak Rabin, left, and Palestinian leader Yasser Arafat, right, in Washington.
Iran has denounced the conference as an American anti-Iran "circus." The Palestinians have boycotted the conference and urged others to do the same. Notable absences are those of senior officials from France, Germany and Russia as well as various non-Gulf Arab nations.
Here's a look at some major summits and conferences over the years which have seen key players excluded or refusing to attend.
THE PALESTINIANS: CAMP DAVID TO MADRID
While President Jimmy Carter, Egypt's Anwar Sadat and Israel's Menachem Begin cemented the Camp David peace accord in 1978 with a three-way handshake at the White House before the world's cameras, the Palestinians were markedly absent.
They hadn't been included and references to the West Bank and Gaza did nothing to mollify anger among the stateless seeking a state. In 1991 in Madrid, the Palestinians were represented but only as part of the Jordanian delegation in a contentious and acerbic Mideast conference that saw Syria and Israel openly trading insults.
Two years later, Palestinian leader Yasser Arafat shook hands with his Israeli counterpart Yitzhak Rabin with President Clinton beaming alongside them. The Palestinians now had a far more prominent place on diplomacy's world stage, but a quarter of a century later they are no closer to their elusive goal of independence.
SOUTH KOREA'S MOON: DMZ BUT NOT SINGAPORE OR VIETNAM
North Korea's dynastic leaders, as much by design and desire as exclusion, were always on the outside of international gatherings looking in from afar. That changed in a series of seismic events for the Korean Peninsula in 2018, following a year where threats of nuclear Armageddon were at the fore.
North Korean leader Kim Jong Un is preparing for his second summit with President Donald Trump in two weeks in Vietnam. This follows the mind-boggling spectacle of their first encounter in Singapore last year. South Korean President Moon Jae-in has been a key driving figure with determined plans and aspirations for engagement with Pyongyang.
Moon himself held historic summits with Kim at the Demilitarized Zone on the border between the two nations. Now Moon finds himself excluded again as Trump and Kim take center stage. There is speculation about a possible four-way meeting, also including Moon and Chinese President Xi Jinping, to declare a formal end to the Korean War, which stopped with an armistice and left the peninsula still technically at war.
DAYTON AND THE BOSNIAN SERBS
A city in Ohio became the byword for ending Europe's worst conflict since World War II. The Dayton Accords ended the Bosnian War, which had claimed hundreds of thousands of lives and displaced more than 2 million people as former Yugoslavia broke apart in a frenzy of communal violence.
The grim-faced presidents of Serbia, Croatia, and Bosnia and Herzegovina were all in attendance in Dayton in 1995 and at the official signing in Paris the following month. But absent was the group blamed by many for some of the worst bloodshed and persecution of the war: the Bosnian Serbs. Serbian President Slobodan Milosevic, himself an international outcast, represented his fellow Serbs' interests in Dayton.
The Bosnian Serb wartime leader, Radovan Karadzic, by then already an indicted war criminal, was a fugitive evading justice. He was finally captured in 2008.
Mattress vacuum cleaners are very specialized vacuums. They are mostly handheld corded or sometimes even cordless units optimized for cleaning dust mites, bed bugs and allergens from mattresses, pillows, cloth sofas and other cloth furniture and upholstery, car seats, carpets and other similar areas and surfaces.
Mattress vacuum cleaners often have a very distinctive shape and are easily recognizable. One of their main features is that they are mostly used by being pulled over the surface, not pushed or pushed/pulled like many other vacuums.
Strong suction is required to pull out stubborn dust and allergens, but also various bed bugs, regardless of their size or their maturity level (eggs, adults, anything in-between).
Mattress vacuums are mostly corded units, allowing them to be compact and rather powerful at the same time. However, there are cordless mattress vacuums, too, like the Dyson V6 Mattress vacuum cleaner. The Dyson V6 Mattress does cost more than other mattress vacuums, but it doesn’t require mains power to operate, which can be very helpful in many situations.
To improve suction and the cleaning effect, some mattress vacuums have two suction inlets, one below the unit and one in front of it. When the unit is pulled, the lower inlet removes dirt from the surface being cleaned, while the front inlet helps capture any additional dirt released into the air from the mattress.
To help remove deeply embedded dirt, dust and especially bed bugs, mattress vacuum cleaners often have beater pads or bars. Such motorized mattress tools oscillate at high frequencies and agitate mattress, carpet or cloth fibers in order to release the dust, bugs and allergens.
Such vibrations significantly improve the cleaning effect of mattress vacuums.
In order to kill bed bugs and bacteria, and to help sanitize mattresses, some mattress vacuums use heat in the form of hot air. The maximum air temperature is usually limited to ~130°F (~55°C) – air this hot is safe for humans (it will not cause burns or skin redness after brief exposure to the air stream) and is safe for most materials used in mattresses. Just to be sure, check whether your mattress can be cleaned by such a vacuum.
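The quoted temperature cap is easy to sanity-check with the standard Fahrenheit-to-Celsius conversion, C = (F − 32) × 5/9. A quick sketch (the helper name is mine, for illustration only):

```python
def fahrenheit_to_celsius(f: float) -> float:
    """Standard conversion: C = (F - 32) * 5 / 9."""
    return (f - 32) * 5 / 9

# The hot-air cap of ~130 °F works out to about 54.4 °C,
# i.e. the "~55 °C" quoted above.
print(round(fahrenheit_to_celsius(130), 1))  # → 54.4
```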
UV light of certain wavelengths is harmful not only to humans (it can cause skin cancer, for example) but also deadly to bed bugs. Such UV light, in combination with hot air, dries the mattress and also removes moisture from the bed bugs themselves, which in the end kills them.
For safety reasons, mattress vacuums with UV lights have sensors that turn off the UV lamp when the vacuum is not firmly pressed onto the surface being cleaned.
Also, some surfaces can lose color over time when exposed to UV light!
Since dead bugs are still a threat to human health (as allergens), they must be removed from the mattress permanently – that is why vibrating pads, good suction and HEPA air filtration are so important.
To keep the air clean, most mattress vacuums, although small and relatively cheap, are true HEPA vacuums – this way, all of the dirt, allergens and bed bugs are safely kept inside the vacuum rather than released back into the air.
Operation of such vacuums is simple – as the vacuum cleaner is pulled, hot air heats the mattress and dries out any moisture in it, and it also dries the bed bugs and helps kill them.
After the hot air, the UV lamp irradiates the mattress with deadly UV light, which further disinfects it.
The air stream of the main suction port/inlet removes dirt and bed bugs not only from the surface but also from the deeper layers of the mattress, thanks to the strength of the air stream and the beater pads/bars.
As said before, some mattress vacuums have a second suction port, which removes any dirt, dust or allergens released into the air by the vacuum cleaner itself.
Note that the effective cleaning width of such a vacuum cleaner is NOT the width of the vacuum, but roughly the width of the hot air port/UV lamp/suction port/beater pads/bars. Thus, such mattress vacuums often require plenty of time to cover a larger area.
If there is a larger outbreak of bed bugs, there are other things one can do to get rid of them and to prevent any future reappearance.
Individuals who are very sensitive to bed bugs due to asthma and various allergies can use mattress vacuum cleaners to improve their living conditions. However, in order to increase the level of cleanliness even more, consider using a non-toxic, organic bed bug killer in combination with both a mattress vacuum and a bed-bug-proof mattress encasement/protector.
It should not cause stains on surfaces when applied directly to bed sheets, mattresses, seats, carpets, etc.
The most convenient bed bug killers come in the form of sprays, but they can be found in powdered form, too. IMHO, sprays are easier and more convenient to use.
Before the first use, be sure to thoroughly read the instructions. Generally, apply the agent to the ‘problematic’ surface, wait a little and then vacuum it using a strong ordinary vacuum or, preferably, a mattress vacuum cleaner.
After the mattress is cleaned (just vacuumed, or chemically treated first), perhaps the best thing to do is to protect it using a high-quality mattress encasement or protector.
Such a mattress protector should be fully breathable, washable, waterproof and bed bug proof, and should protect the mattress from all six sides.
By being waterproof, it will prevent any spilled water/juice/milk/urine etc. from penetrating into the mattress, while being breathable helps remove any residual moisture from the mattress.
Since such a protector is a long-lasting form of protection, in case of bed bug issues clean the mattress thoroughly using chemicals and a good mattress vacuum, then protect it for future use.
There are other ways of getting rid of the bed bugs and keeping the home clean.
For example, choose bedding that can be washed at high temperatures, such as 200–205°F (~95°C), in a washing machine. During summer, take mattresses outside – mattress vacuums can have UV lamps, but nothing compares to the sun (if the mattress is not too sensitive!). During winter, freezing temperatures can kill bugs embedded deep in the mattress, and so on.
Effective job analysis forms the core of a scientifically sound and legally defensible human resource system. The major goal of a job analysis project is to gather information about specified jobs within an organization in a comprehensive and systematic fashion. This information can then serve as the basis for organizational planning and design.
Job analysis information can be collected in a number of ways: interviews, questionnaires, observation, time and motion studies, etc. COHRE generally uses a structured interview method in combination with observation and/or questionnaire follow-up; however, this can be adjusted to meet your organization's specific needs.
COHRE's approach has several advantages. It allows for two-way communication to clear up misunderstandings concerning the purposes of the project, and it allows employees without strong verbal skills to demonstrate what they do on the job. Interviewing employees enhances their feeling of participation and allows for a double check on the accuracy of the information. In addition, the desired products act as a check on the job analysis process itself. Finally, the process is flexible; if the organization wishes to change aspects of the analysis, a meeting with immediate supervisors and management can be arranged.
Updated, accurate job descriptions are the direct product of a job analysis. For the other five fields, job analysis provides information as a basis for developing the systems (e.g., training, performance appraisal, compensation).
1. ECHO – Can our voice be heard if we speak up together?
The first piece from our series on democratic values in Central Europe, especially the V4 countries, is dedicated to whistleblowing. The difference between a whistleblower and a snitch may be invisible, yet it is essential: the whistleblower protects not his or her own interest but the public interest, often against a much stronger authority and at the risk of real losses (of job, career or status).
Our video stars one of the best-known whistleblowers from Slovakia, Zuzana Hlávková. But each of our countries has its own more or less well-known heroes.
The video clip “Pan ?ytajnik” reminds us that freedom of access is not only about not being afraid to ask questions, but also about not being afraid to answer them and deliver the information. Although this applies to every aspect of our lives, in democratic countries it is the most basic and most valuable tool for dialogue between citizens and state structures.
Besides the European Convention on Human Rights, which guarantees the right to information, each of the Visegrad countries has its own legislation regarding the obligations of authorities to disclose information and the procedural rights of civil society, such as the Access to Information Act (2001) in Poland or Act No. 106/1999 on Free Access to Information in the Czech Republic.
This piece from our series on democratic values in Central Europe, especially the V4 countries, is dedicated to conflicts of interest. We might agree that our everyday troubles are rooted in clashes of demands, wishes and aims, and we would also agree that these belong to our personal sphere and cannot be resolved anywhere else.
Quite the opposite is a conflict of interest, in which a public actor stands to gain (or avoid losing) a personal benefit. Those who are appointed to take care of public issues should not bring their own personal interests into their agenda. Yet they often do. That is why there is legislation on conflicts of interest for public officials (deputies, municipal representatives, even policemen, etc.), which tries to define, prevent and publicize behaviour that would bring personal benefits to those in civil service.
In our video, we are introduced to an architect, Pavel, who is thinking about his new, ambitious project for a public space in the city. At the same time, he is thinking about impressing his young colleague Vera. We find him dreaming of his new big building in the centre (and of impressing Vera). We see him sketching (both projects!) and, quite arrogantly, submitting his proposal to the town hall commission.
We shall leave it up to him how he resolves the conflict between friendship and romance, but we shall not leave the other conflict up to him. Watch to learn more.
4. Sporting? Yes! But what if you do not exactly want to play the field?
In our minds, corruption in sport is usually connected to the forgery of results or cheating. But there are other, more complex issues which influence the growth of young talents and sport activities in general.
Malpractice in the distribution of public money through state structures seems distant and detached from our lives. But it has a direct impact on our everyday lives, apparent in sport activities. Unfair distribution of public funds for sports grounds and equipment can spread disillusionment among people who were eager to develop their skills and their lives.
In our movie, a young, talented soccer player is practicing on a dilapidated soccer field. We see that he yearns to be like his sporting idols some day. One night he is watching the news and hears that substantial funding will be given to soccer in the region, which thrills him. Later he goes to his run-down soccer field again, to train even harder. Then things turn out in a somewhat unexpected way. Or was it to be expected after all…?
Discover the Persian horticultural heritage of Iran’s delicate and beautiful gardens, whose historic importance has earned them a place on the UNESCO World Heritage List.
Divided always into four sections and featuring water as a central element, the Persian garden is a representation of paradise on earth, tracing its symbolism back through Islam to ancient Zoroastrian culture. The gardens are all the more impressive considering the feats of hydrological engineering which have raised such verdant sanctuaries from the dry earth of Iran.
From Tehran in the north to Isfahan and Yazd in the centre and Shiraz in the south, explore extraordinary palace and city gardens, and discover how the heritage of ancient Persia flourishes in modern Iran.
Arrive in Tehran in the mid-morning and transfer to your hotel for check-in.
In the afternoon, enjoy lunch and a welcome briefing and then begin your explorations of Tehran with a visit to the Treasury of National Jewels, where the most priceless collection of jewels and gems anywhere in the world is housed in the vaults of the Central Bank of Iran. Continue to the ‘Fire and Water Park’, to see the award-winning ‘Nature Bridge’ connecting together two of Tehran’s public parks.
In the evening, enjoy a special welcome dinner with Genevieve and fellow travellers.
This morning, travel to Sa’ad Abad Palace Museum in northern Tehran. Elements of the Persian Garden can be seen here in the significance of the layout of the garden in relation to the palace windows.
Return to the city for lunch and a visit to the Golestan Palace and Museum Complex, former residence of the Qajar Dynasty shahs in the 19th and early 20th century. Visit the Marble Throne Hall and walk through the palace’s rambling gardens, featuring marble fountains, mosaics, stained-glass work and stone latticework.
Dinner is at a local Persian restaurant.
Begin the day with a visit to Iran’s National Museum, which is home to the Archaeological Museum and the Islamic Museum. These museums display some of the finest treasures of Persian history, ranging from stone tools to sculpture, pottery, painting and glasswork, covering a period of 9,000 years.
After lunch, continue to the Glass and Ceramics Museum, which exhibits pottery dating back to 4,000 BC, and the Carpet Museum, where the gallery’s collection features Persian carpets from various regions of Iran.
Check out of the hotel for a morning flight to the city of Kerman, located in south-east Iran (flight included in tour price). Upon arrival, visit the 17th century Ganj-Ali-Khan Complex, composed of a bathhouse, bazaar and caravanserai, and later visit the 14th century Friday Mosque and the 18th century Hammam-e-Vakil bathhouse, transformed into a traditional teahouse with graceful archways and tiled walls.
After lunch, discover the Harandi Gardens, part of the former residence of Kerman’s governor and hidden behind high walls, and now open as a museum of archeology and musical instruments.
Embark on a full-day excursion to the sites surrounding Kerman. First, explore the desert citadel of Rayen, thought to have foundations from the Sassanian era (AD 224 – 649). Inhabited until 150 years ago, this preserved mediaeval mudbrick city has survived numerous natural disasters that have destroyed similar citadels in the region. Then, stop at the shrine of renowned Iranian poet and sage Nematollah Vali, where the twin turquoise minarets, reflecting pool and courtyards have been described as the most magnificent architectural masterpiece of old Persia.
On return to Kerman, enjoy lunch and then a visit to the Shahzadeh Garden, one of the nine UNESCO World Heritage-listed Persian Gardens. Built in the late 1800s, and encircled by distant mountains, its cascading fountains and waterways provide engineered irrigation to the garden.
Check out of the hotel for a day’s drive to Yazd, through regions of pistachio fields and pomegranate orchards. En route, visit the 400-year-old Zein-o-Din caravanserai situated along the ancient Silk Road, and the UNESCO World Heritage-listed Persian Garden of Bagh-e Pahlavanpour. Supplied by abundant water from underground channels, the gardens are over five hectares in size and flanked by plane trees. Here, traditional Iranian architecture can be seen melding with modern 20th century design ideals.
Arrive in Yazd in the late afternoon and check in to the hotel.
Spend a full day of sightseeing in Yazd, a city known for its unique Persian architecture and recognised as a World Heritage Site in 2017. Visit the 14th century Friday Mosque, which has the highest portal and minarets in Iran, and the active Zoroastrian Fire Temple, where the fire inside has been burning for the last 1500 years.
Check out of the hotel and travel to Shiraz. Stop en route in Abarkuh, a typical desert town and enjoy tea under the shade of a 4,000-year-old cypress tree.
Travel to Pasargadae, the remains of the palaces of Cyrus the Great, the founder of the First Persian Empire. Home to the world’s oldest extant garden layout and the first of the UNESCO World Heritage-listed Persian Gardens, it is believed that the world’s first formal chahar bagh (fourfold garden), laid out with water rills dividing the quarters, was created here. Although little remains of the gardens today, excavations reveal the existence of basins and channels.
Continue to Shiraz and arrive at the hotel in the late afternoon.
Begin the day with a talk by Genevieve, then explore ‘the City of Nightingales and Roses’ with a visit to the tombs and memorial gardens of Iran’s greatest lyric poets, Hafez and Sa’adi. Then, travel to the northwest part of Shiraz for a visit to the second UNESCO World Heritage-listed Persian Garden, Bagh-e Eram, known for its cypress trees, ornate Qajar Dynasty palace and orange groves.
In the afternoon, explore the late 19th century merchant home and garden of Narenjestan, meaning ‘Place of Oranges’. The gardens, lined by date palms, are based around a central water channel and pools on either side. Later, visit the pink-tiled 19th century Nasir-ol-Molk Mosque, and finish the day in the heart of the city at the Vakil Bazaar of Shiraz, considered to be the finest in Iran.
This morning, visit Persepolis, one of the most important historical and archaeological sites of the Ancient World. Sacked by Alexander the Great in 330 BC, Persepolis was rediscovered in the 1930s after being lost under the Persian sands for centuries. See the famous bas-reliefs depicting kings, courtiers and gift-bearing representatives of tributary nations of the Persian Empire.
At Naqsh-e-Rustam, see Ka’ba-ye Zartosht, the enigmatic cuboid building which is thought to have served as a Zoroastrian fire temple or the mausoleum of an unknown shah. Gaze up at the Egyptian-inspired Royal Tombs of the great Achaemenid shahs and the seven magnificent Sassanian Dynasty rock-reliefs, including a relief depicting the famous victory of Shapur I over the hapless Roman Emperor Valerian in the 3rd century AD.
Spend the morning discovering the gardens of Shiraz. Begin at the Arg-e-Karim Khan, an 18th century citadel and garden, followed by the historical Nazar Garden. Then, wander through the Jahan Nama Garden, the oldest garden in Shiraz. Established in the 13th century, this walled garden features the classic Persian arrangement, with four broad avenues bordered with cypresses, roses and orange trees. A rill lined with 64 fountains stretches from the central pavilion down one of the avenues, while plantings of yellow, purple, red and white flowers populate geometric flower beds.
In the afternoon, visit Bagh-e Dolgosha, also known as the ‘Garden of the Heart’s Delight’. Conclude the day at the Afif-Abad Garden, which surrounds the Royal Palace of the Safavid Dynasty (1501 – 1722). Ornamental decorations in this garden show a mixture of Achaemenid, Sassanid, Zand and Qajar influences.
Check out of the hotel for a full day’s journey by road to Isfahan. En route, stop at the historical complex of Izad-Khast, a 17th century caravanserai and bridge, where the architecture style and composition of the mud fort is unique to this region.
Continue to Isfahan and arrive at the hotel in the late afternoon.
Begin the day with a morning talk by Genevieve, then enjoy a full-day tour of the beautiful city of Isfahan, the 17th century capital of the Safavid Dynasty shahs. Visit the Armenian quarter, the Orthodox Cathedral of Vank and the famous bridges of Shahrestan, Khajou and Sio-se-pol, which stretch serene and golden across the languid Zayandeh River.
In the afternoon, visit one of the world’s grandest squares, which inspired the proverb ‘Isfahān nesf-e Jahān’ – ‘Isfahan is half the world’, and two of the Islamic world’s greatest mosques, the Sheikh Lotfollah and the Iman Mosque. Following this, visit the Hasht Behesht pavilion, set amongst tree-lined alleys, a reflecting pool and water rills.
This morning, visit the magnificent Friday Mosque, and its famous Uljaytu Mihrab (Prayer Niche), an elaborate stucco work of the 14th century Il-Khanid Dynasty. While construction on the mosque first began in the 8th century, successive dynasties added to it until the 20th century, and today Isfahan’s Friday Mosque is considered to be a museum of a thousand years of Persian religious architecture in one building.
Afterwards, travel to the 17th century Chehel Sotun Palace Garden, a UNESCO World-Heritage listed Persian Garden. The pool opposite the palace reflects back the garden’s twenty-columned portico, giving rise to its name ‘The Palace of Forty Columns’. Later, delight in an exploration of Qeisarieh Bazaar, one of the oldest and largest bazaars of the Middle East, with hundreds of stores displaying the arts and crafts for which Isfahan is famous.
Check out of the hotel and depart for Tehran via Kashan.
In Kashan, visit a fine example of a 19th century merchant residence known as Taba-Tabai House. Then, stroll through the historical UNESCO-listed garden of Fin, built in the mid-16th century. Distinctive features of the garden include the cedar trees, extravagantly-decorated pleasure pavilions, bubbling fountains and turquoise-tiled water rills.
Continue to Tehran for arrival in the early evening.
Enjoy a leisurely start to the morning.
In the afternoon, travel to the north of Tehran for a visit to the Niavaran Palace Complex surrounded by lush gardens and used as a summer home for various Shahs of the Qajar and Pahlavi dynasties from the late 18th to the late 20th century. Continue to the Jamshidieh Park at the base of the Kolakchal Mountain. Known as the Stone Garden, the garden’s design is centred around the park’s cascading waterfall and pond, with channels of water flowing parallel to the garden paths to the lower reaches of the park. A favourite retreat amongst Iranian locals, the park offers panoramic views of the city below.
Tonight, celebrate the conclusion of the tour with a special farewell dinner with Genevieve and fellow travellers.
However, suicide is not a way out. Marie, whose life had turned into one long nightmare, admitted: “Of course, thoughts of suicide crossed my mind. But I understood that as long as I was alive, I still had at least some hope.” Indeed, parting with life solves nothing. Unfortunately, many teenagers who fall into despair prove unable to imagine another way out of their situation, or to accept that everything might end well. Mary, for example, trying to hide her depression, began injecting heroin. She had confidence to spare – but only while the drug was working. Where, then, should one look for a way out of such a situation?
1) Why are most people fascinated by things out of the ordinary?
2) What psychological disorders are mentioned in the text? Give short characteristics of each of them.
3) Is a person suffering from personality disorders dangerous to society?
4) Can all these disorders be treated nowadays? Why or why not?
2. Look through the text and give an oral summary of psychological disorders.
Each year in the US, some 25,000 to 30,000 wearied, despairing people will say no to life by electing a permanent solution to what may be a temporary problem. In retrospect, their family and friends may recall signs that they now believe should have forewarned them – the suicidal talk, the giving away of possessions, or the withdrawal and preoccupation with death. One-third of those who succeed will have tried suicide before.
Actually, few of those who think suicidal thoughts (a group that includes perhaps one-third of all college students) actually attempt suicide, and only a few of these succeed in killing themselves. Most individuals who commit suicide have talked of it, and any who do talk about it are at least sending a signal of their desperate or despondent feelings.
To find out who commits suicide, researchers have compared the suicide rates of different groups. National differences are puzzling: the suicide rates of Ireland, Italy, and Israel are half that of the USA, while those of Australia, Denmark, and Switzerland are considerably higher. Group differences are suggestive: suicide rates have tended to be higher among the rich, the nonreligious, and the unmarried (including the widowed and divorced). Gender differences are dramatic: women are much more likely than men to attempt suicide; depending on the country, however, men are two to three times more likely to succeed. (Men are more likely to use foolproof methods, such as putting a bullet into the brain.) Age differences have vanished: the suicide rate among 15- to 24-year-olds has more than doubled since 1955 and now equals the traditionally higher suicide rate among older adults.
Suicide often occurs not when the person is in the depths of depression, when energy and initiative are lacking, but when the person begins to rebound, becoming capable of following through. Teenage suicides may follow a traumatic event such as a romantic breakup or antisocial act, and often are linked with drug and alcohol abuse. In the elderly, suicide is sometimes chosen as an alternative to future suffering. In people of all ages, suicide is not necessarily an act of hostility or revenge, as many people think, but a way of switching off unendurable and seemingly inescapable pain.
Social suggestion may also initiate the final act: known suicides as well as fatal auto “accidents” and private airplane crashes increase following highly publicized suicides.
What do students at fin think about Facebook? Can you learn a foreign language on Facebook?
"I can talk with my friends"
"Yes. It's possible because all words on Facebook are English words. We might learn a lot of foreign languages, for example when we want to translate some text which is in another language"
"Yes, it's possible... We can talk to a person who isn't near."
"While using Facebook we can read many interesting articles about things that we want to find and learn more about. Most of those articles are in English language"
"I can connect with my friends, and share videos or pictures."
"I have information about my friends on Facebook..."
"I can meet people all around the world..."
"I only use it for the university"
"It is possible to learn a foreign language like English language or Arabic. I have so many friends from Egypt. I learn with them Arabic via Facebook or Skype. I think that is a good method to quickly learn any language."
"I like it because I can correspond with friends."
"I myself have friends from another country, same age or older ones, and we use Facebook to stay in touch because I found them to be very kind people, same goes for them."
"It is possible to learn a foreign language if you have a foreign friend. It is good for your vocabulary, but you will not learn grammar."
"It is possible to learn a foreign language on Facebook because it's a place where you can meet people from other countries and most people around the world use English as their second language."
"I think it is possible to learn a foreign language on Facebook but not on a high level."
"I like Facebook because you can easily stay in contact with your friend from all over the world."
"You can connect with people from the whole world! You can learn so much by watching how people from different countries live."
"People know my private business..."
"Our private information is not protected enough..."
"We spend too much time on Facebook."
"I think that it isn't possible because we don't have a lot friends from other countries...we have just one or two friends. When we chat, we always chat in Bosnian language."
"There are many two-faces on Facebook. You write things about yourself that are not true, or some Photoshop pictures."
"It is a waste of time..."
"Many people use it too often, exchanging their real life for a fake one."
"People give information to others like a lot of private things from life and communication by Facebook makes people closed and confused in society."
"I don't like Facebook because people post all parts of their life on Facebook and there is no privacy."
"If you are chatting with an uneducated person you can learn it in a wrong way..."
One in five parents with credit cards said their children have made unauthorized purchases, lending credence to concerns that adolescents will run wild on shopping sprees or rack up in-game Fortnite purchases. But it doesn't have to be that way. Parents who want to help their children handle money responsibly and build wealth early can start by adding adolescents as authorized users to their accounts.
However, only about one in 10 parents of adolescents in the U.S. give their children credit cards or add them to the parents' accounts, according to a CreditCards.com survey. And given that about one in five Americans don't have enough credit history to earn a credit score, parents wouldn't want their children to end up in that group because they could get shut out of lending and housing opportunities later down the line.
Children are twice as likely to carry plastic if their parents are higher-earners, with household incomes of $50,000 or more per year. The survey also found regional variations, with 13 percent of parents living in the Northeast reporting that at least one of their children has a credit card, versus 8 percent in the South and 5 percent in the Midwest.
"So long as your credit record is good, that will transfer to your kids and really help them," said Ted Rossman, industry analyst at CreditCards.com.
Parents who are worried about their children overspending can provide a lot of training wheels, Rossman said. They can look into credit cards with limits on spending or start out with prepaid debit cards. Parents can also screen spending by setting up alerts for purchases or by cutting out certain stores.
Giving children credit cards early on is also a good opportunity to teach them how to manage money in a digital age, instead of relying on the $5 weekly allowance in cash. "Kids now live in a plastic-first environment, so it's good to expose kids to plastic appropriately," said Rossman.
At the same time, more than half of parents will give their children cellphones, though in the digital age, that may be the same thing: "Parents need to be aware that if high schoolers have a cellphone, that impacts finances as well," said Rossman. "Even if they don't have a credit card, they might have your credit card stored on Amazon or an app."
Traditionally, the cultivation of Pleurotus sajor-caju is performed on different composted and pasteurized agricultural residues. The objective of this study was to investigate whether the traditional composting and pasteurization processes could be replaced by washed and supplemented (mineral or organic) sugarcane bagasse. In one experiment, fresh sugarcane bagasse was immersed in hot water at 80°C for two hours (control) or washed in fresh water for one hour using a machine adapted for residue treatment. In another experiment, fresh sugarcane bagasse was washed in fresh water (control), supplemented with corn grits (organic supplementation), or supplemented with a nutrient solution (mineral supplementation). In the first experiment, the washed bagasse presented an average biological efficiency (ABE) of 19.16% with 44% contamination, and the pasteurized bagasse presented an ABE of 13.86% with 70% contamination. In the second experiment, corn grits presented the poorest performance, with an ABE of 15.66% and 60% contamination, while supplementation with the nutrient solution presented an ABE of 30.03%, against 26.62% for the control. Washing fresh sugarcane bagasse could replace pasteurization of the substrate in Pleurotus sajor-caju production, compensating for a reduced ABE with a faster process.
Traditionally, Pleurotus sajor-caju is cultivated on various agricultural residues, preceded by composting and pasteurization processes. The present work aimed to compare pasteurization with washing of sugarcane bagasse and to evaluate forms of bagasse supplementation to increase productivity. In the first experiment, sugarcane stalks were passed through a mill to extract the juice and then shredded. In the control treatment, fresh bagasse was pasteurized in water at 80°C for 2 hours; the other treatment consisted of washing the fresh bagasse in a centrifuge with running water at room temperature for one hour. In the second experiment, simple washing was used as the control, along with supplementation of the washed bagasse with corn grits (organic supplementation) or with a mineral solution (mineral supplementation). The average biological efficiency (ABE) of the mushroom on washed fresh bagasse (19.16%) did not differ significantly from that obtained on pasteurized fresh bagasse (13.86%), and its contamination (44%) was lower than that of the pasteurized bagasse (70%). In the second experiment, organic supplementation showed the poorest performance, with an ABE of 15.66% and 60% contamination, differing from mineral supplementation and the control, with ABEs of 30.03% and 26.62%, respectively. Washing fresh sugarcane bagasse may replace pasteurization of the substrate in Pleurotus sajor-caju production, compensating for the reduced biological efficiency with a faster process.
Several agricultural residues have been used to produce the edible mushroom Pleurotus sp., also known as "oyster mushroom", "hiratake", "shimeji", or "houbitake" (Mizuno & Zhuang, 1995; Bononi et al., 1995). Among these residues, the use of sugarcane bagasse allows a byproduct to be utilized in the production of a food of high nutritional value, with a protein content of up to 40% in dry matter (Rajarathnam & Bano, 1989). The abundant supply of this agricultural surplus gives Brazil great mushroom-producing potential, because by using 25% to 30% of the bagasse produced by Brazilian sugar/alcohol mills (25 million tons), the world's mushroom supply could be doubled (Ferreira, 1998).
Among the substrates used to produce Pleurotus sp., the following are worth mentioning: rice hulls mixed with cotton residues for the production of Pleurotus sajor-caju (Fr.) Singer (Chang et al., 1981); banana leaf mixed with sugarcane bagasse or corn cob for the production of Pleurotus sp. (Sturion & Oetterer, 1995a); and cassava residues with sugarcane bagasse for the production of Pleurotus ostreatus (Jacq.: Fr.) Kummer (Felinto, 1999).
Substrate supplementation is a practice that has been used to produce Pleurotus sp. in order to increase productivity, evaluated through biological efficiency in several reports (Chang, 1980; Madan et al., 1987; Sturion & Oetterer, 1995a; Dhanda et al., 1996). Among various tested supplements, mulberry leaves and stalks were used in rice hull supplementation for the production of Pleurotus sajor-caju (Madan et al., 1987), while wheat bran and calcium carbonate were used in sugarcane bagasse supplementation for the production of Pleurotus sp. (Maziero et al., 1992).
The methodology for substrate preparation described in several studies consists of composting agricultural residues, followed by pasteurization, which can be carried out in different ways. The most common process is the injection of vapor into chambers or tunnels, where the substrate is packaged, with pasteurization time varying as a function of the temperature (Zadrazil, 1980; Abe et al., 1992; Mansur et al., 1992; Maziero et al., 1992). Other forms of pasteurization include immersion of the substrate in hot water (Stamets, 1993; Balasubramanya & Kathe, 1996) and substrate sterilization in autoclaves (Zanetti & Ranal, 1996).
The main difficulty for Pleurotus sp. cultivation is the substrate disinfestation stage, performed by pasteurization or sterilization (Wizentier et al., 1996). Therefore, the introduction of a new methodology to produce substrate that would exclude the pasteurization stage becomes interesting, since it would allow mushrooms to be grown by a larger number of producers, with reductions in costs and production time, facilitating management.
The objective of the present work was to evaluate production and quality of the edible mushroom Pleurotus sajor-caju cultivated on fresh and washed sugarcane bagasse, supplemented with a nutrient solution or with corn grits.
Production of inoculum: the production of inoculum in Petri dishes and its conservation in test tubes was performed according to Bononi et al. (1995).
Production of spawn: corn grain was cooked for 15 minutes, drained and cooled, and 0.5% calcium carbonate was added in relation to its mass (Gabrielli et al., 2002). The grain was then transferred into 25 × 35 cm clear polypropylene bags, with a mean thickness of 0.6 mm, containing a 1.5 cm diameter hole as an air passage on the upper portion (2 cm below the edge), sealed with Micropore® tape (2 cm length × 1 cm width). The bags were sterilized, inoculated, and incubated according to Bononi et al. (1995).
Fresh bagasse was used as the control treatment, packaged in cotton bags and submitted to pasteurization in water at 80°C for 2 hours in a 1,000-liter container (Bahukhandi & Munjal, 1989; Balasubramanya & Kathe, 1996). The bagasse was then cooled down and drained in a hydraulic press to a mean moisture of 60%, calculated by drying 100 g of wet bagasse in an oven at 70°C until constant weight, with three replicates per sample. The pasteurized substrate was manually packaged into 30 × 40 cm clear polyethylene bags of mean thickness 0.15 mm, with 665 g wet bagasse per bag, together with 35 g spawn (5% in relation to the wet mass of the substrate).
The other treatment consisted of fresh bagasse washed in running water at room temperature for one hour, in a device used for washing cassava billets. This device consists of a cylindrical sieve that makes rotary movements driven by a smooth belt, with a water inlet that allows the material to be washed inside the cylinder. After washing, the bagasse was drained and packaged as described for the control treatment. Both treatments were taken to a growing-room built of brickwork with clay roofing tiles, and laid on wooden shelves 70 cm above the ground. An 80% shade cloth was used to seal the window and door, in order to reduce moisture loss from the environment and keep out insects that could be harmful during cultivation. Temperature in the environment ranged from 20-25°C, and relative humidity from 70-90%. Temperature and humidity were monitored with a microprocessor-based thermohygrometer, and aeration was controlled by means of an exhaust fan turned on for 1 hour/day during the mushroom production stage. Mushrooms were collected during three flushes over a 50-day period.
Production on supplemented substrate: the chamber and shelves used in the previous experiment were washed with water and neutral dishwashing detergent. The environment was next sprayed with a Bordeaux mixture (10 liters water mixed with 100 g quicklime and 100 g copper sulfate) to disinfect the site. The growing-room was left to rest for a 2-day period after spraying.
The same washing procedure previously described was used to obtain the bagasse, which was divided into three portions. For the control, the washed bagasse was manually packaged into 30 × 40 cm clear polyethylene bags (mean thickness 0.15 mm), with 465 g wet bagasse per bag (mean moisture of 60%), together with 24 g spawn (5% in relation to the wet mass of the substrate). The second treatment received organic supplementation: 45 g of corn grits (cooked for 15 minutes) were added manually to 420 g wet bagasse and 24 g spawn when the substrate was packaged into the polyethylene bags. The third treatment was prepared as described for the control, and received mineral supplementation after 10 days of incubation in the growing-room by addition of 20 mL of nutrient solution to each plot, injected through the upper part of the bags when they were partially colonized by mycelium. The nutrient solution was adapted from hydroponic curly lettuce cultivation (IAC, 1996), with the following composition in 200 mL distilled water: 7.5 g calcium nitrate; 5.0 g potassium nitrate; 1.5 g monoammonium phosphate; 4.0 g magnesium sulfate; 0.25 mL EDTA iron; and 1.0 mL micronutrient solution (5.0 g manganese sulfate; 0.5 g zinc sulfate; 1.0 g boric acid; 0.2 g copper sulfate; and 0.2 g sodium molybdate in one liter of distilled water).
The treatments thus prepared were taken to the growing-room, where they remained during the incubation and harvest periods, under the same conditions of the first experiment.
Production on washed substrate - samples were distributed completely at random, with two treatments (washed and pasteurized) and 10 replicates. The test used to compare means was a nonparametric rank-sum test (Wilcoxon test) at 5%. The software used for the statistical analysis was the Statistical Analyses System (SAS Institute, 2000).
Production on supplemented substrate - samples were distributed completely at random, with three treatments (control, organic supplementation, and mineral supplementation) and 10 replicates. The Tukey test was used at 5% (SAS Institute, 2000).
Contamination: quantification of samples that stopped producing due to contaminations that occurred during the experiment, expressed as percentage.
ABE (%) = (total wet mass of mushrooms / dry mass of the initial substrate) × 100 (Chang et al., 1981; Maziero et al., 1992). The total wet mass of mushrooms was obtained as the sum of the yields recorded during the three flushes; the dry mass of the initial substrate was calculated by subtracting the mean moisture of the bagasse (60%) from its wet mass in each treatment.
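As a minimal sketch, the ABE calculation above can be written out directly. The substrate figures below reuse values reported in this study (665 g wet bagasse per bag at 60% moisture); the mushroom yield is a hypothetical illustration, not a measured result.

```python
def average_biological_efficiency(total_wet_mushroom_mass_g: float,
                                  wet_substrate_mass_g: float,
                                  moisture_fraction: float) -> float:
    """ABE (%) = total wet mass of mushrooms / dry mass of initial substrate * 100."""
    dry_substrate_mass_g = wet_substrate_mass_g * (1.0 - moisture_fraction)
    return total_wet_mushroom_mass_g / dry_substrate_mass_g * 100.0

# 665 g wet bagasse at 60% moisture -> 266 g dry substrate per bag.
# A hypothetical total yield of 51 g of fresh mushrooms over three flushes
# gives an ABE close to the washed-bagasse mean reported here (~19.2%).
abe = average_biological_efficiency(51.0, 665.0, 0.60)
```

The moisture correction is the step most easily missed: the denominator is the dry mass of the substrate at spawning, not the wet mass packed into the bag.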
Brix degree determination in natural, pasteurized, and washed sugarcane bagasse, based on the average of three subsamples.
Analysis of macro- and microelements present in washed sugarcane bagasse and in the mushroom Pleurotus sajor-caju produced on washed substrate without supplementation, based on the average of three subsamples.
The spawn run on the substrate could be observed from the third day of incubation in the growing-room, with the formation of light pink halos around the spawn, indicating the beginning of degradation of the substrate by the fungus. The natural induction of primordia on the washed and pasteurized substrates occurred between 15-17 days of incubation, and the first flush or harvest occurred after 20 days of incubation. The mushrooms sprouted in clusters, and had the grayish-brown color that is characteristic for the species (Stamets, 1993). The second and third flushes occurred 15 and 30 days after the first yield, respectively, and lasted 7-8 days. This behavior was similar in all plots, except in those that stopped producing due to contamination of the substrate by competing microorganisms. Therefore, one harvest was obtained every 15 days, totaling a period of 50 days between the beginning of mycelium formation and the third flush, after which the substrates were discarded.
The Shapiro-Wilk test was used to test the normality hypothesis, required to compare average biological efficiencies; the statistic W = 0.8217 (P = 0.002404) revealed that the residuals lacked normality at the 5% significance level. Therefore, the Wilcoxon test was used to make the comparisons, with a statistic value of W = 24. Given the corresponding value of P = 0.09472 and a 5% significance level, the hypothesis that the average biological efficiencies were equal was not rejected, indicating that the treatments did not differ (Table 1).
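The same two-step procedure (normality check, then a nonparametric fallback) can be reproduced with standard statistical libraries. The per-bag ABE values below are synthetic stand-ins, not the study's data (the paper used SAS), so the W and P values will differ from those reported.

```python
from scipy import stats

# Hypothetical per-bag ABE values (%) for the two treatments -- illustrative only.
washed = [22.1, 18.4, 25.0, 16.9, 21.3, 14.2, 23.8, 17.5, 19.0]
pasteurized = [15.2, 9.8, 18.1, 12.4, 16.7, 10.9]

# Test normality of the residuals (deviations from each group mean).
residuals = [x - sum(washed) / len(washed) for x in washed] + \
            [x - sum(pasteurized) / len(pasteurized) for x in pasteurized]
shapiro_w, shapiro_p = stats.shapiro(residuals)

# If normality is rejected, compare groups with the Wilcoxon rank-sum test.
rank_stat, rank_p = stats.ranksums(washed, pasteurized)
conclusion = "treatments differ" if rank_p < 0.05 else "no significant difference"
```

With these illustrative samples the rank-sum test rejects equality; with the study's actual data it did not (P = 0.09472), which is the point of the comparison in the text.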
Different types of substrates have been used to grow Pleurotus sajor-caju in several papers, with ABE values from 32.10% to 79.18% (Chang et al., 1981; Bahukhandi & Munjal, 1989; Colauto & Eira, 1995; Sturion & Oetterer, 1995a; Dhanda et al., 1996). The low average biological efficiency in the washed and pasteurized treatments, as well as the biological efficiency variation, as indicated by the coefficient of variation and the standard deviation of the mean (Table 1), could be partially attributed to the loss of samples resulting from contaminations that occurred during the 2nd and 3rd flushes. The intrinsic variability of the biological material, its phenotypic plasticity, and the type of substrate used could also have influenced the results.
Contaminations may occur in most cultivations, because the mycelium becomes weaker after successive cultivations, or due to inappropriate management (Ferreira, 1998). Here contamination of the pasteurized substrate occurred in almost all samples (70%), usually after the 1st flush; in the washed substrate, the contamination percentage was 44.44%, appearing during the 3rd flush. Wizentier et al. (1996), working with the same species grown on sugarcane bagasse after juice extraction at 58°C (mill bagasse), bagasse stored for 30 days, and sterilized mill bagasse, recorded substrate contaminations of 30%, 10%, and 10%, respectively, with different microbiotas with regard to type and amount, and similar mycelium production velocities in all treatments. In spite of the greater contamination in mill bagasse, the authors suggested that this substrate is viable to be used in the production of Pleurotus sajor-caju, without pasteurization or sterilization of the substrate.
According to Balasubramanya & Kathe (1996), the microorganism species that competed with Pleurotus sp. after pasteurization with hot water (80°C for 2 hours) were the fungi Penicillium sp. and Trichoderma sp., probably due to the partial breakdown of cellulose and hemicellulose, making them available to competitors. Pasteurization at 90°C could make cellulose more available (Sturion & Oetterer, 1995a), due to the partial destruction of the lignin-cellulose bonds, favoring substrate contamination. Thus, contamination of the pasteurized substrate could have occurred because of the temperature and time used during pasteurization, since the literature is quite variable with reference to these characteristics (Bahukhandi & Munjal, 1989; Stamets, 1993; Balasubramanya & Kathe, 1996; and Sturion & Ranzani, 1997).
In the traditional cultivation of edible mushrooms, composting has the function of digesting simple sugars through the action of microorganisms present in the residues that make up the substrate, in addition to making some nutrients in the biomass available and making it more homogeneous (Rajarathnam & Bano, 1989). The competing microorganisms present in the compost are partially eliminated by pasteurization, or totally eliminated by sterilization (Bononi et al., 1995). The compost thus obtained is selective, reducing the potential for contamination during cultivation, but it generates higher costs in labor and facilities and demands more time than bagasse washing.
The mean soluble solid content values were 16.5° Brix for the fresh sugarcane bagasse used in the experiments, 1.8° Brix for the pasteurized bagasse, and 0.3° Brix for the washed bagasse. The low amount of soluble solids, especially simple sugars, achieved by washing the freshly-obtained bagasse is important to reduce surface contamination, since the conventional pasteurization process was omitted. Therefore, sugarcane bagasse washing can be used for Pleurotus sajor-caju cultivation, as long as the bagasse is recently obtained and used promptly, avoiding natural fermentation of the stacked material. When the percentage of losses resulting from contamination and the cost of the disinfestation process for the cultivation substrate are compared, the results presented here indicate that the washing technique is promising for the production of this mushroom.
The chemical composition of the mushroom and of the substrate used for growing shows that Pleurotus sajor-caju is effective in concentrating N, K, P, Mg, S, Na, Fe, Zn, and Cu in its fruit bodies (Table 2). This makes mushrooms of this genus good sources of minerals, in addition to having low calorie contents, with few digestible carbohydrates and a small amount of lipids (Sturion & Ranzani, 1997).
The most abundant mineral in mushrooms is potassium, comprising between 56% and 70% of the total ash in the organic matter, followed by phosphorus, sodium, calcium, and magnesium (Chang & Miles, 1984). For comparative purposes, the amounts of minerals in Pleurotus sp. recorded by several authors were transformed to % (K, P, and Mg) and µg g-1 (Na, Ca, Fe, Zn, Cu, and Mn) in dry matter (D.M.). Thus, K (2), P (0.75), and Mg (0.15) were the major constituents in Pleurotus species, while Ca and Fe were present at low concentrations in the D.M., with 1,200 and 500 µg g-1, respectively, according to Bano & Rajarathnam (1988). In a review by Buswell & Chang (1993), the following values were found: K (3.3 to 5.3), P (0.76 to 1.08), Na (1,650 to 1,840), Ca (200 to 240), and Fe (60 to 2,240) for Pleurotus sajor-caju grown on several substrates, while Justo et al. (1998) obtained values of 0.5 to 0.95 for P and 7,900 to 18,500 for Ca in three Pleurotus ostreatus strains grown on wheat straw. In the cultivation of Pleurotus sajor-caju on banana leaf and sugarcane bagasse, however, Sturion & Oetterer (1995b) found the following mean values: K (0.99), P (0.70), Mg (0.13), Ca (400), Fe (175), Zn (35), Cu (12), and Mn (12). Thus, from the values in Table 2 and those found in the literature, a variation can be seen in the analyzed minerals, suggesting that the type of substrate and the species used in the cultivation influence the fungal chemical composition.
The concentration of elements in the mushroom (Table 2) can be assessed more directly because it was collected from washed sugarcane bagasse, with no interference from supplementation. Thus, the concentration of N and minerals in the mushroom results from the fungus' metabolism, which could be correlated with other mechanisms, such as nitrogen fixation by Pleurotus sp. (Ortega et al., 1992; Sturion & Oetterer, 1995a; Patrabansh & Madan, 1997) and the occurrence of microorganisms associated with mushrooms of this genus, such as the bacterium Burkholderia, which could also be related to nitrogen fixation in this system (Yara, 2002).
Production on supplemented substrate: after installation of the experiment, the induction of primordia occurred in all treatments, between 14-17 days of incubation, with three flushes. The first occurred after three days from primordium formation, with a 7-day interval between flushes; one harvest was obtained every 15 days, totaling a 50-day period between the onset of mycelium formation on the substrates and the last harvest.
The Shapiro-Wilk test was run in order to verify the normality assumption required for the analysis of variance, with a W value of 0.9824 (P = 0.8854); therefore, the normality hypothesis was not rejected. Organic supplementation obtained the smallest ABE value (15.66%), statistically different from the control (26.63%) and from mineral supplementation (30.03%), according to Table 3. This result may have occurred due to contamination caused by the manual introduction of the cooked corn grits, so that organic supplementation did not respond satisfactorily to this production methodology. Contamination of the washed substrate with organic supplementation was 60%, reducing productivity and interrupting production after the 2nd flush. In traditional mushroom production, however, organic supplementation with nitrogen-rich residues, such as soybean bran, is frequently used, and according to Permana et al. (2000) it provides superior results when compared with supplementation based on inorganic nitrogen sources, such as ammonium nitrate and calcium nitrate.
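For the three-treatment experiment, the analysis is a normality check followed by one-way ANOVA. The sketch below uses hypothetical per-bag ABE values as stand-ins for the study's data (which was analyzed in SAS); a significant F statistic would then be followed by Tukey's HSD at 5%, as in the paper, to locate which pairs differ.

```python
from scipy import stats

# Hypothetical per-bag ABE values (%) per treatment -- illustrative only.
control = [25.1, 28.0, 24.9, 27.3, 26.5]
organic = [14.8, 16.9, 15.0, 16.1, 15.5]
mineral = [29.2, 31.1, 30.4, 28.9, 30.6]

# Check normality of within-group residuals before the parametric test.
groups = [control, organic, mineral]
residuals = [x - sum(g) / len(g) for g in groups for x in g]
_, shapiro_p = stats.shapiro(residuals)

# One-way ANOVA across the three treatments; pairwise Tukey comparisons
# (e.g. statsmodels' pairwise_tukeyhsd) would follow a significant result.
f_stat, anova_p = stats.f_oneway(control, organic, mineral)
```

With groups as well separated as these, the ANOVA rejects equality of means, mirroring the pattern in Table 3 where organic supplementation differs from the other two treatments.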
Zanetti & Ranal (1997) used pigeon pea at different percentages as a supplement to sugarcane bagasse in the production of Pleurotus sp. "Florida", and the best result was obtained with the incorporation of 15% pigeon pea, with an ABE of 94.73%. Zadrazil (1980) supplemented wheat straw with soybean bran and alfalfa, increasing the productivity of Pleurotus sajor-caju by 300%; the author also used supplementation with ammonium nitrate, increasing productivity by 50%. According to these authors, the addition of nitrogen to an alkaline substrate stimulates the formation of mycelium and the production of mushrooms. However, excess organic or mineral nitrogen may inhibit the synthesis of lignin-degrading enzymes (Bisaria et al., 1997), causing a decrease in productivity, a fact also observed by Macaya-Lizano (1988) in his work with Pleurotus sp. grown on several residues and supplemented with cotton meal or soybean bran.
The mineral supplementation of washed bagasse prevented the development of contaminants on the cultivation substrate, contrary to organic supplementation, although it did not differ from the control treatment with respect to ABE. Nevertheless, several authors have used inorganic sources to supplement various substrates, increasing Pleurotus sp. productivity (Zadrazil, 1980; Bisaria et al., 1997; Permana et al., 2000). Despite its low biological efficiency, washing fresh sugarcane bagasse is a promising technique, compensating for yield through reductions in time and in infrastructure and labor costs.
The use of the mineral solution on washed bagasse did not promote the development of contaminants, and could be refined and used in the supplementation of this substrate.
The Black-and-chestnut Eagle is one of the largest eagles found in the Andes mountain range, only slightly smaller than the Crested Eagle and Harpy Eagle. It can be up to 80 cm long with a 180 cm wingspan.
The young (juvenile) bird is white with some brown, getting darker as it ages. After four years it will have full adult plumage: mostly black, with chestnut breast and leg feathers and grey tail and primary feathers. The juvenile birds are attractive and less wary of photographers, so there are more photographs of them – although the adults do soar above the tree line, so they are seen more often than other large eagles such as the Harpy Eagle.
Both sexes look similar, with prominent crests of up to 10cm, but the female is larger, with a longer tail.
Prey consists of medium-sized mammals such as squirrels and monkeys, and birds such as guans and chickens.
Females lay a single white, brown-spotted egg in a nest built out of sticks and placed high in a tree.
Very little is known about this Endangered species, so the chance to study nests at the sites supported by World Land Trust through Fundación EcoMinga has yielded interesting reports.
Black-and-chestnut Eagles are found in dense, undisturbed montane forest on the slopes of the Andes. Largely restricted to the eastern slopes of the Andes, this eagle has a vast yet narrow range, found from Venezuela through Colombia, Ecuador, Peru, and Bolivia to Argentina.
The Black-and-chestnut Eagle is classified as Endangered on the IUCN Red List of Species due to its small population size (thought to be fewer than 1,000 mature individuals). The biggest threat facing the eagle is the loss of its montane forest habitat. Its tendency to hunt domestic chickens can also lead to persecution by humans.
Causes directed at enforcing the selling price.
Besides these economic factors, there are certain non-economic factors, which have also influenced the formation of combinations. Now, we shall discuss some of the principal causes for the growth of the combination movement.
Large-scale production and intense competition have become the rule of the present-day economy. Cutthroat competition leads to wasteful advertising, unnecessary duplication, overproduction, etc., all of which ultimately lower the industrialists' profit margin. Under such circumstances, small units cannot survive. Therefore, the only alternative available to industrialists is the elimination of competition, which is possible only through business combination.
Large-scale production has certain definite advantages. If different firms come together and form amalgamations, the scale of operations becomes larger and savings in overhead charges can be effected.
The tariff policies of different countries have also furthered the cause of the combination movement. Tariff is often described as the “Mother of Combination”. By imposing high tariffs on imported goods, governments throughout the world offered protection to home industries.
The protection offered by the state resulted in the establishment of a number of business units. Consequently, competition amongst them became intense and the need for business combination was felt.
Another contributory cause for the combination movement was the revolution in transport and development of communications. The development of transport facilities accelerated the growth of large-scale undertakings. The large undertakings began to absorb smaller units to cater to the needs of the local market.
The growth of joint stock companies has also facilitated combinations. Basically the company form of organization itself is a type of combination. Large companies with huge capital were able to control comparatively small companies by subscribing to their shares. Hence, holding companies came into being.
The tendency of business activities to fluctuate regularly between booms and depressions gave a fillip to business combinations. Particularly during periods of depression, new units cannot enter the industry and even the existing small and inefficient units cannot survive.
When the Great Depression set in around 1930, the situation became very difficult and industrialists began to adopt the technique of business combination.
The technological development also paved way for large-scale operations. Small units with limited financial resources were found unable to compete with bigger ones. Hence, they realized the need for business combination.
Moreover, the adoption of modern techniques required huge capital investments, which small units could not provide. Therefore, they were forced to combine themselves to get the benefits of modernization.
Business Combination has also been fostered by patent laws. The inventors were given exclusive right of the use of their inventions. This statutory right also furthered the combination movement.
Men of technical skill of a superior order are few in number. The scarcity of business talent is also a cause for the centralization of power in the hands of a few. Many combines have common directors and managers, which in effect means common control.
The labour, fiscal, industrial and taxation policies of the Governments also influenced the formation of business combinations. The Government may even exert pressure on weaker units to merge with bigger ones.
Frequent changes in the policies of the Government also increased the uncertainty among the businessmen. The instability of the economic policies also encouraged the growth of the combination movement.
In fact, combination is the first step towards rationalization. The growth of rationalization movement encouraged the emergence of business combinations to a great extent.
The mid-nineteenth century brought in its wake the cult of the colossal, a respect for bigness. People began to respect big things and there was a corresponding contempt for small things. The impact of this tendency was felt in the business field also. The glamour of giant undertakings captured the minds of industrialists. This tendency also furthered the combination movement.
The most noticeable changes are in the appearance of the interface. The panels can be docked within an application frame window, as shown below in Figure 1 (although it is easy enough to revert to the floating panel and document window behavior). There are some nice touches to the UI design, such as the way you can easily access different workspace settings from the new application bar at the top. I am not so keen on the all-caps panel headers, but if you set the interface preferences to Small UI, you are unlikely to be bothered much by this particular cross-product change in the UI design.
Figure 1. This shows the new Photoshop CS4 Application window program workspace for the Mac OS, showing the Window menu that allows you to switch between the classic mode workspace and Application Frame workspace shown here.
As usual, you can use keyboard shortcuts to select specific tools, but if you hold the key down instead, you can temporarily switch to using the tool associated with that keyboard shortcut. Release the key and you can revert to working with the previously selected tool.
Photoshop CS4 can now take advantage of OpenGL video processing, so long as you are using a video card that is OpenGL enabled. When this preference is switched on, zooming and scrolling images becomes a lot smoother, and there are also several other little tricks that you can do when OpenGL is switched on. You can get a quick bird's-eye view by holding down the H key as you click on the image. This will zoom out to show a full-frame view of the image. Release the key and you return to a normal view again, or drag the cursor to a new area of the image and release the mouse button to zoom in on that new area. You can also pan images using a simple flick of the mouse. Plus you can access the new Rotate tool, which allows you to swivel the angle of the image on the screen, thereby allowing you to retouch an image without having to turn your head sideways as you do so (see Figure 2)!
Figure 2. The new rotate tool.
Adjustment layers are now managed via an Adjustments panel (see Figure 3). While this may not seem a big deal at first, doing away with the modality of the Adjustments dialogs means you can now add adjustment layers and have immediate access to the adjustment settings. Imagine you have three different adjustment layers applied to an image. As you click on each, you can immediately access the adjustment settings. As you tweak the adjustments for an adjustment layer you can also go directly to the Layers panel and adjust the layer blending mode. There is a lot of scope here to work faster and more efficiently. Figure 3 shows the Adjustments panel list view where you can select an adjustment by clicking on one of the button icons. You can also select adjustment preset settings from the adjustments list. You will notice there is now a new Vibrance adjustment (just like the one in Camera Raw) and some of the adjustments, such as Curves will allow on-image adjustments.
Figure 3. The Adjustments panel in list view.
Color Range has been improved. There is a new option called ‘Localized Color Clusters’, which when selected carries out more advanced calculations as you add and subtract with the selection eyedropper tools to refine a Color Range selection. The net result is that Color Range has now become a very powerful color selection tool. When you link this with the new Masks panel feature, it is possible to build masks based on color that are much more accurate than anything you could have achieved before using Color Range.
Figure 4. The new Color Range selection dialog.
The Masks panel offers direct editing control over the shape of an active layer mask, so that you can now dynamically adjust the mask density as well as the feathering. When editing a layer mask there is a Refine Edge mask button that opens the Refine Edge dialog so that you can tweak the mask settings further. You could already do this in CS3, of course, but it’s now made more obvious in CS4.
You can now add a vignette in Photoshop by adding a darkening Levels (or Curves) adjustment, making an elliptical selection and filling the pixel mask with black. If you go to the Masks panel you can increase the Feather amount to make the hard mask edge softer. However, let’s say you want to soften the transition between the masked and unmasked areas. By decreasing the Density you can make the black areas of the mask lighter and thereby reveal more of the adjustment effect in the center of the image. This technique is not limited to pixel masks: Figure 5 shows how I could just as easily use a subtractive elliptical pen path shape, apply this as a vector mask and use the Masks panel settings shown here to soften the mask edge.
Figure 5. An example of the Masks panel being used to feather the edges of a hard-edge vector layer mask.
This is probably the star feature of Photoshop CS4, yet also the most controversial, since it invites Photoshop users to tamper with photographs in ways that are likely to raise the hackles of photography purists. Does this spell the ‘death of real photography’ (DORP)? I don’t know, but advertising and design photographers will at least appreciate the benefits of being able to adapt a single image to multiple layout designs. To use this feature, you need an image that’s on a normal layer (not a Background layer), and you simply go to the Edit menu and choose Content-Aware Scale. You can then drag the handles that appear on the bounding box for the selected layer to scale the image, making it narrower/wider, or shorter/taller.
Figure 6. This shows an example of where I used the Content-Aware scaling feature to stretch the penguins further apart and add more sky to the image.
Photoshop CS3 users will have appreciated the advances made to Photomerge blending, which allowed you to obtain perfect results when stitching panoramic images together. Well, depth of field blending has taken this concept further. Basically, if you take a series of photographs where the point of focus is different in each shot, you can use a combination of the Auto-Align command followed by an Auto-Blend command, where the ‘Stack Images’ blend mode is used (rather than a Panorama blend). Photoshop then cleverly analyzes each image in the assembled layer stack to detect which portions are the sharpest on each layer and auto-masks them to create an extended depth of field blended image. Now there are some limitations to this technique: Photoshop can only make all areas of the picture sharp if there is sharp information in every portion of the image. It works quite well in the extreme example shown below in Figure 7, but photographers are more likely to use this to achieve enhanced focus where there are smaller differences in focus between exposures.
Figure 7. On the left you can see an example of one of 5 images in a series of photographs taken at different focus settings. On the right you can see the result of a depth of field blend using Photoshop CS4.
There are a lot of other new features in CS4 that I’ll be going into in more detail in my forthcoming book. For example, there is the ability to see a preview inside the Clone Stamp cursor as you retouch, drag-resizing of cursors and Smart Object enhancements that allow Smart Object transforms to be linked to a layer mask. I was also impressed with the Configurator demo that John Nack did at Photoshop World recently. While I am not sure if this is going to be part of the final shipping product, I reckon it should become available soon via the Adobe Labs website.
Camera Raw editing has been updated too, and everything you saw added in Lightroom 2 is now here in Camera Raw 5 for Photoshop CS4 (see Figure 8). This means that in addition to the Camera Raw capture sharpening that was added in the Camera Raw 4.1 update, you now have localized editing, negative clarity, post-crop vignetting plus the new Camera Profiles. In case you haven’t heard already, the new Camera Profiles allow you to apply different looks to your raw files to provide a variety of base-level settings before you start adjusting the Camera Raw sliders. In many cases these include a Camera Profile that matches the default JPEG rendering of the camera. Basically, if you don’t have Lightroom 2, Camera Raw 5 alone is reason enough to upgrade.
Figure 8. Here is the Camera Raw dialog (hosted by Bridge), showing the main controls and shortcuts for the single file open mode.
It is inevitable that comparisons will be drawn between Bridge CS4 and Lightroom 2. To be fair, Bridge is a file browser while Lightroom is (among other things) a cataloging program, so you shouldn’t really compare the two directly. The thing is, when it comes to the tasks that both programs do happen to share, Bridge CS4 is still in some ways lacking in speed and ease of use. Let’s deal with the positive aspects of Bridge first, because there have been some clear improvements and innovations here.
Figure 9. The Bridge CS4 interface.
As with Photoshop, we have a new interface and a task-based menu that can be used to switch between workspaces. This means that you can quickly switch from a folder navigation workspace to one that is suited for image metadata editing.
The Collections panel has made a return and includes ‘Smart Collections’, which you can use to build collections based on selected criteria. There is also an Auto-Collect feature that can cleverly analyze photos in a folder and stack them according to whether they are candidate images for creating panoramas or Merge to HDR image sets. It does this by analyzing if the photos were shot within an 18 second time frame and whether they overlap a little or a lot. It can be a little slow, but it does work automatically. You can also use the Process Auto Collections in Photoshop to automatically take the stacked images and process them in Photoshop (either as panoramas or Merge to HDR images). This is a feature that could do with some further refinement, but it is a promising start and will greatly appeal to photographers who shoot a lot of panoramas or HDR image sequences.
There is now a neat one-click preview option in Bridge CS4, where you just press the spacebar to make selected images appear full frame on the screen. You can then use the keyboard arrow keys to navigate through the selected images. It’s brilliantly simple and effective! What, then, is the point of the Bridge Review mode (see Figure 10)? This reminds me very much of the Cover Flow navigator in Mac OS X 10.5. Basically it is intended as a tool for browsing selections of images, but it’s an odd addition to the program given that the one-click previews allow you to do the same thing more elegantly.
Figure 10. The Review mode.
The Output panel is designed to replace the previous Web Photo Gallery and Contact Sheet plug-ins. The good news is that when you generate a Web Gallery you get to see a preview of how the gallery will look (but only up to the first 10 images) and all RGB files can now be converted to sRGB (which overcomes the color matching problems that dogged Bridge in the past, such as when the source RGB files were in Adobe RGB or ProPhoto RGB). The bad news is that the Output module doesn’t make use of an image cache. This means it is still a frustratingly slow process to configure a gallery template. Even if all you do is change the name of the gallery title, Bridge has to convert every single image again in order to generate a new preview. And even if you do find a configuration that you like, there is no way to save it as a custom template. The Web galleries are mostly all new, but some of the old favorites such as the Feedback template are unfortunately missing.
Similar problems beset the PDF output. Again, you have to regenerate the preview to see the outcome of any layout changes that you make, you can’t save template settings and there is no direct print output button, which means going through the extra step of generating a PDF file and having to print from the PDF. Lastly, there is no draft print mode either, so everything has to be generated long-hand from the original master files. If you own Lightroom, you won’t be bothered by such shortcomings. However, there is another option: you can still manually install the previous Web Photo Gallery and Contact Sheet plug-ins and use these as before. Overall, the Output module is, in my view, a disappointment, but with some form of image cache management and some thought given to adding more interface options, there is no reason why it can’t one day begin to match the speed and functionality of Lightroom.
Apart from these few niggles, I would say that Photoshop CS4 is an excellent upgrade, not just for the features I have listed in Photoshop, Camera Raw and Bridge, but for lots of other significant little changes to the program. Whatever your interest in Photoshop, this is, in my view, an essential upgrade.
The above examples are highlights taken from the forthcoming Adobe Photoshop CS4 for Photographers book by Martin Evening, published by Focal Press (no release date can be given yet). This edition has been revised to provide detailed coverage of all the essentials in Photoshop plus what’s new in CS4. This edition is the biggest revision yet, with many more new image examples and provides greater detailed analysis of all the key areas of Photoshop that should be of interest to photographers. |
Have you ever watched yourself think about the sequence of what you plan to do? This process is called metacognition. Meta is “beyond” and cognition is “thinking”.
Let’s look beyond the thinking. It has two parts: knowledge and regulation.
Say you’re going out to run errands. You might arrange your stops in the order of their importance or their location. You’ve just used both knowledge of “what” to do and “regulated” those chores in terms of value.
Thinking about thinking is critical for academics and daily life. If you don’t know “what” to do or “how” to do it, nothing gets done at all!
A form of metacognition is metamemory which is the knowledge of how you use your memory and your memory strategies. When you devise a plan for learning the states and their capitals, you use metacognition. When you begin the actual process of remembering them, it’s metamemory.
Watch your child to see how they are using these tools. Help them along with your own strategies.
In this post-synodal apostolic exhortation, Pope Francis restates official teachings on marriage and family life, reflects on the underlying values of these teachings and why these commitments continue to matter in our world today, and honestly acknowledges that a great many people are unable to live fully the ideals proposed by the church. To them, he offers words of encouragement and hope, instructing them to do their best in their unique circumstances and to commit to further growth in understanding what love is and what love requires.
Wonderfully complicated. It is a beautiful phrase: light and hope together with pragmatism and complexity. Christians are called to model their lives after Jesus, the one who showed mercy and tenderness. How? It’s wonderfully complicated.
I’m going to highlight three related themes that I find particularly noteworthy in Amoris Laetitia: the shift from legalism to personalism, the shift from deductive logic to a process model of discernment, and the shift from a “one size fits all” mentality to a “growth ethic.” Taken together, I believe we see the foundations for ongoing development of Catholic teachings on marriage and the family, the implications of which are still unfolding. My overall take-away is that Pope Francis is saying it is time for lay people to discern their deepest values and take responsibility for living them out; we need to see church teaching for what it is—a complicated, messy (even imperfect) tradition trying to form people to make healthy choices that are good for society. So this document becomes a celebration of conscience and a rejection of a legalistic paradigm.
Repeatedly in the exhortation, the pope highlights the flaws of a rigid, legalistic paradigm. Instead, sometimes explicitly and sometimes with subtlety, he affirms a shift towards a personalist framework. A personalist framework focuses on the moral subject in his or her particularity instead of beginning moral reflection with attention to abstract norms. (See Charles Curran, “Chapter 4: Person” in The Catholic Moral Tradition Today: A Synthesis). Pope Francis reminds us that healthy family relationships are not based on strict rules that we blindly follow, but rather intimate relationships of love rooted in mutual care, interdependence, and tenderness.
Pope Francis tells readers that it is not enough for church leaders to stress “doctrinal, bioethical, and moral issues.” While this document does not challenge or undermine those received teachings (see the restatement of the central norms of Humanae Vitae at nos. 80-81 for example), the pope repeatedly reminds us to attend to the “complexity of various situations” (79). I interpreted this as a subtle shift akin to what Richard Gula described in Reason Informed by Faith as a move from a “classicist” to a “historically conscious” methodology, or what Todd Salzman and Michael Lawler (drawing on Janssens) describe as Vatican II’s shift to a theological anthropology that attends to the “human person adequately considered” (The Sexual Person: Towards a Renewed Catholic Anthropology). And when we pay attention to each person, we see that “everyone has something to contribute, because they have their life experiences, they look at things from a different standpoint and they have their own concerns, abilities, and insights.” (138). Furthermore, we are instructed to serenely contemplate “the ultimate fulfillment of each human person.” (166). “The deepest expectations of the human person” are stated via values and virtues not fixed essences: “a response to each one’s dignity and fulfillment in reciprocity, communion and fruitfulness.” (201). Stated crudely, a physicalist understanding of the natural law would focus on how in “God’s design” the male sexual organ fits in the female sexual organs; by contrast, a personalist framework wonders at and takes joy in the relationship between this woman [Sally], and this man [Pedro], marvels at the uniqueness of their friendship-love, and asks how the church can support them as they grow into ever more mature conjugal union. At the end of the day, we’re still talking about “God’s design for love,” but our framework has shifted to a personalist dimension that recognizes the uniqueness of each couple’s situation.
Thus the recipe for conjugal love is not exclusively restated in a canon law framework (for example, using language of contract, validity, indissolubility, procreation); instead one gets the impression that Pope Francis cares about the quality of people’s intimate relationships. His use of the natural law focuses not on fixed essences but on the “natural inclinations of the human person” to love (123, 131, 143). And he explains that “marriage was not instituted solely for the procreation of children” but also that mutual love might grow and mature (125). “The acts proper to the sexual union of husband and wife correspond to the nature of sexuality as willed by God when they take place ‘in a manner which is truly human’” (154).
It is remarkable, then, how Pope Francis honors the good qualities of those ‘irregular’ unions that nevertheless approach the goodness of God’s intent, seeing how couples might not be able to fully live church teaching even though their relationships contain “deep affection, responsibility towards the children and the ability to overcome trials” (78). Later he says that some unions realize the ideal in a “partial and analogous way” (292). In other words, he is celebrating the good within those relationships without saying that they must all conform to a model of perfection.
I certainly understand why some readers believe the document does not go far enough, given that there is still an implied hierarchy of ‘normative’ over ‘irregular’ unions. But I think it is still valuable to point out how Francis is downplaying a legalistic paradigm in favor of a personalist one. Revisionist theologians have explained the liberating implications of such a move for Catholic sexual ethics. While official teachings remain unchanged with regard to marriage, the Synod’s efforts to listen to the struggles of the faithful have made a difference in the tone of this document.
“It is reductive simply to consider whether or not an individual’s actions correspond to a general law or rule, because that is not enough to discern and ensure full fidelity to God in the concrete life of a human being.” (304).
Here he is saying that discipleship is not about following rules but rather about an intimate relationship with God and living out the gospel. Then the pope cites Aquinas, explaining that the further one descends into particulars, the messier it gets. This is a famous passage often quoted in discussion of the natural law, especially after the publication of Humanae Vitae in 1968. The pope reminds us that general principles (e.g. ‘responsible parenthood’) are essential to communicate. But the application of that general principle in a particular couple’s life is not as straightforward as some might think. It requires discernment, not deductive logic.
The pope laments that church leaders “find it hard to make room for the consciences of the faithful,” who are “capable of carrying out their own discernment in complex circumstances.” (37). He repeatedly admonishes pastors who fail to understand that couples must discern for themselves how God is calling them to live out their faith.
“For this reason, a pastor cannot feel that it is enough simply to apply moral laws to those living in ‘irregular’ situations, as if they were stones to throw at people’s lives….” (305).
“Discernment must help to find possible ways of responding to God and growing in the midst of limits. By thinking that everything is black and white, we sometimes close off the way of grace and growth, and discourage paths of sanctification which give glory to God.” (305) The pope reminds us that the moral life is not black and white, but also that human freedom must be exercised with thoughtfulness. So for example it is not enough for a pastor to sit in front of a room of couples preparing for marriage and tell them not to use birth control. The moral life is not about conforming to particular rules. Instead, as I wrote in chapter 7 of my book, couples should be invited to discern how God is calling them in their own particular circumstances. (For a great book on conscience and discernment I highly recommend Kathryn Cox’s Water Shaping Stone). We should be cultivating the freedom of lay people, not speaking about the moral life as if it requires blind obedience to authority.
The final theme I wish to raise up has to do with the “law of gradualness” in the apostolic exhortation. The pope admits that the moral life is complex and there is no “one size fits all” answer to marriage and family questions today.
“If we consider the immense variety of concrete situations such as those I have mentioned, it is understandable that neither the Synod nor this Exhortation could be expected to provide a new set of general rules, canonical in nature and applicable to all cases. What is possible is simply a renewed encouragement to undertake a responsible personal and pastoral discernment of particular cases, one which would recognize that, since ‘the degree of responsibility is not equal in all cases,’ the consequences or effects of a rule need not necessarily always be the same.” (300).
“Along these lines, Saint John Paul II proposed the so-called “law of gradualness” in the knowledge that the human being “knows, loves, and accomplishes moral good by different stages of growth.”… Each human being “advances gradually with the progressive integration of the gifts of God and the demands of God’s definitive and absolute love in his or her entire personal and social life.” (295).
You have to decide on the LEDs you want to use. Depending on the type, you have to use different resistors R6, R7, R8 for the LEDs. See further down for a recommended one.
The capacitor C2 only buffers the input voltage in case you connect LEDs that draw a lot of power; it avoids problems with some programmers that may fail during flashing when the supply voltage fluctuates too much.
10 µF capacitor: the marked line is the − lead, which points toward the middle of the PCB. On the PCB, the + pad is labelled.
The device has an internal voltage regulator with 3.3 V output to power the ATMega and the RFM12B. It is recommended to power the whole device with a 5 V power supply. You can use a cheap one meant as a phone charger; these typically provide 500 mA output current.
The resistor value is calculated as

R = (U_in − U_LED) / I_LED

with U_in the input voltage of the device (e.g. 5 V), U_LED the forward voltage your LED needs (e.g. 2.8 V) and I_LED the current you want to drive through the LED (e.g. 250 mA).
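For the example values given above (5 V supply, 2.8 V LED, 250 mA), the calculation can be sketched in a few lines of Python; the helper name `led_resistor` is my own and is not part of the smarthomatic firmware:

```python
def led_resistor(u_in, u_led, i_led):
    """Series resistor for one LED channel: R = (U_in - U_LED) / I_LED, in ohms."""
    return (u_in - u_led) / i_led

# Example values from the text: 5 V supply, 2.8 V LED, 250 mA target current
r = led_resistor(5.0, 2.8, 0.25)
print(round(r, 1))  # 8.8 ohms
```

Rounding up to the next standard resistor value keeps the LED current slightly below the target, which is the safe direction.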
One LED type that is very bright with a moderate current are the Cree XP-E LEDs. They are available as a module in the smarthomatic shop. The module makes assembling easier and ensures good cooling.
The “normal” resistors are enough if you use a thin glass housing (as shown on the homepage). You can use normal 1/4 W resistors and the LED module won't heat up much. No heat sink required. This is the safest and easiest choice.
The “bright” resistors result in LED currents of ~120 mA and a power dissipation at the LED module of ~1 W. It should not need an additional heat sink, but it already gets hot (you can still touch it, maybe ~50 °C). The resistors have a calculated power dissipation of ~0.3 W. 1/4 W resistors may be enough, but I recommend using metal oxide resistors (1 W). The overall current of the RGB dimmer is 380 mA, so a typical smartphone power supply with 500 mA maximum current fits perfectly.
With the “max” resistors, you definitely need an extra heat sink at the LED module, and resistors which can cope with the higher power dissipation (1 W).
The power dissipated in each resistor is

P = U_Res × I_LED

with U_Res the voltage across the resistor (U_Res = U_in − U_LED).
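Plugging in the ~120 mA "bright" setting mentioned above reproduces the quoted ~0.3 W per-resistor figure; as before, the helper name is my own and only illustrates the formula:

```python
def resistor_power(u_in, u_led, i_led):
    """Power dissipated in the series resistor: P = (U_in - U_LED) * I_LED, in watts."""
    u_res = u_in - u_led  # voltage across the resistor
    return u_res * i_led

# "bright" resistor setting: ~120 mA per channel at 5 V in, 2.8 V LED
print(round(resistor_power(5.0, 2.8, 0.12), 2))  # ~0.26 W per resistor
```

Three channels at this current also dissipate roughly 3 × 2.8 V × 0.12 A ≈ 1 W at the LED module itself, consistent with the figure stated above.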
BRUSSELS – In her latest speech on Brexit, British Prime Minister Theresa May rejected the prospect of the United Kingdom remaining in the European Union’s customs union, on the grounds that the UK wants its own trade policy. This is not in the best interest of either the UK or the EU.
It is true that Norway and Switzerland, both of which are highly integrated into the EU market, have customs borders with the bloc. These countries need an independent commercial policy to provide greater protection than the EU offers to their domestic agricultural sectors, which in both cases can never be efficient, owing to mountainous terrain.
Yet the UK has traditionally been much less protective of its agriculture, and is thus likely after Brexit to pursue a commercial policy that is very similar to that of the EU, anyway. It is therefore difficult to see what the UK would gain from pursuing a national trade policy – especially at a time when the United States, under President Donald Trump, is pursuing policies (such as imposing tariffs on imported steel and aluminum) that show little regard for its smaller trade partners.
The truth is that the main impediment to a post-Brexit customs union is political. As Labour leader Jeremy Corbyn, who supports remaining in the customs union, has emphasized, a country with the heft and influence of the UK cannot be viewed as merely following EU decisions, over which it has no influence. Yet this problem can be solved – or, rather, finessed.
The UK’s demand to weigh in on EU decisions can and should be accommodated, with experts from the UK included in the committees that decide trade policy. Those experts would have no voting rights, but they would be able to shape decision-making. The EU already has similar arrangements with Iceland, Norway, and Switzerland regarding matters relating to the Schengen Area.
Formal decision-making power is of course another matter. The EU’s legal structure cannot allow a non-member state to participate in binding decisions. This calls for something of a gentlemen’s agreement, with the EU pledging to take UK interests into account when making trade-policy decisions.
If the UK remains in the existing EU customs union – as is foreseen for the transition period – rather than negotiating a new customs agreement with the EU, that gentlemen’s agreement would also extend to new trade agreements that the EU concludes with third countries. After all, such agreements would apply explicitly to the EU’s entire customs territory – a term with a precise meaning under WTO rules. So, whatever market-access benefits they include would automatically apply to the UK.
As a gesture of goodwill, the EU should also support the UK’s efforts to “grandfather” its market access resulting from existing EU free-trade agreements and thereby avoid the need to renegotiate each and every deal. The legal argument would be that the EU customs territory has not changed, so existing EU trade agreements must continue to apply to the UK. But this argument could be contested, leaving UK exporters suddenly confronting tariffs and other trade barriers.
European Commission officials could dismiss that as the UK’s problem. But such a response would run counter to the spirit of the European Council guidelines of April 2017, which call for “a constructive dialogue” with the UK “on a possible common approach toward third-country partners.” Such a constructive approach would include steps – like supporting the grandfathering of trade agreements – that minimize friction during the transition period.
Remaining in the EU’s customs union would leave the UK in a much stronger position than, say, Turkey, which, despite having concluded an agreement to create a customs union with the EU, is not actually part of the bloc’s customs territory. As a result, third parties do not automatically have to grant Turkish exporters EU-level access to their markets. Instead, Turkey must try to persuade third countries with which the EU has concluded trade deals to do so.
Turkey has usually succeeded. But it enters such negotiations in a weak and somewhat awkward position, because it is required, per its agreement with the EU, to grant to the third country all of the concessions the EU has made, whereas the third country has no legal or political obligation to reciprocate.
For the EU, agreeing to take the UK’s interests into account in future trade negotiations should not be viewed as a concession, because it is in the EU’s own long-term interest. After all, if the EU can offer de facto access to the EU and UK markets – which, together, are 20% larger than the EU market alone – its negotiating power is significantly strengthened.
In this sense, keeping the UK in the EU customs union would help to preserve the EU’s global standing in trade. And while many in the EU, especially the European Commission, would like to have their cake and eat it – keeping the UK in the customs union, while ignoring its interests – that is simply not an option.
The EU’s alternatives are either to see the UK leave its customs union, or to keep the UK in by making a political commitment to take British interests into account. From a long-term perspective, the latter is preferable.
Finally, remaining in the EU customs union would make it possible to avoid reestablishing a hard border between the UK and the Republic of Ireland after Brexit. Though May has agreed that avoiding a hard border should be part of any deal, she has offered only vague suggestions concerning how that could actually be achieved.
Brexit is, and will remain, a lose-lose proposition. Neither side can claim victory if its point of view prevails. But the losses on both sides can be reduced. To that end, keeping the UK in the EU customs union – by guaranteeing it an active, albeit informal role – is negotiators’ best bet. |
How to increase red blood cell count: a lack of iron in the diet (and, perhaps, of other minerals and nutrients) is the most common cause of a low red blood cell count, and eating iron-rich foods may help to increase it.
A complete blood count (CBC) measures the concentration of white blood cells, red blood cells, and platelets in the blood. Often part of a routine exam, it is used to evaluate your overall health and aids in the diagnosis of a wide range of conditions and diseases, including anemia, infection, leukemia, other malignancies, and immune disorders. It measures several components and features of your blood, including red blood cells, which carry oxygen, and white blood cells.
The popular story of how low-carb diets work goes something like this: reducing your carbohydrate intake lowers your insulin levels. Since insulin keeps fat locked into adipose tissue, lowering insulin can increase the amount of fat released to be burned for energy, at least for a portion of the overweight and obese population.
Since white blood cell count is a sign of systemic inflammation, it is no surprise that those with lower white counts live longer. It is also worth learning to spot the symptoms of a low white blood cell count, and which signs indicate that someone’s count is in the danger zone. Normal white blood cell counts range from 4,500 to 11,000 WBCs per cubic millimeter of blood.
An RBC count is a blood test used to find out how many red blood cells (RBCs) you have. The test is usually part of a CBC, which measures all the components of your blood. Your doctor may order one for various reasons, and both normal and abnormal results carry meaning.
Do you tend to bruise easily and have trouble stopping cuts or wounds from bleeding? Or perhaps you frequently get nosebleeds or bloody gums? If so, there is a chance you have a low platelet count. A low platelet count, a condition called thrombocytopenia, is a problem with normal blood clotting and bruising that results from having low levels of thrombocytes, colorless blood cells.
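The 4,500–11,000 WBCs per cubic millimeter range quoted above amounts to a simple range check. The sketch below is purely illustrative (the helper name is made up, labs define their own reference ranges, and this is not medical guidance):

```python
# Hypothetical helper: classify a white blood cell count against the
# commonly cited normal range of 4,500-11,000 WBCs per cubic millimeter.
# Illustrative only; real reference ranges vary by lab and patient.

WBC_NORMAL_RANGE = (4_500, 11_000)  # WBCs per cubic millimeter

def classify_wbc(count: int) -> str:
    """Return 'low', 'normal', or 'high' for a WBC count."""
    low, high = WBC_NORMAL_RANGE
    if count < low:
        return "low"
    if count > high:
        return "high"
    return "normal"

print(classify_wbc(3_800))   # low
print(classify_wbc(7_000))   # normal
print(classify_wbc(12_500))  # high
```

Counts exactly on a boundary are treated as normal here; a real screening tool would flag borderline values for review rather than silently accepting them.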
Cardiologists are physicians who practice in the subspecialty of internal medicine that concentrates on the diagnosis and treatment of heart disease. In most instances, cardiologists treat patients on a consultative basis to determine if the symptoms the patients are exhibiting are signs of heart disease. According to the State Health Facts Web site of the Kaiser Family Foundation, there are 29,649 actively practicing cardiologists in the United States.
Best geographical location(s): Opportunities exist in all regions, with most jobs in larger urban and suburban centers near medical centers and hospitals. New York, California, Massachusetts, Florida, Pennsylvania, and Michigan represent the top states for employment of cardiologists.
Electronic components are the basic building blocks of an electronic system: indivisible circuit units that perform predetermined functions. Because of their large number and variety, their performance, reliability, and other parameters have a great impact on the technical performance, reliability, life cycle, and other technical indicators of entire military electronic products. The correct and effective selection and use of electronic components is therefore an important task in improving the reliability of military products. At present, the world is undergoing a new military revolution, whose essence and core is informatization. High-level, highly reliable military electronic components are a necessary condition for realizing the informatization of military equipment. Electronic components, especially microelectronic devices, are increasingly used in military equipment, so their selection and application are increasingly important.
This article focuses on procurement, screening, destructive physical analysis, and failure analysis in the selection and use of military electronic components, and presents selection and use criteria together with a flow chart of the whole process. The reliability of electronic components is divided into inherent reliability and use reliability. Inherent reliability is mainly guaranteed by design and manufacturing work, which is the task of component manufacturers. However, failure analysis data at home and abroad show that nearly half of component failures are due not to deficiencies in inherent reliability, but to the user's improper selection or incorrect use of components.
Therefore, in order to ensure the reliability of military electronic products, the selection and application of electronic components must be strictly controlled.
1. Classification of electronic components
As the name suggests, electronic components can be divided into two major categories: components and devices. Components include resistors, capacitors, inductors, relays, and switches; devices can be divided into semiconductor discrete devices, integrated circuits, and electric vacuum devices. Table 1 is the component classification table.
2. Quality level of electronic components
The quality level of a component refers to the level of quality control applied to its manufacturing, inspection, and screening processes, according to the product's governing standards or a technical agreement between supplier and buyer, before the component is installed. The higher the quality level, the higher the reliability level. To ensure the quality of military components, China has established a series of component standards. In the early 1980s, the "seven specials" 8406 technical conditions (hereinafter "seven specials") were established and became the basis of China's military component standards; components controlled under the "seven specials" or its tightened conditions are still the main varieties used by aerospace and other departments. (Note: "seven specials" refers to special personnel, special machines, special materials, special approval, special inspection, special skills, and special cards.) Following the current trend, the "seven specials" conditions will gradually give way to the national military standards (GJB) for components. China's military standardization organizations have established the GJB system with reference to the US military standard (MIL) system.
(1) The technical performance of components should meet product requirements, and their environmental adaptability should meet the requirements of military products, generally an operating temperature range of -55°C to +125°C. (2) The quality level of components should meet the requirements of the product.
(3) Consider the requirements for derating.
(4) It is preferred to use mature, stable, reliable, promising and continuously available standard components.
(5) The domestically produced components are preferred, especially the components on the military qualified product catalogue and the components produced by the IS09001 certified component manufacturers.
(2) Select components that have been tested, meet the requirements, and can be stably supplied by "seven specials" designated manufacturers.
(4) Avoid using foreign military products that have been discontinued. For US military microelectronic devices, discontinuation means that a large number of devices are no longer produced or will no longer be produced; the United States gives this a special definition, "Diminishing Manufacturing Sources and Material Shortages," referred to as DMSMS.
3.4 The overall unit of the model should prepare a preferred components catalogue. Component types and specifications selected for the model should be compressed according to the preferred catalogue, and the quality grades of components should be controlled. This better ensures the quality of component supply and is more conducive to integrated logistics support.
(6) After review, it is released by the general unit.
3.4.2 Dynamic management. Based on changes in the components used during product development and production, changes in component manufacturers' products and their quality status, and information fed back during component use, the preferred catalogue should be revised accordingly.
4. Selection and use of military electronic components The correct use of components has become an important issue affecting the reliability of military electronic components, equipment and systems, and should be highly valued by users.
4.1 The whole process flow of component selection and use
The whole process covers selection, procurement, supervision, acceptance, screening (including board-level burn-in, the "copying machine" method), destructive physical analysis (DPA), storage, use, electrical assembly, power-on commissioning, electrostatic protection, and failure analysis. The whole process flow chart of component selection and use is shown in Figure 1.
4.2 Procurement of components
As shown in Figure 1, the procurement process is an important part of ensuring that components meet the design requirements. Therefore, the units responsible for each system, subsystem, and equipment should pay attention to the following points: (1) The manufacturer shall prepare the technical standards for the purchased components and the re-inspection specifications for incoming products, and these standards shall be consistent with the current valid drawings.
(2) The contractor shall prepare a purchase list of components, including the name, model, specification, accuracy, and quantity of the components; their quality grade, governing standards, and manufacturer; their package form, installation form, and use environment; and packaging and shipping requirements.
(3) When purchasing components, the contractor shall follow the catalogue of qualified subcontractors, and the corresponding approval procedures shall be carried out when procurement beyond the catalogue is required.
(4) How to purchase at the specified quality level is the key to ensuring high product reliability. Components whose actual production model (including prefixes and suffixes), quality grade, and package form completely match the drawing design can be purchased directly. For components with changed prefixes or suffixes, or with unclear quality grades, the production execution standard should be determined first. Component standards generally specify the quality control standards for the manufacturing, inspection, and screening processes, and products produced and managed according to different control standards have different quality levels. Accordingly, the production execution standard is the main basis for determining the quality level: if it meets the production standard of the specified quality level, the component is considered to have reached that level. Secondly, the quality coefficient of the component should be clarified. The failure rate of the component can be calculated from its quality coefficient; as long as the failure rate satisfies the value allocated to the component, the component is considered to have reached the specified quality level.
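The relationship described above, where a component's failure rate follows from its quality coefficient, can be sketched in the style of a parts-count reliability prediction (predicted failure rate = base failure rate multiplied by pi factors, one of which is the quality factor). All numbers below are placeholder assumptions for illustration, not values from GJB or any handbook:

```python
# Minimal parts-count-style prediction: lambda_p = lambda_b * pi_Q * pi_E.
# Other pi factors used by real handbooks are omitted for brevity.
# Base rate and quality factors below are assumed, illustrative values.

def predicted_failure_rate(lambda_base: float, pi_quality: float,
                           pi_environment: float = 1.0) -> float:
    """Predicted failure rate in failures per million hours (FPMH)."""
    return lambda_base * pi_quality * pi_environment

lambda_b = 0.012  # FPMH for one hypothetical capacitor (assumed)
pi_q = {"military grade": 0.1, "industrial grade": 1.0, "commercial grade": 3.0}

for grade, factor in pi_q.items():
    print(f"{grade}: {predicted_failure_rate(lambda_b, factor):.4f} FPMH")
```

The point of the sketch is the structure: a higher quality grade enters the prediction as a smaller quality factor, so the same part purchased at a higher grade is credited with a lower predicted failure rate, which is then compared against the value allocated to that component.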
4.3 Secondary screening of components
Secondary screening of components is an important means of ensuring quality and reliability. Generally speaking, military products require 100% secondary screening of components, including those of 883B and "seven specials" grades. As shown in Figure 1, when the contracting unit receives purchased components into the factory, it should perform secondary screening according to the "Secondary Screening Specification for Model Components." When the secondary screening failure rate exceeds the specified ratio, the batch of components is not allowed to be installed. In the secondary screening process, components for which screening conditions are not available (such as large-scale integrated circuits) can be tested by the "copying machine" burn-in method, i.e., assembling the unscreened components onto a circuit board and running it under temperature stress or electrical stress for an extended period.
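The lot-rejection rule above, in which the batch is barred from installation when the secondary-screening failure rate exceeds the specified ratio, amounts to a one-line comparison. A minimal sketch, with an assumed 2% threshold for illustration (real specifications set their own ratios):

```python
# Illustrative lot-acceptance check for secondary screening.
# The 2% threshold used in the examples is an assumption, not a
# value from any screening specification.

def lot_accepted(tested: int, failed: int, max_ratio: float) -> bool:
    """True if the lot's screening failure rate is within the allowed ratio."""
    if tested <= 0:
        raise ValueError("no parts tested")
    return failed / tested <= max_ratio

print(lot_accepted(tested=500, failed=4, max_ratio=0.02))   # True  (0.8% failed)
print(lot_accepted(tested=500, failed=15, max_ratio=0.02))  # False (3.0% failed)
```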
4.5 Failure analysis of components
When failures of critical and important components are found during commissioning and environmental stress screening, or when components fail repeatedly in use without the cause being found, failure analysis should be performed. Failure analysis dissects the failed components and uses physical and chemical techniques to find the failure mechanism, then proposes improvements to raise component reliability. It includes failure investigation, failure mode identification, failure characterization, failure mechanism verification, and proposed corrective actions. There are more than 20 failure analysis techniques for integrated circuits, proceeding from external to internal analysis and from non-destructive to destructive analysis, using optical, chemical, mechanical, and electronic methods.
(3) The RJ1~RJ7 series resistors cannot be used because their ceramic tube is hollow and cracks easily under vibration; they can be replaced by RJK24~RJK26, or by RJ13 and RJ14.
(5) Relays should be metal-packaged and hermetically sealed; parallel derating is not allowed.
(1) Thoroughly master the technical performance of the components used, and strictly control the use of new devices.
(2) Deliberately reduce the working stress (electrical, thermal, mechanical stress) of the component so that the actual applied stress is lower than its specified rated stress. Derating design can refer to GJB/Z35 "Component Derating Guidelines".
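The derating rule in point (2) can be checked mechanically: the actual applied stress must stay below the rated stress multiplied by a derating factor. The 50% power-derating factor used below is an assumption for illustration, not a value taken from GJB/Z35:

```python
# Illustrative derating check: applied stress <= derating factor x rated
# stress. The 0.5 factor in the examples is assumed, not from GJB/Z35,
# which tabulates factors per component type and stress category.

def within_derating(applied: float, rated: float, factor: float) -> bool:
    """True if the applied stress respects the derating limit."""
    if not 0.0 < factor <= 1.0:
        raise ValueError("derating factor must be in (0, 1]")
    return applied <= factor * rated

# Example: a resistor rated for 0.25 W, derated to 50% of rated power.
print(within_derating(applied=0.10, rated=0.25, factor=0.5))  # True
print(within_derating(applied=0.15, rated=0.25, factor=0.5))  # False
```

The same comparison applies to electrical, thermal, and mechanical stresses; only the rated value and the factor change.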
(3) In order to prevent thermal failure of electronic components, effective thermal design and environmental protection design must be adopted during the layout and installation of components.
(4) To prevent device failures caused by static electricity, anti-static measures must be taken in the design and use of devices: for example, an anti-static protection network is added at the device input, and anti-static precautions are taken at the operation site. (5) Pay attention to the correct use of instruments and meters during debugging; for example, instruments should be properly grounded. (6) Store components properly, e.g., maintain appropriate temperature and humidity and prevent exposure to harmful gases.
UI Design with Adobe Illustrator: Discover the ease and power of using Illustrator to design Web sites.
Create high fidelity prototypes for complex websites and applications with the easy-to-learn and super-efficient vector capabilities of Illustrator and make the fear of client changes a thing of the past. Whether you’re a seasoned Photoshop veteran, a budding designer, or someone who simply has a good eye and artistic vision, this book will show you how to produce mockups and UI elements in a creative and productive way.
Strongly of the opinion that design should not happen solely in a browser, Rick Moore demonstrates how to design mockups and UI elements with Illustrator in a way you may not have realized was possible. Learn which tools are best suited to a UI design workflow and how to customize Illustrator in a way that fits your style and flow. Rick provides expert guidance throughout the process from the initial planning stages to finalizing and sharing your work with clients and others.
[Only line fragments of this book excerpt survive; the recoverable figure captions, steps, and tips are collected below.]
I.A My very first client website, courtesy of the Wayback Machine (www.archive.org/web/web.php).
I.B Photoshop and Illustrator look and feel very similar. That helps a lot when it comes to learning a new tool.
Vectors are inherently faster because computer processors can execute math instructions a lot faster than they can draw pixels.
1.4 This screen design uses color and typography to create a visual hierarchy.
1.5 A bevy of typography tools gives users the power to create professional results.
1.7 Get the right color for your project with Illustrator’s extensive color tools.
1.9 You can save color swatches in groups to create different color schemes.
1.11 A symbol is an object that can be reused multiple times.
1.12 Creating grid systems is really easy with Illustrator.
2.5 Click the Close button to put the tool flyout away.
2.8 Selected points are solid; unselected points are hollow. Add to or remove points from the selection by holding the Shift key as you click.
Mentioned: the excellent book Vector Basic Training (New Riders Press).
2.12 Illustrator was built to create illustrations like this.
Pen tool: 1. Select the Pen tool and click to set a point. 2.14 Click to close the path. 2.15 Click and drag the mouse to create Bézier curves.
Illustrator has several useful tools for creating basic shapes (also known as primitives). Use the Shift key while drawing to constrain each shape to a square or circle.
Line tool: 1. Click to set the starting point. 2. Drag the mouse to draw the segment.
While drawing a star, press the Up Arrow key several times to add points to the star (2.22).
2.24 Point type is free of any boxes or borders.
2.25 Area type stays within a user-specified boundary.
The Eyedropper tool copies appearance attributes, including any effects, from one object to another; double-click the tool to set selection preferences. Toggle fill and stroke by pressing X; the active attribute sits on top (2.35). Press Shift-D to toggle between drawing modes.
2.39 Draw Inside mode places new objects inside a user-specified shape as you create them.
"I set up a custom workspace that I find optimum for UI design (2.51)"; save your own by choosing Manage Workspaces from the workspace switcher.
3.4 With the Application frame enabled (Window > Application Frame), you can more easily focus on your task.
3.5 An example of an entire app mockup, including alternate layouts and test graphics, in one file.
New document: 1. Choose File > New (Cmd+N/Ctrl+N). 2. Select the Web profile. 4. Change Preview Mode to Pixel. (Note: see page 80 for more on pixel precision. Custom profiles live in Adobe\Adobe Illustrator CS6 Settings\[language]\New Document Profiles.)
A document requires at least one artboard at all times.
3.10 Click to place a new artboard. 1. Select the Artboard tool (Shift-O).
3.13 Click an existing object with the Artboard tool to create a new artboard based on its boundaries.
3.14 Name your artboards for maximum efficiency.
3.16 …pages sometimes get removed, leaving holes in the page flow.
To reset the zoom level to 100%, double-click the Zoom tool or press Cmd+1/Ctrl+1.
Show rulers with View > Rulers > Show Rulers (Cmd+R/Ctrl+R); convert a drawn line with Guides > Make Guides (Cmd+5/Ctrl+5); toggle Smart Guides with View > Smart Guides (Cmd+U/Ctrl+U).
4.5 The key object is indicated by a bold selection outline.
4.10 Measurement labels provide in-context information about the dimensions and position of objects.
Building a grid: 4.13 Center the rectangle on the artboard before you create the grid. 3. Choose Object > Path > Split Into Grid. 4.14 Create a 24-column grid with 10-pixel-wide gutters. 4.15 A standard 24-column grid. 4.16 Select the top anchor points of the grid with the Direct Selection tool. 4.17 Move the anchor points to the edge of the artboard. 5. Press Cmd+S/Ctrl+S to save your work.
How do you survive on half the income you once earned? For Karen Benge of Ontario, it’s all about eliminating small expenses.
A report “Making Ends Meet” by the California Budget Project released last week highlights challenges families face in meeting basic living expenses such as rent, food, child care and transportation.
It also calls for policymakers embroiled in budget debate to support public benefit programs that help working families.
More often than not, Benge, a chemical soap saleswoman who only a few years ago was making $70,000 a year, is forced to hit up local food banks to get her through the end of the month.
According to the study, the hourly wage families need to earn to sustain “a modest standard of living” is three times the state’s minimum wage.
A single parent living in San Bernardino County needs to earn at least $28 per hour to support a modest standard of living – $31 in Los Angeles County.
Everard has been looking for a job for the past two months without much success. Lately, she was forced to sleep in her car.
The CBP estimated that a basic family budget included necessities such as child care, food, transportation, housing and health care.
It also assumed that families were renters instead of owning their home, and that health coverage was purchased privately with no assistance from an employer.
The estimates did not include savings for retirement or college tuition.
In families where both parents work, each has to rake in $18 per hour or $75,000 per year.
Families where just one parent has a job are able to make do with less per year – $54,000 – but that means the working parent has to earn at least $25 per hour.
Vivian Saucedo, a mother of four, never had a job that paid more than $10 per hour. A recent split with her spouse means she will have to rely on food stamps to feed her children.
Like Benge, she came to Inland Valley Hope Partners to pick up cereal and canned vegetables.
CBP acknowledged that many Californians support their families on less than the standards estimated in the report – they either get health coverage from their employer or leave their children with a family member while they work.
Others like Saucedo rely on public programs such as state-subsidized child care or Medi-Cal in order to make ends meet.
Assembly Democrats have proposed plugging the $19.1 billion deficit by imposing a tax on oil production, eliminating planned corporate tax breaks and borrowing from the state’s recycling program.
Senate Democrats want to raise taxes on vehicles, alcohol, corporations and income.
Gov. Arnold Schwarzenegger is calling for eliminating CalWorks, the state’s welfare program, and cutting millions from Medi-Cal; the state’s In-Home Supportive Services program for the elderly and disabled; and Healthy Families, the state’s children’s health insurance program.
Republicans are largely backing the governor’s proposal. |
Chemicals in ubiquitous Mediterranean plants may hold the key to delaying diseases of aging such as Alzheimer's and dementia.
The use of tablet computers is both a safe and a potentially effective approach to managing agitation among patients with dementia, a new pilot study suggests.
People who live close to high-traffic roadways face a higher risk of developing dementia than those who live further away, new research has found.
Can Paint Strokes Help Identify Alzheimer's? Artists could be diagnosed early.
Our memory changes with age, so that we may have a memory slip on a trip to fetch something from the next room, but we’re still able to recall important events from history with great detail. But why?
Tests that measure the sense of smell may soon become common in neurologists’ offices.
Breakthrough findings demonstrate a possible target and potential drug treatment to restore memory loss and extend life span in mice with neurodegeneration.
In a 20-year Finnish study, men who used a sauna 4-7 times a week had a significantly lower chance of being diagnosed with dementia than those who only used it once a week. |
In the future, wastewater treatment plants can have a broader function by being converted into biorefineries.
Wastewater treatment plants around the world handle large amounts of sewage, which today is used to produce biogas. In the future, treatment plants could take on a broader role by being converted into biorefineries that produce everything from biogas to new materials, according to new research out of the University of Borås, Sweden.
Scientists in Resource Recovery at the university plan to validate a new concept in which they produce and extract fatty acids using membrane bioreactors, which in turn are used to produce the substances acetic acid and hydrogen.
Today, food waste and sludge from wastewater treatment plants are being used to produce biogas, but biogas competes with other energy types, such as wind power and solar energy.
"But with our technology, we can develop a platform so that the treatment plants can be transformed into refineries where different chemical substances can be extracted and used to produce different types of materials. Fatty acids are a kind of intermediate product," says Mohammad Taherzadeh, professor in Bioprocess Technology, who heads the project.
Fatty acids have a function similar to that of sugar in various petrochemical and biological processes, namely, as sustenance for the microbes used in the processes. The successful production and extraction of fatty acids allows for further processing of these substances to other products, such as bioplastics or butanol, say the researchers. The amount of sewage sludge remaining in the process can be used as substrate in a biorefinery.
"Another feature of the method is that the carbon contained in the sludge can be extracted, creating a circular process in which the carbon is used to remove nitrogen and phosphorus from the wastewater. These are nutrients we do not want in our waterways, as they lead to eutrophication. Today, treatment plants purchase large quantities of carbon for this process."
Recently, Modelistica has been conducting an evaluation and feasibility study to determine the suitability of XML and Java for the representation and manipulation of Transport and Land Use (TLU) modeling information as used in urban and regional planning.
XML, the Extensible Markup Language, is highly publicized as the replacement for HTML for describing document content on the Web. But markup languages have a long history and have applications far beyond those of the Web. They are currently being used for information description and exchange in such diverse areas as finance and trade, mathematics, chemistry, biology, knowledge representation, genealogy, software package description and distribution, CASE, graphics, and more.
HTML is the most popular and most widely known use of markup. XML was designed by the World Wide Web Consortium—often referred to as W3C—to enable the use of the Standard Generalized Markup Language (SGML) on the Web. XML is a public standard: it is not a proprietary development of any single company. The version 1.0 specification was accepted by the W3C as a formal Recommendation on Feb. 10, 1998. The XML Web page at the W3C site is the entry point to a sea of information about XML, SGML, and related technologies and applications.
XML is an abbreviated version of SGML—the international standard for defining the structure and content of electronic documents. XML eliminates the more complex and unused features of SGML making it much simpler to implement, but still compatible with its ancestor. XML is actually not a single language but a meta-language. XML can describe both the syntax of specific classes of documents, and their contents. The portion of XML that determines document syntax is named the Document Type Definition language (DTD). XML supports multiple DTDs.
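To make the meta-language idea concrete, here is a small hypothetical document type; the element and attribute names are invented for illustration and do not come from any real vocabulary. The DTD portion declares the grammar, and the markup that follows is an instance of it:

```xml
<?xml version="1.0"?>
<!DOCTYPE network [
  <!ELEMENT network (zone+)>
  <!ELEMENT zone EMPTY>
  <!ATTLIST zone id ID #REQUIRED
                 population CDATA #IMPLIED>
]>
<network>
  <zone id="z1" population="5000"/>
  <zone id="z2" population="7500"/>
</network>
```

A validating parser would reject this document if, say, a `road` element appeared inside `network`, because the DTD does not declare it.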
From the XML perspective, HTML is just one of these document types—the one most frequently used on the Web. It defines a single, fixed type of document with markups that let you describe a common class of simple office-style reports. Because it provides only one way of describing information, HTML is overburdened with dozens of interesting but often incompatible inventions from different manufacturers. In contrast, XML allows the creation of markup languages customized to the needs of specific applications—which is what brought me to investigate the possibilities of defining a markup language for urban planning. If you're interested, a long list of current and under-development applications of SGML and XML can be found at OASIS' XML Web page.
My first experiments showed that XML files for our area of interest, Transport and Land Use (TLU) information, would be large. One to four megabytes looks typical. So it was important that parsers perform well in terms of both speed and memory usage. Defining an XML document type for TLU is among the project's long-term goals, so attention was also given to the ability to parse XML Document Type Definitions and validate XML documents against them. I also considered implementation of current and upcoming XML standards.
All the parsers and tools reviewed here are available on the Web. As of this writing, no commercial parsers were available, and most parsers carried some label indicating the publisher wasn't claiming production quality. These products (with the exception of Microsoft's original parser) are freely available for download. They are releases in all but name.
The XML standard classifies documents into one of three categories: not well formed, well formed but invalid, and valid. A document is well formed when it meets all the syntactic and semantic requirements described in the XML standard. Validity is defined only with respect to a Document Type Definition (DTD): a well formed document is valid when a DTD is provided and the document complies with the grammar the DTD describes.
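The well-formed/valid distinction is easy to see in code. The sketch below uses the JAXP API (`javax.xml.parsers`) that ships with modern JDKs, not any of the parsers reviewed here, and the `network`/`zone` element names are invented. A validity error such as an undeclared element is reported through the `ErrorHandler` rather than stopping the parse:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.ErrorHandler;
import org.xml.sax.SAXParseException;
import java.io.ByteArrayInputStream;

public class ValidDemo {
    // Returns true when the document is both well formed and valid against its DTD.
    static boolean isValid(String xml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setValidating(true); // validation is off by default
        DocumentBuilder b = f.newDocumentBuilder();
        final boolean[] ok = { true };
        b.setErrorHandler(new ErrorHandler() {
            public void warning(SAXParseException e) { }
            public void error(SAXParseException e) { ok[0] = false; }      // validity errors
            public void fatalError(SAXParseException e) { ok[0] = false; } // well-formedness errors
        });
        b.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return ok[0];
    }

    public static void main(String[] args) throws Exception {
        String dtd = "<!DOCTYPE network [<!ELEMENT network (zone*)><!ELEMENT zone EMPTY>]>";
        System.out.println(isValid(dtd + "<network><zone/></network>")); // valid
        System.out.println(isValid(dtd + "<network><road/></network>")); // undeclared element
    }
}
```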
Furthermore, documents can be stand-alone or can have references to external information, and the XML standard allows for special treatment of external definitions by non-validating parsers.
I used James Clark's XMLTest test suite to evaluate how well the parsers conformed to the XML definition. The XMLTest suite is composed of several hundred small XML files and DTDs, each one testing for conformance with a specific aspect of the XML standard. The tests range from simple checks to highly contrived entity definitions and expansions. The test suite also includes normalized versions of all the valid files so that they can be compared with the output of the targeted parsers.
For validating parsers, I added yet another test. I introduced a simple but obvious error in the first lines of one of the large files I used in the performance test. The name of one of the elements was changed to one that did not appear in the DTD and was, hence, invalid. This was a trivial test and only one of the validating parsers failed it.
XML namespaces provide a simple method for qualifying names used in XML documents by associating them with namespaces identified by URI. Namespaces are intended to avoid problems of recognition and collision in documents with fragments of different types. An example is the case of a small database described in XML Data, embedded in an HTML document.
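A minimal sketch of namespace processing, again through the modern JAXP entry point rather than any parser reviewed here; the `tlu` prefix and its URI are invented for illustration. Turning on namespace awareness lets the parser report the URI an element's prefix is bound to:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import java.io.ByteArrayInputStream;

public class NamespaceDemo {
    // Returns the namespace URI bound to the root element's prefix.
    static String rootNamespace(String xml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true); // off by default for backward compatibility
        Element root = f.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")))
                .getDocumentElement();
        return root.getNamespaceURI();
    }

    public static void main(String[] args) throws Exception {
        // Elements from two vocabularies can coexist without name collisions.
        String xml = "<tlu:network xmlns:tlu=\"http://example.org/tlu\"/>";
        System.out.println(rootNamespace(xml)); // http://example.org/tlu
    }
}
```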
The XML Linking Language (XLink) consists of constructs that may be inserted into XML documents to describe links between objects. XLink can describe the simple unidirectional hyperlinks of today's HTML as well as more sophisticated multi-ended and typed links. The XML Pointer Language (XPointer) allows hyperlinks that reference arbitrary document fragments.
The Namespaces, XLink, and XPointer specifications are currently at the "working draft" level, so they were not included in the evaluation. These technologies are important, though, so you'll find mention of the parsers that implement the current draft versions of the standards.
The Document Object Model (DOM) is a language-neutral API that allows programs to dynamically access and update the content, structure and style of documents. The DOM Level 1 Specification is already a publicly available W3C Recommendation.
To evaluate the DOM compliance of the libraries, I wrote a small program to test the interfaces defined in the DOM Java binding as they appear in the org.w3c.dom package (see Listing 1). The program was run using each library in turn.
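Listing 1 is not reproduced here, but the flavor of a DOM navigation program can be sketched with the standard org.w3c.dom interfaces. This sketch uses the JAXP entry point found in modern JDKs, and the `network`/`zone` elements are invented stand-ins for real TLU data:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;

public class DomDemo {
    // Parse into an in-memory DOM tree, then navigate it via the Level 1 interfaces.
    static int totalPopulation(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList zones = doc.getElementsByTagName("zone");
        int total = 0;
        for (int i = 0; i < zones.getLength(); i++) {
            total += Integer.parseInt(((Element) zones.item(i)).getAttribute("population"));
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<network><zone population=\"5000\"/><zone population=\"7500\"/></network>";
        System.out.println(totalPopulation(xml)); // 12500
    }
}
```

Because the whole document lives in memory, this style is convenient for random access but costly for multi-megabyte files, which is exactly what the DOM tests below probe.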
The Simple API for XML (SAX) is a standard interface for event-based XML parsing, developed collaboratively by the members of the XML-DEV mailing list (see the Microstar Web site). A SAX-compliant XML parser reports parsing events to the application through callbacks, without necessarily building any internal structures. The application implements handlers to deal with the different events, much as is done in modern graphical user interface (GUI) toolkits such as the Java AWT.
The SAX API makes the parser layer totally independent from other application or library functionality. A particular set of event handlers may be used to build an in-memory representation of an XML document, while a different set of handlers may render the document on the fly. Java packages that implement a SAX driver are in fact interchangeable, at least in theory.
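As a hypothetical sketch of the SAX style (again using the JAXP entry point of modern JDKs rather than a 1998-era driver), the handler below counts elements as parsing events arrive, without ever building a tree:

```java
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;
import java.io.ByteArrayInputStream;

public class SaxDemo {
    // Count elements via SAX callbacks; no in-memory document is ever constructed.
    static int countElements(String xml) throws Exception {
        final int[] count = { 0 };
        DefaultHandler handler = new DefaultHandler() {
            @Override
            public void startElement(String uri, String local, String qName, Attributes atts) {
                count[0]++; // one callback per start tag
            }
        };
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), handler);
        return count[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println(countElements("<net><link/><link/><node/></net>")); // 4
    }
}
```

Swapping in a different handler, one that builds a DOM tree, or one that renders output on the fly, requires no change to the parser itself, which is the interchangeability the article describes.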
Speed and memory usage tests using two large XML files were performed (0.8 and 1.2 MB, respectively) by one of our in-house applications. Each file contains several thousand XML elements nested in a four-level deep hierarchy, and all of the elements have one or more attributes.
For each parser and file, three runs were performed: one without validation, one providing the DTD and enabling validation, and a third run using the same scheme as in the second but introducing a validity error in the first 10 lines of the XML file. The DTD is the same for both files. It consists of 530 lines and uses DTD entity definitions (sort of a DTD macro) moderately.
A separate test was performed on the parsers that provide an Object Model to measure model navigation speed, and memory use. The test consisted of loading a large XML file and querying the object model while constructing yet another application-specific structure. To force navigation of the complete structure, the model's Document object (or its equivalent) was used to write a new XML file out to disk. Because this was a performance and not a compatibility test, changes were made to the test program so it would run with those parsers that didn't implement the DOM Level 1 standard, implemented it incompletely, or had their own proprietary object model. Libraries that would have required non-trivial changes were not tested. This test was performed with validation turned off to minimize parser overhead and focus on object model navigation.
All tests were done using Sun's Java Runtime Environment (JRE) version 1.1.7A on a 300 MHz Intel Pentium II with 128 MB of RAM running Windows 98. The maximum heap space for all tests was set to 64 MB. The programs were compiled using the same version of the Sun javac compiler, with optimizations turned on. Times were measured using an external command line program called from a batch file, so they include the time needed to load the Java VM and any required libraries. Memory usage was obtained by examining the trace of the programs after running them with verbose garbage collection turned on. Note that these tests were not devised as benchmarks that would help determine the split-second fastest parser, nor byte consumption per XML element. They were designed with the intention of exposing problems in parser design that had an obvious impact on performance when working with large XML documents.
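The harness itself used an external timer and verbose GC traces; the sketch below is only a rough in-process approximation of the same idea. It excludes JVM startup time, and the heap delta is approximate since the collector may run at any point:

```java
public class MeasureDemo {
    // Returns { elapsed milliseconds, approximate bytes allocated } for a workload.
    static long[] measure(Runnable work) {
        Runtime rt = Runtime.getRuntime();
        rt.gc(); // encourage a collection so the baseline is meaningful
        long mem0 = rt.totalMemory() - rt.freeMemory();
        long t0 = System.currentTimeMillis();
        work.run();
        long elapsed = System.currentTimeMillis() - t0;
        long used = (rt.totalMemory() - rt.freeMemory()) - mem0; // approximate only
        return new long[] { elapsed, used };
    }

    public static void main(String[] args) {
        long[] r = measure(() -> {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 100_000; i++) sb.append("<e/>"); // stand-in for a parse
        });
        System.out.println("elapsed ms: " + r[0] + ", approx bytes: " + r[1]);
    }
}
```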
The DOM SDK, by Docuverse, is not an XML parser, but a DOM implementation that works on top of any parser that exposes a SAX interface. It is discussed here because the very first tests already showed that it is indeed very simple to combine the DOM SDK with different parsers. Performing the DOM test using the SDK with both the Ælfred and XP parsers shows that these combinations are serious competitors to integrated parsers like Sun's or IBM's.
The DOM SDK is available at the Docuverse DOM SDK page . The license allows free distribution of the binaries (.class and .jar files) but is very restrictive about copying or modification of the source code and documentation.
XP is a "high performance" XML parser produced by James Clark, who was technical lead for the W3C SGML activity group, which produced the first draft of the new XML standard. XP is non-validating, but it checks that documents are well formed and is capable of parsing external entities, including DTDs. The only interface XP provides for applications is a SAX driver, so it qualifies as a lightweight parser. The documentation provided with the parser consists only of the output from JavaDoc. The documentation was too succinct at times and assumes familiarity with SAX.
XP performed in the top tier, along with the parsers from Microstar, IBM, and Microsoft. XP also performed well when combined with the DOM SDK using the DOM test suite. Under this test suite, XP and Ælfred, another lightweight parser, produced almost equivalent results. These two parsers are expected to evolve in different directions in the near future. James Clark's XP emphasizes conformance and will probably evolve into a validating parser, while Ælfred emphasizes efficiency, portability, and fault tolerance, and will probably evolve in that direction without adding the complexity of new features. XP performed well under the XML conformance test, which is not surprising: after all, James Clark himself devised the test suite. The XP parser is free and is available at James Clark's Web site.
Ælfred is a parser that concentrates on optimizing speed and size rather than error reporting. This approach is the most useful for deployment over the Internet. Ælfred consists of only two core class files, the main parser class (XmlParser.class) and a small interface for your own program to implement (XmlProcessor.class). All other classes in the distribution are either optional or demonstrations. At 31 K, Ælfred's JAR file was, by far, the smallest among all the parsers.
Ælfred uses only JDK 1.0.2 features, but testing showed that it runs fine with JDK 1.1.6, 1.1.7A, and 1.2rc1. The documentation claims that the parser is compatible with most character encodings available on the Internet, but no attempt was made to test that assertion.
This parser was designed to be very lightweight, very portable, and very fault tolerant. It will produce correct output for well-formed and valid documents, but it won't necessarily reject every document that is not valid or not well formed. Ælfred will probably never become a validating parser.
Ælfred comes with very complete API documentation in the form of HTML files generated by JavaDoc 1.1. Several simple example projects are also included. This parser was fast in the tests that didn't involve validation, and was able to complete the DOM test when combined with the Docuverse DOM SDK. Ælfred and XP performed almost equally.
The conformance test showed that Ælfred is not as fault tolerant as the documentation suggests. Ælfred generated exceptions for valid documents that were not stand-alone, and went into an endless loop of error reporting for some of them. Ælfred failed to report many documents that weren't well formed.
Ælfred is free for both commercial and non-commercial use and redistribution. The only requirement is that Microstar's copyrights are preserved in derivative source code, and that any modifications are clearly documented. Ælfred can be downloaded from Microstar's site.
MSXML is a validating XML parser produced by Microsoft as part of its Internet Explorer 5 effort. The parser has support for namespaces and is compliant with the XML draft specification of November 1997. The parser provides its own Object Model, which is quite powerful but isn't DOM Level 1-compliant. MSXML does not provide a SAX driver, but drivers are available elsewhere—check out Lars Marius Garshol's Free XML Software page and the Microstar Web site.
MSXML's documentation consists of several sample projects and JavaDoc documentation for the API. The API documentation is nicely laid out, but many of the methods are undocumented in this version. The sample projects include some interesting ones, like an XML viewer applet. Another set of applets can take small databases described in XML Data and lay them out nicely using tables and dynamic HTML. Some of the applets even allow editing of the XML Data information, from changing field values to adding and deleting records.
MSXML was the top performer in terms of both speed and memory usage. The parser performed better than the small SAX-driven parsers in all tests, despite the fact that MSXML always builds an in-memory model of the document and validation was always turned on. In the DOM test, MSXML consumed only half the memory of its closest rival. All this performance fits in a JAR file of just 101 K, which gives the parser the smallest footprint among those that provide an object model. Also note that MSXML's performance is provided through 100% Pure Java code. Whatever the secret is to MSXML's performance, other parsers would do well imitating it.
The DOM test had to be adapted to run with MSXML. The algorithm remained the same, but many declarations and method calls had to be changed. MSXML performed quite well on this test. Its speed and memory performance was better than that of any of the other parsers. The API is not DOM-compliant, but it is as expressive as DOM, so it shouldn't be difficult to make MSXML DOM Level 1 compatible.
On the conformance test, MSXML gave incorrect warnings and errors about many valid documents. The parser also failed to detect many of the documents that were not well formed or invalid. MSXML does not provide a SAX driver, but drivers are available on the Web (as mentioned previously).
MSXML originally didn't work with SUN's JDK 1.1.6 or 1.1.7A, because two locations in the library's initialization code assumed that the JDK version would be convertible to a float value. The Integrated Development Environment (IDE) used to construct the tests suites promptly pointed me to the faulty lines, so I fixed them. Oddly, MSXML reported an invalid document with JDK 1.2 on a test that ran to completion with JDK 1.1.7A.
Microsoft entered into an agreement with Data Channel for further development of the parser. At this writing, MSXML had been removed from the Microsoft Web site. Unfortunately, the current beta of the parser provided by Data Channel is evaluated below MSXML in all regards. Fortunately, the license Microsoft provided with its version 1.9 parser is liberal enough that you'll likely be able to find copies of the original, or of its heirs elsewhere.
The XML parser from Data Channel (DCXML) is derived from Microsoft's. Surprisingly, the package layout and the methods available in the DC parser are very different from those in MSXML. DCXML performed well below most other parsers in all tests for speed and memory use. Even though DCXML is in its first beta, such departures from the base code's layout and performance were unexpected.
The documentation provided with DCXML consists of the output of JavaDoc over a set of Java files with absolutely no JavaDoc comments. As such, the documentation is useful for browsing through the source code and little more.
DCXML performed well below the other parsers in all tests in terms of speed, but it was able to complete tests that Sun XML couldn't when the Sun parser ran out of memory.
Object model tests on DCXML were not performed because it lacked the equivalent of the DOM method getElementsByTagName(). I could perform that test on MSXML because it provides the same functionality through an Element.getChildren().item() method.
In the conformance test, DCXML failed to recognize about 15% of valid documents, generating null pointer exceptions for several of them. DCXML had only a few problems with documents that were not well formed. Most of the errors occurred in documents that had references to external entities.
The licensing policy for DCXML is currently unknown. The license that was bundled with the downloaded parser is an exact copy of the liberal one that came with MSXML 1.9. A different version of the license in Data Channel's Web site states that the parser is free for commercial use as long as some value added is provided. Yet another version of the licensing policy was received via email, stating that the parser was free only for non-commercial use. This parser is in an early beta state, and its characteristics and the related policies may change considerably by the time it's released.
The Sun XML Library consists of a fast parser with optional validation. It has a SAX interface and the library provides an object model that is DOM Level 1 compliant. Sun's XML Library is labeled "Early Access 1", which means it's still under construction.
The parser's API documentation was generated by JavaDoc 1.2 and it's very complete. SUN also provides several sample programs that highlight library features such as DOM, namespace support, and JavaBean support. The set of sample programs serve well as a tutorial about the libraries' capabilities.
As in other libraries built around the SAX API, the parser and the object model are completely independent. SAX compatibility enables you to use the Sun parser core with other applications, including other DOM implementations like Docuverse's DOM SDK. The class in charge of building the in-memory object model, the DocumentBuilder class, implements the SAX DocumentHandler interface, which enables the use of Sun's object model with other SAX-compliant parsers, like XP.
Sun's parser performed quite well in the tests that did not involve validation. On the tests where we included a DTD to provide validation, times were comparable to those of the fastest parsers but memory consumption skyrocketed. With validation enabled, the parser failed with an "out of memory" exception and was not able to complete the test with the 1.2 MB XML file. The test that involved a file with an invalid element on the first few lines consumed as much memory as when the file was parsed entirely.
The parser also performed very poorly on the DOM navigation test, taking more than 25 minutes to complete. The results made Sun's DOM implementation the worst performer. On the conformance test, Sun XML failed to recognize just one of the valid documents.
XML4J is a validating XML parser produced by the IBM alphaWorks project. The parser is compliant with the XML 1.0 standard, it has support for namespaces and DTD manipulation and it implements DOM Level 1. XML4J also provides a SAX driver. XLink/XPointer support are also provided but were not tested.
The documentation for XML4J consists of a tutorial, complete API documentation in HTML, and several sample projects. The tutorial covers all relevant features of the library, with special attention to the library's unique features. The API documentation was generated by IBM's own implementation of JavaDoc. The API descriptions are very complete, and the HTML layout has a quality comparable to that of documentation generated by JavaDoc 1.2.
One of XML4J's unique features is the possibility of setting up event handlers and filters at the object model level. This feature enables you to build an object model out of only selected parts of a complex XML document. But XML4J's functionality comes at a price. The JAR file that contains IBM's library is 460 K, which is four times the size of the libraries from Sun and Microsoft, and more than twice the size of the libraries for DCXML and XP.
XML4J was among the top performers in the lot, falling only behind the MSXML parser overall, and behind the lightweight parsers only in the tests that did not involve validation. IBM's parser worked fine on the DOM test. Memory use was elevated when compared to that of MSXML and the lightweight parsers on the simple tests, but was well below that of other parsers in the validation and DOM tests. Unlike Sun's library, which behaved like the lightweight parsers when no object model was requested, IBM's parser consumed the same amount of memory when used as a SAX driver and when a DOM model was requested.
This parser went through the XML conformance test with very few problems. It failed to recognize only three documents as not being well formed. XML4J is distributed with full source code and a free commercial license. The parser is available for download at IBM's alphaWorks' Web site.
SXP is an XML library produced by Loria in France, as part of their ambitious XSilfide project. SXP is implemented as a SAX driver and has support for DOM Level 1, namespaces, XLink, and XPointer. The documentation that comes with the library consists of JavaDoc generated HTML pages, but the comments are few and succinct. The documentation was unhelpful in trying to discover the unique characteristics of the library.
SXP's performance was poor in terms of both speed and memory use. SXP was notably slow in the tests that involved validation, taking at least 15 minutes to complete any of them. But poor performance is to be expected of a parser in beta; optimizations are best left until the end of the development process. On the plus side, the DOM test ran unaltered with the SXP library.
On the conformance test, SXP failed to recognize more than 20 documents as valid, and it failed to detect the errors in a like number of documents that were not well formed or were invalid. SXP is free for academic, research, and non-commercial use. The library is available at Loria's Web site.
These parsers evolve at breakneck speed, so nothing definitive can be concluded about their current performance or about their limitations, if any. As an example, when evaluations began, IBM's XML4J at version 1.0.4 was one of the worst performers in terms of speed, memory use, and correctness. Things changed considerably with the 1.1.4 release, which turned XML4J into one of the best parsers we reviewed. These parsers will certainly continue to improve, or die, and do so quickly. Our intention was to help you evaluate the current crop of parsers.
As the evaluation results show, different parsers excel under different requirements. The lightweight Ælfred from Microstar is the obvious choice when deploying simple applets on the Web. When XML document validation is required IBM's XML4J and Microsoft's MSXML are probably the right choices, with excellent standards compatibility being in favor of the former, and outstanding speed being the latter's claim to fame. For applications like the one we're working on, DOM Level 1 compliance is of primary concern because good compliance with that standard makes the parsers almost plug-and-play, and hence completely replaceable.
Our evaluation made it clear that combining XML and Java is definitely viable, that performance in this combination does not have to be an issue, and that conformance with the standard is rapidly improving.
There are several possible categories. The first is that of very lightweight and very forgiving parsers like Ælfred, for use on the Web. The second is that of welterweight parsers that implement most core standards and provide a reasonable balance between performance, size, and features (I expect Sun's parser to evolve in this direction). Then there are heavyweight parsers, which implement all relevant standards, are very strict about validation, and provide a wealth of additional features at the expense of some speed and higher memory requirements. I suspect the lightweight and welterweight parsers will converge on an ideal feature set. The heavyweights will remain, and a totally different category will result from the integration of parsers with other types of programs, like Web clients and servers. All the lights are green on XML.
Psychopaths DO Have Empathy Switch After All?
Empathy in the mind of the psychopath is studied.
You know how there are a lot of elderly couples out there who look a lot like each other? A lot of them do because they're cousins, which I suspect was more okay back in olden times, when you had to stock the farm with as many people as you could and it didn't really matter how divergent their genetic lines were. There are also plenty of elderly married couples who look alike but aren't related in any way, and it's these couples who have long puzzled researchers. A 1987 study that sought to get to the bottom of this was recently sent to me by Stuff They Don't Want You to Know's Ben Bowlin, who comes up with good stuff pretty much all the time.
There's long been a line of thought that the explanation for the altogether odd phenomenon of contagious yawning -- feeling the overwhelming urge to yawn after observing another person yawn -- is found in the empathy of the individual observing the yawn. The idea goes that the more empathetic among us are the most susceptible to contagious yawning and research shows this hypothesis tends to hold up among humans and higher apes.
Mirror Neurons: Are there people who feel others' pain?
People with a condition known as mirror-touch synesthesia literally feel the pain of others -- but why? Josh and Chuck trace the cause of this condition to one culprit: the mirror neuron. Tune in to learn more about mirror neurons and neuroscience.
Music has a real effect on us. Why, I'm listening to music right now (Devo at the moment) and it'll probably help shape this post. Case in point: There's a new study out in the journal Personality and Social Psychology Bulletin that covers how songs with prosocial lyrics have a prosocial impact on their listeners. Take, for instance, "Do They Know It's Christmas?" Remember Band Aid? The all-star group recorded that song 25 years ago to raise money for famine-stricken African nations. And it worked; the single raised over 8 million pounds. That's pretty prosocial.
Yawning is contagious, but why? Check out the leading theories on contagious yawning and empathy in this HowStuffWorks podcast. |
Background: Methamphetamine use and dependence are a serious public health problem worldwide. In Iran, about 50% of psychiatric hospital beds are occupied due to psychosis or other mental disorder complications related to methamphetamine dependence, which seriously affects patient admission to psychiatric hospitals.
Objectives: The current study aimed to evaluate the effectiveness of modafinil for treating patients with amphetamine dependence.
Patients and Methods: In the current clinical trial, 50 male patients with amphetamine and methamphetamine dependence, who had been referred to the addiction treatment clinic of Baharan psychiatry hospital in Zahedan, Eastern Iran, were studied. The participants were followed for 12 weeks. Random sampling was used, and patients were divided into modafinil and placebo groups based on block permutation. To evaluate amphetamine/methamphetamine consumption, urinary screening for methamphetamine was conducted at the beginning of the study and every week during the study period. Drug craving and level of dependence were measured by the Visual Analogue Scale of Craving (VAS) and the Addiction Severity Index (ASI), respectively. At the end of the follow-up period, data were analyzed using the t-test and Chi-square test in SPSS ver. 18.
Results: The mean age of the subjects was 29.5 ± 6.4 years. The results of urinary screening for methamphetamine were positive for 52.8% and 55.1% of the subjects in the modafinil and placebo groups, respectively. The mean scores of drug craving were 76.2 ± 9.0 and 81.0 ± 8.2 for the modafinil and placebo groups, respectively (P = 0.064). The mean reductions in dependence level scores were 5.6 ± 2.7 and 2.0 ± 1.1 for the modafinil and placebo groups, respectively (P = 0.001).
Conclusions: The results of the current study showed that modafinil was well tolerated but not effective in reducing the level of consumption (number of negative urinary tests for amphetamine/methamphetamine). Modafinil was, however, effective in reducing the severity of addiction to amphetamine/methamphetamine.
Abstinence from methamphetamine after a long and continuous period of consumption results in a dysphoric syndrome: weakness, inability and lethargy, anxiety, nightmares and sleep disturbances, headache, profuse sweating, muscle and stomach cramps, and increased appetite (6, 11). The withdrawal symptoms reach their peak in two to four days and decline within a week. The most serious withdrawal symptom is depression, which may be accompanied by suicidal thoughts and behaviors (7).
The fact that there are no accepted medical therapeutics for many symptoms of methamphetamine consumption may be due to a lack of knowledge regarding the underlying cellular and molecular mechanisms of the psychosis that results from addiction to and dependence on methamphetamine (9). Methamphetamine pharmacotherapy is still in its early stages, and no clear evidence of treatment effectiveness has been observed (3, 12). New studies have been conducted on the effectiveness of modafinil to treat amphetamine dependence (13-16), and some reported its effectiveness in patients addicted to amphetamine and methamphetamine (13-15). Modafinil has alpha-1 adrenergic properties and increases the level of wakefulness, but it is chemically and pharmacologically different from stimulants of the central nervous system (CNS); its exact mechanism of action is still unknown. There are several reasons to use modafinil to treat methamphetamine dependence: its stimulant properties could alleviate some stimulant withdrawal symptoms; it attenuates reinstatement of methamphetamine self-administration in animal testing; it has lower abuse potential than methylphenidate or amphetamine; it improves cognition and mood; and it has been used in trials of treatment for cocaine dependence and proved safe and well tolerated in almost all studies (13). Modafinil does not cause dependence and is associated with a very low incidence of symptoms such as headache, vomiting, anger, anxiety, insomnia, nasal allergic inflammation, diarrhea, backache, dizziness, indigestion, flu, dry mouth and anorexia; most patients use it with no problems (13, 15). Some other studies have not confirmed its effectiveness (12).
Given the increase in amphetamine/methamphetamine abuse and dependence, the low and contradictory effectiveness of drug treatments for amphetamine dependence in previous research, and the need for more research on treating amphetamine/methamphetamine dependence (3), the current study aimed to evaluate the effect of modafinil on amphetamine dependence treatment.
The current double-blind clinical trial was conducted on 50 male patients, aged 18 to 65 years, addicted to amphetamine or methamphetamine and on methadone maintenance treatment (MMT). Patients on MMT with a positive urinary amphetamine-methamphetamine test were selected and assessed for major mental disorders with the Mini International Neuropsychiatric Interview (MINI). Selected patients were then randomly assigned to two groups based on block permutation. First, the aim of the study was clearly explained to all subjects, and they signed written informed consent. The subjects could leave the therapeutic program whenever they wished.
Inclusion criteria were: methamphetamine dependency based on the Diagnostic and Statistical Manual of Mental Disorders, 4th edition, text revision (DSM-IV-TR) criteria; male gender; 18 to 65 years of age; a positive urine test for amphetamine and methamphetamine; and no contraindication for the use of modafinil.
Exclusion criteria were: abuse of drugs other than nicotine and methadone; a history of any other Axis I psychiatric disorder except depression; medical conditions that may interfere with the use of modafinil; suicidal thoughts or aggression; simultaneous participation in another study; and dropout from the current study. Subjects who missed six consecutive doses, showed threatening or aggressive behavior, or had a drug overdose were excluded from the study.
This study was done on patients aged 18 to 65 years, who were addicted to amphetamine or methamphetamine that referred to Baharan Psychiatry Hospital in Zahedan, Eastern Iran, from 2015 to 2016.
Fifty individuals who met the DSM-IV-TR criteria for methamphetamine dependence participated in this twelve-week trial. The subjects were randomly divided into two groups, modafinil and placebo. The participants received 200 mg of modafinil or the placebo daily until the end of week 12. Blood pressure monitoring and evaluation of adverse effects were performed weekly. Urinary screening for methamphetamine and drug craving scoring were conducted at the beginning of the study and weekly thereafter. Each subject should thus have had 12 laboratory results for amphetamine-methamphetamine urinary screening; non-cooperation with the urinary screen was considered a positive test.
Our primary outcome was methamphetamine craving. To evaluate the level of drug craving, the Visual Analogue Scale of Craving was used and the subjects were asked to score their drug craving level from 0 to 100; 0 was the lowest and 100 was the highest level of drug craving (17).
The Addiction Severity Index (ASI) is a structured interview to evaluate the severity of addiction, which assesses medical, occupational, legal and psychological status and also the level of drug used by the patient; higher score means higher levels of addiction severity (18, 19). The questionnaire was completed during weeks zero, six and twelve of the study.
To assess subjects for mental disorders, the Mini International Neuropsychiatric Interview (MINI) was used, which contains 16 sections, each including a certain number of yes/no questions. Subjects with concurrent diseases were identified by the MINI questionnaire and excluded from the study. The reliability and Cronbach’s alpha coefficients were 0.76 and 0.89, respectively (20).
Generation of randomization codes was conducted by permuted randomization blocks using the Excel software. Randomization was performed by an independent person, who was not involved elsewhere in the trial. Concealment of allocation was performed using sequentially numbered, sealed, opaque and stapled envelopes. Separate people were responsible for generation of randomization codes, treatment allocation and interviewing. The patients, research investigators and interviewers were all blinded to the treatment allocation. Modafinil and placebo were completely identical in their size, color, shape, texture and odor.
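Permuted block randomization of the kind described above can be sketched as follows (this is an illustrative Python reimplementation, not the Excel procedure the authors used; the block size of 4, the fixed seed, and the group labels are assumptions). Each block contains equal numbers of the two treatments and is shuffled internally, so the two arms stay balanced throughout enrolment:

```python
import random

def permuted_block_randomization(n_subjects, block_size=4, seed=42):
    """Assign subjects to 'modafinil'/'placebo' using shuffled balanced blocks."""
    assert block_size % 2 == 0, "block must hold equal numbers of each arm"
    rng = random.Random(seed)  # fixed seed so the allocation list is reproducible
    assignments = []
    while len(assignments) < n_subjects:
        # Build one balanced block, then permute it.
        block = ["modafinil"] * (block_size // 2) + ["placebo"] * (block_size // 2)
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_subjects]

codes = permuted_block_randomization(50)
print(codes.count("modafinil"), codes.count("placebo"))
```

After any complete block the arms are exactly balanced, and truncating to 50 subjects can unbalance them by at most half a block, which is why block designs are preferred over simple coin-flip randomization in small trials.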
Differences between the groups were reported as mean differences (95% confidence intervals (95% CI)). All analyses were based on the intention-to-treat sample and were performed using the last observation carried forward (LOCF) procedure. Comparison of score change from baseline to end point between the two groups was done using the t-test. P < 0.05 was considered significant.
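The last observation carried forward (LOCF) procedure used for the intention-to-treat analysis fills each subject's missing visits with the most recent observed value, so dropouts still contribute to the end-point comparison. A minimal sketch (the weekly craving scores are made-up illustrative numbers, not trial data):

```python
def locf(scores):
    """Carry the last observed value forward over missing (None) visits."""
    filled, last = [], None
    for s in scores:
        if s is not None:
            last = s
        filled.append(last)  # stays None only if nothing has been observed yet
    return filled

# A subject who dropped out after week 3 of a 6-week schedule:
weekly = [80, 76, 71, None, None, None]
print(locf(weekly))  # [80, 76, 71, 71, 71, 71]
```

With every subject's series completed this way, the baseline-to-endpoint change scores can be compared between groups with an ordinary two-sample t-test, as the authors describe.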
The mean age of the subjects was 29.5 ± 6.4 years. Subjects were evaluated based on their marital and occupational status; most were married and self-employed. There was no statistically significant difference between the two groups (P > 0.05). One participant in the modafinil group was excluded from the study in the eighth week, and two subjects in the placebo group were excluded in the seventh and eighth weeks due to simultaneous opioid use. Hence, the study was completed with 24 and 23 subjects in the intervention and placebo groups, respectively (Figure 1).
Urinary screening for amphetamine/methamphetamine was conducted every week, and 288 and 276 test results were registered for the modafinil and placebo groups, respectively. The mean numbers of positive urinary screenings for the modafinil and placebo groups were 6.3 ± 1.1 and 6.6 ± 0.8, respectively; no statistically significant difference was observed between these values (P = 0.327) (Table 1 and Figure 2).
The mean scores for drug craving in the modafinil and placebo groups were 76.2 ± 9.0 and 81.0 ± 8.2, respectively. The mean score of drug craving in the modafinil group decreased, but not to a statistically significant degree (P = 0.064) (Table 1 and Figure 3).
The differences in mean addiction severity scores between the two groups at the beginning of the study and at the sixth and twelfth weeks were not statistically significant, but the decreases in addiction severity were 5.6 ± 2.7 and 2.0 ± 1.1 in the modafinil and placebo groups, respectively, a statistically significant difference (P = 0.001) (Table 2 and Figures 3 and 4).
Since no medical treatment has been approved for amphetamine/methamphetamine dependence so far, the current clinical trial aimed to evaluate the effect of modafinil on the treatment of methamphetamine dependence in 50 male patients addicted to amphetamine and methamphetamine. The mean age, marital and occupational statuses of the subjects were similar in the two groups. There was no statistically significant difference in dropout rate between the two groups (one participant in the modafinil group and two in the placebo group dropped out). Also, 52.8% and 55.1% of the subjects in the modafinil and placebo groups, respectively, had positive urinary tests for methamphetamine; there was a trend toward reduction in the modafinil group, but it was not statistically significant. The mean drug craving score for the modafinil group (76.2 ± 9.0) was lower than that of the placebo group (81.0 ± 8.2), yet the decrease was not statistically significant. The mean score of addiction severity decreased significantly in the modafinil group from the first to the last week of treatment compared with the placebo group: the decreases in addiction severity in the modafinil and placebo groups were 5.6 ± 2.7 and 2.0 ± 1.1, respectively, a statistically significant difference (P = 0.001).
The use of amphetamine and methamphetamine was evaluated through urinary screening. Results of the urinary tests showed that the proportion of positive tests in the modafinil group was lower than that of the placebo group, but insignificantly so. In a similar study by Heinzerling et al. (2010) (14), modafinil 400 mg was used to treat methamphetamine dependence; it had no effect on decreasing the level of methamphetamine dependence compared with the control group, which is inconsistent with the results of the current study.
Anderson et al. (2012) (13) conducted a similar study on 210 patients in three groups (68 in the control group, 72 in 200-mg modafinil receivers and 70 in 400-mg modafinil receivers) regarding the treatment of methamphetamine dependence using modafinil. Results showed the effectiveness of modafinil on decreasing the number of positive urinary tests. Results of the current study also showed the decrease in the number of positive urinary tests in the modafinil group, but the decrease was insignificant compared with that of the placebo group.
Miles et al. (2012) (21) evaluated the effect of methylphenidate on 79 patients addicted to amphetamine/methamphetamine in Finland and New Zealand. The results showed that methylphenidate had no effect on decreasing the number of positive urinary tests compared with the placebo group. The study by Rezaei et al. (1) in Iran reported the effectiveness of this drug on decreasing the number of positive urinary tests, which is compatible with the results of the current study. There is no evidence that the risk of dependence on modafinil is lower than that of methylphenidate, and this drug (methylphenidate) was effective in decreasing the addiction severity index and the drug craving score.
In a study by Shearer et al. (2010) (15), eighty patients addicted to methamphetamine were divided into two groups, 200-mg modafinil and control. The number of negative urinary tests in the modafinil group was lower than that of the placebo group. Also, no adverse effects were reported regarding the use of modafinil; the results of that study were similar to those of the current study.
There is some evidence about the mechanism of modafinil. Gonzalez et al. (2014) (22) in Argentina reported the effect of modafinil on treating the cognitive impairment caused by methamphetamine and on restoring intracellular signaling in the prefrontal cortex of rats. Considering the decrease in drug craving and level of dependence, modafinil showed better results compared with other substances. Since the study was conducted over a short period of 12 weeks and the rate of patient withdrawal was low, better results may be obtained from longer-term studies.
This study had some limitations. The sample size was relatively small and consisted only of males. Larger studies are required to replicate our findings. The effectiveness of modafinil beyond 12 weeks of treatment remains unknown.
Modafinil is a medicine with no serious adverse effects and is well tolerated by patients. Modafinil could be effective in decreasing the level of drug craving and addiction severity in patients. |
This year marked the 40th anniversary of National Smile Month; the campaign reaches more than 50 million people and, since its inception, has become one of the most effective reminders of the importance of good oral health. However, not everything is perfect; in fact, around 25% of people admit they do not brush their teeth twice a day (including 33% of men). A good smile isn’t just an accessory; it is also an indicator of your overall health. Poor oral health is associated with serious health issues such as stroke, diabetes and heart disease.
If you are looking for some tips on oral hygiene, you cannot go wrong with Martha Stewart. The 74-year-old lifestyle guru is serious about her dental health – a couple of years ago, she live-tweeted her visit to the dentist in order to take the mystery and the fear out of the dentist’s office for all of her followers with dental phobia. The author of “Living the Good Long Life” has a lot to say on the topic of dental hygiene, so let’s take a look at some of her most useful tips.
Most people who own a manual toothbrush don’t even know how to angle the bristles properly. So instead of gentle up-and-down strokes, many people use a horizontal back-and-forth motion. In most cases, this technique erodes the gums and speeds up tooth wear. According to Ms. Stewart, an electric toothbrush does the work for you, so if you want to play it safe, you should definitely invest in a sonic model.
According to Emanuel Layliev, D.D.S. (New York Magazine named him “the Best in New York” for teeth whitening), in order to keep your teeth white, you should avoid green or blue-tinted mouthwashes. Colored mouthwashes contain dyes that lead to staining, especially if they are loaded with alcohol, which makes your teeth more prone to darkening. In addition, as strange as it may sound, Layliev recommends that you drink coffee and red wine through a straw, since these treats contribute to yellowing.
Let’s make things clear: brushing is still the best way of keeping your teeth clean. However, if you ever find yourself in a situation where you forgot your brush but still need your teeth cleaned, certain foods can naturally scrub your smile. As Layliev explains, if you can’t brush, simply munch on fibrous fruits and vegetables such as lettuce, apples, broccoli and spinach. Moreover, for fresher breath, grab a couple of all-natural, bacteria-fighting herbs like cilantro, parsley and mint.
From visiting a dentist in Chatswood for a regular checkup, I learned that if a cavity or a cracked tooth isn’t the cause of your sensitivity, worn enamel might be the problem. Worn-down enamel, along with bleeding and receding gums, is often caused by brushing too hard. If you feel pain every time you snack on something cold, a trip to the dentist is necessary. The doctor will apply an enamel-strengthening fluoride gel to quell your discomfort, or seal the tooth root if necessary.
If you have consumed anything acidic (lemon, grapefruit, etc.), you should avoid brushing your teeth for half an hour. After drinking a beverage that contains phosphoric acid (sodas and fruit juices), you should wait at least an hour before brushing. The acid weakens the enamel, and immediate brushing will remove microscopic amounts of that enamel layer. Instead, drink a couple of glasses of water to dilute the acid and then brush later. |
In this month’s letter, ADEA President and CEO Dr. Rick Valachovic begins a two-part consideration of the merits and challenges of pass/fail grading.
Although the number of pass/fail dental schools has remained fairly stable over the decades, the concept itself has gained renewed attention in recent years, most notably through the National Board Dental Examinations’ (NBDE) 2012 move to pass/fail grading for dentistry and dental hygiene licensure exams. While this switch has been generally well received, especially by students, the shift away from a graded exam has raised some questions. Chief among these are: How can advanced dental education programs best evaluate candidates in the absence of a numerical grade on a standardized exam? And what are the implications of pass/fail board exams for students who count on high exam scores to gain entry into competitive programs?
This month, I want to examine these issues from the point of view of students, especially those attending pass/fail schools. Next month, I will take a deeper look at the concerns of advanced dental education programs and share some of the ways ADEA is assisting them in adapting to the new testing landscape.
Most of you will remember that the Joint Commission on National Dental Examinations (JCNDE) elected to move to a pass/fail grading system because of concerns about the misuse of exam scores and the security of the questions on the NBDE. The purpose of the NBDE is to help state boards determine whether individuals are qualified to obtain licenses to practice dentistry or dental hygiene. The JCNDE has made clear that the exams are not valid instruments for determining differences in knowledge and ability among test takers who score within the range of passing grades. Nevertheless, board scores were widely used in the past to screen candidates for admission to advanced education programs or even to rank predoctoral programs. By moving to pass/fail grading on the NBDE, the JCNDE put an end to the misapplication of scores for these alternate purposes.
As it happens, I attended one pass/fail school (University of Connecticut School of Dental Medicine) and later taught at another (Harvard School of Dental Medicine [HSDM]). At pass/fail schools, students receive numerical grades on their assignments just as they would elsewhere, and faculty average these to determine a grade for each course. The difference comes in how these grades are reported. Typically, schools set a cutoff for passing, and sometimes grades above a higher threshold receive an honors designation.
I didn’t consider each school’s grading policy when I decided where to attend dental school, and grading is not necessarily a paramount concern for students choosing schools today. But in talking with two students on the ADEA Council of Students, Residents and Fellows (ADEA COSRF) Administrative Board, I learned that grading does factor into the equation for some students when choosing where to earn their predoctoral degrees.
Alex is in his fourth year at the University of California, San Francisco, School of Dentistry (UCSF SOD), where he says he has found the teamwork and cooperative learning environment he was seeking. He has also found opportunities for recognition within the school’s pass/fail framework. “Everyone is good at different things—perio, prosthodontics, hand skills—so it is not always the same people earning honors,” Alex told me, and, he added, honors can be earned through initiative as well as through performance. Work in the community or leadership activities, such as participation in ADEA governance, can also garner honors recognition at UCSF SOD.
From my perspective, the pass/fail approach has a lot to recommend it. By eliminating the competition for class rank, pass/fail grading creates an environment that is conducive to learning rather than rote memorization. Exams are designed to assess each student’s competency rather than to assess students’ achievements relative to one another. When people talk about the downsides of pass/fail grading, I often hear others point to the need for students at pass/fail schools who want to pursue advanced dental education to find ways of distinguishing themselves in the absence of numerical grades.
In 2012, a group of researchers at HSDM surveyed students to try to quantify the impact of the move to a pass/fail NBDE, and they presented their findings at the 2014 ADEA Annual Session & Exhibition. By a ratio of 3-to-1, survey respondents felt the move to pass/fail grading decreased their chances of getting into a specialty residency, and 80% wanted another objective measure to differentiate themselves to specialty program directors.
Cameron Reece, a fourth-year dental student at the Roseman University of Health Sciences College of Dental Medicine – South Jordan, Utah (Roseman CODM), is among those students who favor the creation of a new test designed specifically for graduate admissions purposes. Roseman CODM is one of eight dental schools that currently use pass/fail grading.
“I think something is needed—an exam or a standardized portfolio—to give us a way to show how good we are as students,” Cameron told me, and he believes a lot of Roseman CODM students share this view. Cameron was drawn to Roseman CODM by the school’s innovative curriculum and his belief that he would thrive in a pass/fail environment. He credits pass/fail grading with allowing him to focus his energies on learning and to achieve more than he would have otherwise. But he also feels that pass/fail grading puts him and his classmates at a disadvantage in applying to advanced dental education programs.
While students and others deliberate this question, the American Dental Association is developing an admissions test for advanced dental education programs, which it expects to release in 2016. The precise scope of its content is still unclear, but the test will likely suit some programs better than others. It will be up to individual programs to decide whether or not the test will be a valuable admissions tool to add to those they already use.
Meanwhile, it’s important to remember why pass/fail systems emerged in the first place. They came about because most of us agree that (1) numerical grades are not necessarily reflective of the competencies needed to be a successful professional or resident, and (2) selection for advanced dental education programs should be based on additional qualities other than the ability to earn a high score on a high-stakes exam.
Developing the attributes graduate programs are seeking—commitment, compassion, leadership and teamwork skills—enriches students and, ultimately, the profession. Next month I will talk about the challenges that advanced dental education programs face in selecting the best candidates for their programs. |
After the assassination of President John F. Kennedy on Nov. 22, 1963, a mandate was issued, at 2 p.m. Mountain Time, to close the borders to vehicle and pedestrian traffic. Six hours later, that order was rescinded, and U.S. Immigration officials invoked the Departure Control Law, which authorizes the prevention of any alien or U.S. citizen from leaving the country.
The Mexican government also issued an order to close the border.
American citizens who had been trapped in Juárez when the order was issued were permitted to cross into El Paso after showing proper credentials.
The Departure Control Law closure of the border remained in place until President Kennedy’s killer, Lee Harvey Oswald, was arrested.
Operation Intercept: Sept. 21, 1969, customs inspectors were flown to the border from New York, Philadelphia and New Orleans as the U.S. sealed the border to everything that moved across the international boundary in an all-out fight to stop the smuggling of marijuana from Mexico.
Agents worked eight-hour shifts around the clock, seven days a week, without a day off, searching cars down to the grease joints: under the hoods, in trunks, in window frames, under the seats and fenders. Luggage and passengers were carefully checked.
The operation resulted in traffic jams at the 31 border crossing points, causing delays of up to seven hours before any vehicle traffic could enter the U.S.
Three weeks later, the Mexican government announced 900,000 marijuana plants and 16 poppy fields were burned. It reported that 50 Mexican air force spotter planes were working in the states of Sinaloa, Nayarit, Jalisco, Michoacán, Guerrero and Chihuahua.
After this display of cooperation, the pressure against the border was eased by the U.S. and Operation Intercept was replaced with Operation Cooperation on Oct. 9, 1969.
Operation Camarena: On Feb. 7, 1985, U.S. Drug Enforcement Administration Agent Enrique “Kiki” Camarena Salazar was kidnapped and murdered while on assignment in Mexico. In retaliation, the U.S. Customs Service immediately launched “Operation Camarena” at 21 border checkpoints, closing nine of them and instituting time-consuming scrutiny of every vehicle at the ones that were left open. The result was two weeks of traffic jams and delays of as much as 12 hours for anyone crossing the border into the United States.
There were immediate and continuing Mexican protests, but they fell on deaf ears in Washington. Commerce across the border was grinding to a halt.
Mexican President Miguel de la Madrid Hurtado made a personal call to President Ronald Reagan. Reagan accommodated de la Madrid and within hours the border crossings were opened again.
9/11: All the main crossing points on the 2,100-mile border remained open after the attacks, but customs and other agents were rushed to the border, where stricter searches delayed crossings.
The U.S. intensified inspections and anti-terrorist surveillance along its Canadian and Mexican borders, with more inspectors asking more questions. Bridge crossing times went from minutes to up to 15 hours.
Inspections at border crossings remain at a higher level than before 9/11. |
We measure and analyse energy use, from individual appliances such as refrigerators and hot water heaters to entire households and local government areas.
Analysis of electricity and gas bills for typical households in Sydney and Melbourne.
The analysis showed that for the 2011/12 financial year, small to medium electricity users were more likely to pay less on their electricity bills in Sydney than in Melbourne, while large electricity users were more likely to pay less in Melbourne than in Sydney.
An examination of energy use, greenhouse gas emissions and electricity demand in multi-unit residential buildings, and comparison to other residential building types.
An examination of the reasons for differences in household electricity consumption and demand in a group of similar medium density residences at Newington. |
The popularity of Python programming has increased tremendously over the past few years. It has come a long way since it was first released in 1991. Today, the IT industry is making a paradigm shift, and many Python jobs are now available to jobseekers. We can easily check this trend in Google Trends. The graph below shows the popularity of Python programming at an all-time high over the past 5 years.
This leads us to our main topic: Python programming can lead to your next job. Python's general-purpose nature lets you develop a wide range of applications: websites, data analysis, numerical programming, scientific computing, AI, machine learning, system programming and so on. Many jobs now call for Python programmers. If you are a job seeker, learning Python can define your future career path. The latest job trends in India for Python developers vs Java developers clearly show that demand for the former is increasing while demand for the latter is continuously decreasing.
We at APLC conduct classroom coaching for Python programming. This course is suitable for final-year B.Tech. students, first-time job seekers and candidates who want to enhance their skills for better jobs. According to data compiled from various job portals, we found that every Python job draws about 15 competing candidates, while a comparable Java job draws roughly 300. In short, a Python job has about one twentieth the competition of a Java job.
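Since this post is about Python, the competition figures above can be checked with a short script. The numbers are the ones cited in the paragraph (illustrative job-portal figures, not newly sourced data):

```python
# Job-portal figures quoted above (illustrative).
candidates_per_python_job = 15
candidates_per_java_job = 300

# How many times more crowded is a Java opening than a Python one?
ratio = candidates_per_java_job / candidates_per_python_job
print(ratio)  # 20.0
```

That ratio of 20 is where the "one twentieth the competition" claim comes from.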
Welcome to the amazing Sunalta School.
Why is it amazing you ask? For many reasons, I am sure, but here is one.
Grade 6 is a challenging year, with standardized testing often setting a tone of “must-do” in classrooms across Calgary. But at Sunalta, the Grade 6 teachers facilitate something remarkably creative: students build a cardboard castle in the classroom! (Totally rad.) Everything they do is linked to the curriculum, including math and geometry.
This is the kind of atmosphere that Principal Marie and her amazing teachers have worked to create in the school.
And that’s the atmosphere that we walked into on Day 1 of the week-long residency program.
Students were so excited, many of them recalling last year’s escape rooms and mimicking clues that they remembered.
Candyland was built by the Grade 1 and 2 students. They enjoyed making colourful creations across different lands. Escapers were tasked with finding gingerbread pieces that combined to make a password, granting access to King Candy’s Castle!
The Lego room was built by the Grade 3 and 4 students. They created giant blocks, minifigures and miscellaneous accessories. Escapers needed to retrieve power crystals to reboot their spaceship and get back to earth.
And the Grade 5 and 6 students built Azkaban prison from Harry Potter. Little is known about the prison in the books, so the students used their imaginations, adding a Basilisk, Dementors, a hidden room and more. The final code for this escape room was both ingenious and tricky!
How much does it cost to raise a child today, you ask? Well, the U.S. Department of Agriculture released its annual report, Expenditures on Children by Families, finding that an average middle-income family* with a child born in 2010 can expect to spend about $226,920 ($286,860 once an estimated annual inflation rate of 2.6 percent is factored in) on food, shelter, and other necessities to raise that child over the next 17 years.
This is the 50th year the USDA has issued its annual report on the cost of raising a child. FYI: In 1960, the first year the report was issued, a middle-income family could have expected to spend $25,230 ($185,856 in 2010 dollars) to raise a child through age seventeen.
You can check out the USDA’s Cost of Raising a Child Calculator to estimate how much it will annually cost to raise a child.
*Nationally, for this report the USDA defines an “average middle-income family” as one earning between $57,600 and $99,730 in annual before-tax household income. The expenditures covered are the major budgetary items estimated in the study, consisting of direct parental expenses made on children through age 17. They exclude college costs and other parental expenses on children after age 17.
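The report's inflation-adjusted figure can be roughly sanity-checked with a few lines of code. This is a simplified sketch, not the USDA's actual methodology: it spreads the $226,920 total evenly across the 17 years and inflates each year's share by the report's assumed 2.6 percent.

```python
# Simplified sketch: even annual spending, not the USDA's age-varying profile.
NOMINAL_TOTAL_2010 = 226_920.0  # total in 2010 dollars, birth through age 17
YEARS = 17
INFLATION = 0.026               # assumed annual inflation rate from the report

annual_share = NOMINAL_TOTAL_2010 / YEARS
inflated_total = sum(annual_share * (1 + INFLATION) ** t for t in range(YEARS))
print(round(inflated_total))
```

An even spread lands in the neighborhood of the report's $286,860; the USDA's own figure differs because actual annual costs rise as the child ages, weighting spending toward later, more inflated years.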
A development economics of finitude?
Just a very quick thought today. After reading Charles Kenny’s Getting Better and skimming Owen Barder’s “Can Aid Work?”, I’m wondering if anyone else can hear the faint rumblings of something very important – here we have two people, hardly from the fringes of development thought, noting variously that 1) aid does not seem well-correlated with economic growth, and therefore a clear causal relationship is pretty hard to determine and 2) despite this, and in several cases in the absence of major economic growth, things seem to be getting better in a number of places (this second point is mostly Charles). In other words, are we seeing arguments against a focus on economic growth in development shift from the margins to the center of development thinking?
Those of us more on the qualitative social theory fringes of the field have long been arguing that the worship of growth did not make much sense, given what we were seeing on the ground. Further, the emergence of the anthropocene (the recent era of human-dominated environmental events) as a direct outcome of more than a century of concerted efforts to spur ever-faster economic growth, calls into question the wisdom of a continued myopic focus on growth without a serious consideration of its costs and potential material limits. So if indeed we are seeing the beginnings of a shift in policy circles, I am thrilled. Nothing will change tomorrow, but I think these interventions might be important touchstones for future efforts to create some sort of development economics of finitude . . .
I think that both would still argue that growth is important, just that aid isn’t the means of generating growth. Ultimately for countries to pay for their own hospitals and universities they have to grow economically.
Further – Kenny’s arguments that falling prices for key goods were the savior is actually one about growth – productivity growth that just isn’t measured very well in national accounts.
The killer argument for me is that in a world without economic growth, life is a zero-sum game – my advancement can only come at a cost to you. And good luck convincing all 7+ billion of us that we don’t need to improve our lives any more.
These are great points – I am not totally sure I agree with your first point about needing growth for public services – states can choose how to allocate their productive funds, and how to gather revenue to facilitate the programs and projects they prioritize. In a lot of places there is unproductive investment that could be retasked. But I suppose you and I could have a long exchange about that – indeed, let me know when you next come through DC, and we will meet!
Your point about falling prices being about growth is interesting – but is increased efficiency the same as growth? Certainly, in a finite world increased efficiency would allow for the production of more stuff, so it would facilitate access to larger amounts of stuff that people need in their day-to-day lives (and wider access to that stuff), but is this the same thing as growth?
Your last point is a huge one – I agree completely. This, in the end, is the core challenge of sustainable development. If the Millennium Ecosystem Assessment was right (among any number of other assessments), we are running down our natural resource base fairly quickly. The apocalypse is not here yet, and is probably somewhat further off than many think (we could utilize ocean resources much more comprehensively and effectively than we do right now, for example), but sooner or later the Earth does impose limits on us. That, of course, is the inter-generational ethics issue inherent to sustainable development. However, given the unevenness of growth, the uneven distribution of resources on the planet, and remarkably uneven access to markets around the world (all of these are of course hugely intertwined), even today sustainable development imposes intra-generational ethics. We in the Global North are living way past sustainable levels . . . so the only real path to sustainability is the continued immiseration of lots of people, at least until we come up with clean, nearly free energy. Improvements in efficiency do happen, but not so quickly as to ameliorate this problem . . . at least not for a very long time.
I have not ever thought nor seen that “aid creates growth” idea. All of the aid projects I have ever seen or been involved with have been to create public goods. Public goods do not create growth. Capitalism creates growth.
So I have this simplistic model. Capitalism, where it’s working, creates growth. Aid cannot help it. It’s either there, or not there. Where it’s operating, in addition to growth, it creates inequality. This inequality is morally offensive to many, and politically and economically destabilising. Therefore the function of the State is to tax growth, and spend in either an equitable, or pro-poor way. The aim of this is out and out transfer of benefit from the capitalist growth engine to those that miss out.
In the developing world, this State function is weak. Taxation is weak, service delivery is weak, administration is weak. The State may be captured. What aid then does is to undertake either directly (NGOs, some bilateral projects) or indirectly (multilats, other bilateral projects) the function of the State. In so doing it is effecting transfers from the taxpayers of the rich world to the poor of the developing world. And in so doing it (quite expressly, in many cases) serves to ameliorate the political instability and moral opprobrium of extreme poverty.
But the State does not and cannot create a growth economy. (China may provide a counter-example: but again, my model is simplistic.) It can, however, shut down capitalism. Eritrea, where I spent last week, is a good example of State that shut down capitalism.
Well, the aid/development begets growth argument has been core to development thought since the immediate post WW II era. Then again, I think that we are perhaps working at different scales – you at the project level, and me at the meta-level. Rostow’s stages of growth (late 1950s) was a roadmap of aid for growth. Modernization theory (60s-70s) was a more nuanced approach to the same idea. Etc., etc., but this is not to say that particular projects undertaken under either the “big push” or modernization approaches were themselves about growth – they may well have been to generate public goods, as many projects today aim to do – but when you bundle those projects together at the agency/funder level, you find that they are expected to add up to something, and at least part of that something is growth. Hey, USAID currently argues that one of its goals is to foster broad-based economic growth!
Incidentally, your Medicare example works here – basically, one could argue that a well-funded public health system in the US would foster growth by freeing small businesses (and larger businesses) from a particular direct cost (or at least lowering that cost), the savings from which could be reinvested, etc. Roads are public goods – and any infrastructural development coming from USAID is done, at least in part, as a component of a much larger growth agenda. I argue in my book that development became globalization a while ago (insofar as development becomes a means of furthering economic globalization by linking ever more people into global markets), but what I did not add was that this linkage is justified by the idea that growth is the only pathway to robust development gains that will stand the test of time . . .
Strong storms may only take a few minutes to pass through, but the damage they leave reminds us of them for days to come. While you’re cleaning up the yard, be sure to watch for blown-off shingles or slate. If you want to take it a step further, use your ladder to get a closer look at the roof.
Check for damage to the ends of the shingles or to the gutters.
This is also a good time to check for loose or missing shingles.
Inspect the caulk at the chimney flashing. You’re looking for dried out or cracked caulking.
If climbing up to the roof isn’t what you’d like to do, call Showalter Roofing at 630-499-7700. We’ll check all of these points for you, take pictures so you can view the damage yourself and we’ll fix any damage caused by summer storms.
If there’s one thing we can count on, it’s high winds coming at our homes from all directions. What can we expect from high winds? Shingles become loose, and the loose ones end up in the grass or the driveway.
In our latest video, “Shingle Roof Repair on Windy Days” learn about the two shingle types: the standard 3 tab and the architectural shingle. Find out how the 3 tab shingle dries and curls allowing the wind to get under the tab and pull it away from the roofing system. Learn the benefits of an architectural shingle and how it adheres to the roof making it difficult for the shingle to pull away in high winds.
As we visit many homes…and rooftops, we’ve identified many roofs with CertainTeed shingles on organic felt that are defective. Defects usually appear around years 10 to 15, even though the warranty on the CertainTeed shingle runs 25 to 30 years. Fortunately, the defect is covered under warranty by CertainTeed.
Being insured and bonded can make all the difference in the long run.
When it comes to working on major projects around the house, the roofing system can be one of those projects that causes many headaches if you don’t have the right team on it. It is highly recommended to make sure that any company or crew offering to perform work on major construction projects around your home is bonded, licensed, and insured. Oftentimes, hiring someone who is not bonded or insured will be substantially cheaper, but all the responsibility if something bad happens falls into the homeowner’s lap. Always remember that even though the price may be very appealing, you get what you pay for. Saving a few dollars up front ends up costing more in the long run. Here is a short story from a recent colleague illustrating the hardship behind what it means to be bonded and insured.
The owners of a recently purchased bed and breakfast outside the Milwaukee area experienced heavy rainfall this past autumn and were exasperated by the amount of rain that found its way into the home. Under the terms of the sale, the previous owners were supposed to have fulfilled their obligations by installing a new roof. The problem was that the previous owners contracted a roofing company that was uninsured and unbonded. Responsibility for the poorly installed roof now falls on the previous owners, not on the roofing company that did the work. That means more headaches and problems for the new owners to resolve before they can continue to make money as a bed and breakfast establishment.
It is very important that anyone you work with is licensed, bonded, and insured. Confirm it yourself by researching the company and verifying that they are being truthful. It’s always a good idea to ask for referrals from a trusted neighbor, friend or co-worker. However, be sure they’re not recommending a family friend, a guy who does side jobs or an unemployed worker trying to make ends meet.
Let’s begin a long-lasting relationship.
1. The Rooftop – This is the first line of defense when it comes to protecting the house and is considered the main component for keeping rainwater from entering the home. The construction of rooftops for best results should be pitched or sloped to direct water downwards and does not allow it to collect on the roof. Inspecting the entire rooftop for damaged shingles, cracks, and holes will assist in preventing further damage caused by heavy rain. If you find that your rooftop has developed holes or small cracks that will allow water to leak through or if you feel your rooftop is not pitched or sloped appropriately, Showalter Roofing Services would be happy to come out, take a look at your roof, and provide the best solution to your rooftop so you can prevent damage caused by heavy rainfall.
2. Chimney – The construction of the chimney should be sealed tightly together with no revealing gaps. A chimney cap may be a consideration to install to help prevent water from coming in through the chimney. Finally, inspect the seals and flashing of where the chimney is attached to the roof. If there are signs of damage, get it replaced or repaired as soon as possible. Some brickwork becomes porous over time allowing moisture to actually be absorbed into the brick causing leaks. This is especially true during several consecutive days of rain. Masonry sealant will solve this problem provided the brick and mortar are in good condition.
Remember, roof repairs can be dangerous, so if you have experienced damage to your roof during this fall rain and it is fairly severe, it is best to call in a professional roofing contractor such as Showalter Roofing Service to assess the damage and provide the necessary repairs.
→ What effective web tools increase the fun factor in learning language outside of class?
Langchatters resoundingly chose the topic of “effective web tools” for last week’s chat, and with the number of them that exist these days (both effective and non-effective), it’s a good thing that we tackled the subject together! Participants talked about the web tools students find the most fun, and which of those are best for improving their target language acquisition. The discussion then turned to how you can make ‘flipped-classroom’ web-based lessons engaging, as well as ways to encourage students to use technology to find and interact with authentic resources outside of class. Contributors closed out the session by discussing when (and in what situations) they’ve realized that web tools are not as effective, and therefore not the right choice to enhance students’ language learning.
Question 1: What web tools do your students use that are the most fun?
The “fun” factor isn’t always applicable in the world language classroom, but when it comes to web tools, it’s almost always a requirement if you want your students to be engaged. There are so many web tools available and student preference is varied, but there were tons of “fun” web tools suggested this week, including options like Flipgrid, Kahoot, Quizlet Live, Padlet, Nearpod, Smore, StoryboardThat, PhotoPeach, VoiceThread, Duolingo, Quizizz, FluentU, Hypothesis, Stackup, InstagramELE, Thisislanguage, ThatQuiz, TodaysMeet, Socrative, and GoNoodle, as well as simpler options like using class social media accounts (Instagram or Snapchat) or watching videos on YouTube.
With so many “fun” teaching tools being made available online (with more being released all the time), #langchatters agreed it’s important to do some research and find a few that work best for you and your students so that you don’t get overwhelmed by the sheer volume of options. Having a handful of fun, effective tools at your disposal that you know how to use well (and can rotate through) will be a much better strategy than having a bunch of tools you can only sort of use, since that will quickly limit the “fun” factor.
Question 2: Which ONE of the web tools that your students like are the best for their TL acquisition?
Question 3: How do you make ‘flipped classroom’ web-based lessons engaging and fun?
Flipped-classroom activities are getting more and more popular in the world language classroom, and as such, finding ways to make flipped-classroom, web-based lessons engaging and fun is an important task for WL teachers. Lots of ideas were shared, with suggestions including using EdPuzzle, Nearpod or Blendspace, varying the type of technology used and the assignment type, making your own videos, setting up a Google Classroom, or simply giving students a choice in the type of assignment they do for “flipped-classroom” tasks.
Overall, there seemed to be a consensus on this question: as long as you are using creative tools and new ideas to “flip” your classroom beyond the obvious take-home-writing type of assignment, students will appreciate the effort and retain more than when the lessons aren’t web-based.
Question 4: How can we encourage students to use technology to find/interact with #authres outside of class?
Helping students understand that the real work of the WL classroom actually happens when they use the language outside of class is hard, but having technology to help them find authentic resources that they are actually interested in can help to bridge that gap. As @KrisClimer said, “This is where the word “fun” plays in. If they [students] enjoy the acquisition, [if it] feels good and [they] feel motivated, it [the technology] sells itself.” There were a lot of suggestions, and one much-liked statement was that you can’t just have sites listed for students to go to on their own and “study”; rather, you need a list of concrete people, magazines, etc., that will interest them.
And really, the whole point of getting students to use technology outside of class is to help them connect with authentic resources and help them to feel like partners, not passive observers, in the language learning process. If they aren’t invested, then no real acquisition can take place.
Question 5: When/what situations have you realized that web tools are not as effective?
Also, several participants reminded #langchat this week that while we use it just about every single day, the Internet is not always reliable. It can (quite often) glitch out or block access to the technology resource you intended to use – and when that happens, you have to have a Plan B! The majority of chatters agreed that we shouldn’t be too quick to throw out all our non-tech tools at once, because as great as technology is, it’s not the all-around answer for the WL classroom just yet.
The prosthetic materials originally used to treat hernias were biological – fascia, tendons, muscle – but inconsistent results led the scientific community to continue its research. Synthetic materials were the next step, though they met strong opposition from conservative surgery. Evidence of improving outcomes and the constant evolution of these materials have now established them as the main therapeutic resource.
If polypropylene was the initial monopolist, materials such as Gore-Tex™ now bring the additional advantage of being usable in very large parietal defects in direct contact with the intestines, being completely biocompatible and, importantly, non-adherent to them. It is true that polypropylene retains one big advantage: resistance to infection and the possibility of healing in perfect condition even in a contaminated wound, such as a suture granuloma.
Regardless of its structure, each mesh acts as a matrix, a support, on which the final, lasting scar is built; the mesh's major contribution is to close the parietal defect while eliminating tension in the repair, thus allowing durable recovery and a pain-free postoperative course. The biocompatibility of these materials is outstanding, and their integration into the parietal structures is so good that, after a while, the only way to demonstrate the mesh is by ultrasound. The cost of these materials has fallen progressively, which, together with excellent results, has established prosthetic procedures as the main therapeutic line in hernias and eventrations.
Any debate over the use of these materials in the surgical treatment of hernias is no longer justified today, in the face of the overwhelming number of cases resolved by these methods over the years; the consistently positive results speak for themselves.
China Tea develops annual purchasing plans and places purchasing orders based on the procurement mode of "producing according to sales, and placing orders according to sales". China Tea signs purchasing agreements with tea farmers to increase farmers' income.
China Tea strengthens publicity and education on fertilization and pesticide use for its cooperating tea farmers, and popularizes fertilization based on soil testing, so as to avoid blind and excessive fertilization; it develops pollution-free measures to prevent and control diseases and insect pests, and reduces the use of pesticides. While helping tea farmers become prosperous, China Tea also endeavors to improve their agricultural knowledge, protects the ecological environment, and develops ecologically friendly agriculture.
China Tea continuously strengthens credit communication with its suppliers, and together they form a social responsibility system and strategic alliance in supply chain management. It strives to transmit the company’s social responsibility standards through the supply chain effectively. When a supplier has difficulty or fails to meet the requirements, China Tea provides the necessary resources and technical support to help the supplier fulfill its social responsibilities.
China Tea holds a tea health and safety work conference each year to educate suppliers on the company’s social responsibility and to focus on social responsibility, healthy development and long-term interests. It actively implements corporate social responsibility management, and undertakes the responsibility of protecting the environment and laborers' rights while pursuing profits.
Around the theme of tea’s health functions, China Tea researched the health functions of China Tea •100-year Wooden Warehouse dark tea of reducing hematic fat, reducing weight, and relaxing bowel, and standardized the fermentation process through strain development. China Tea’s patented strains obtained five items of national qualification authentication for genetic stability, drug resistance, and virulence.
China Tea has established a quality and safety management system for the whole supply chain from tea garden to tea cup, by successively passing the certification of ISO22000, ISO9001, ISO14000, ISO18000, HACCP, GAP, and other management systems. It innovatively adopted the unified management mode of "Outline for quality & safety risks control of the tea industry chain + Categorized control manuals", which has been operated effectively.
(1) China Tea compiled the Standards for Tea Industry Chain, covering nine links from product research and development, planting and harvest, preliminary & refined production and compression of compressed tea, procurement of raw and auxiliary material, inventory management of raw and auxiliary material, blending & processing, management of finished goods, inventory management of finished goods, circulation and distribution, established 60 measures of quantitative management and control, and standardized the construction of the industry chain.
(2) The tea garden base of China Tea passed the authentication of SAGP (Sustainable Agricultural Goal Project), with the content involving more than 100 indicators including corporate management system, tea garden ecosystem protection, water source conservation, comprehensive integrated management of agricultural products, soil conservation, comprehensive management of wastes, clean processing and production, agrochemical safe use, emergency plan and protection measures, long-term development planning, etc., covering nearly 20 laws and regulations such as the Law on Prevention and Control of Water Pollution, Energy Conservation Law, Environmental Protection Law, Pesticide Management Regulations.
Pain, numbness, tingling, and weakness in your upper extremities may be a result of thoracic outlet syndrome (TOS). This condition stems from impingement on a network of nerves called the brachial plexus or impingement of the large blood vessels that accompany this bundle of nerves.
The brachial plexus is formed by nerve roots that branch off the spinal cord in the lower cervical spine and upper thoracic spine (C5-T1). The purpose of the brachial plexus is nervous system communication between the spinal cord and the upper extremities (eg, arms). Various nerves are formed in the brachial plexus and several nerves branch off this intricate structure. Nerves created from the brachial plexus supply almost all of the sensory and motor nerve flow to and from the shoulders, arms, hands, and fingers.
These symptoms may have an insidious (gradual) onset or can begin abruptly. The location of your symptoms will vary with the location of the nerve and/or blood vessel compression.
TOS is thought to arise from one of two predominant mechanisms: nerve compression or blood vessel compression. Thus, TOS is divided into two broad syndromes—those which produce mostly neurologic dysfunctions and those which produce mostly vascular dysfunctions. Sorting the two syndromes is clinically challenging and not always possible, but, nevertheless, TOS may be mostly neurologic or mostly vascular in etiology.
There's also another cause of TOS I'd like to mention: Rarely, but importantly, thoracic outlet syndrome is the result of a serious underlying disorder. An example of this is tumors in the upper lung, which can compress either the nerves or blood vessels of the thoracic outlet, producing symptoms of TOS.
I would also like to point out that while certain treatments, such as chiropractic and yoga, can be effective for treating TOS, there is no scientific evidence that GYROTONIC exercises are helpful in treating TOS. In addition, GYROTONIC exercises appear to require proprietary and potentially expensive equipment.
When TOS-like symptoms arise, a more serious pathology must always be ruled out as the source.
Nerve slide exercises are simple exercises that one can learn from a trained medical professional and perform anytime during the day.
GYROTONIC exercise must be taught and supervised by a trained professional using specific equipment.
ZAATARI CAMP, Jordan – The manager of the region's largest camp for Syrian refugees arranges toy figures, trucks and houses on a map in his office trailer to illustrate his ambitious vision. In a year, he wants to turn the chaotic shantytown of 100,000 into a city with local councils, paved streets, parks, an electricity grid and sewage pipes.
Zaatari, a desert camp near Jordan's border with Syria, is far from that ideal. Life is tough here. The strong often take from the weak, women fear going to communal bathrooms after dark, sewage runs between pre-fab trailers and boys hustle for pennies carting goods in wheelbarrows instead of going to school.
But with Syria's civil war in its third year, the more than 2 million Syrians who fled their country need long-term solutions, said Kilian Kleinschmidt, who runs Zaatari for the U.N. refugee agency.
"We are setting up ... a temporary city, as long as people have to be here," said Kleinschmidt, a 51-year-old German. The veteran of conflict zones is getting help from urban planners in the Netherlands.
Many Zaatari residents acknowledge, if reluctantly, that a quick return is unlikely.
"At the beginning, we counted (our exile) in months, then years, and now maybe decades," said Khaled Zoabi, in his 60s, drinking tea and smoking with other refugees in a trailer-turned-men's social club.
Signs of refugees putting down roots are everywhere, just 15 months after Jordan opened the camp.
Many tents have been replaced with trailers, with satellite dishes installed on roofs. Refugees have started hundreds of businesses, offering anything from semi-automatic washing machines and haircuts to freshly baked pastries and ground coffee. The camp has three schools, two hospitals and a maternity clinic.
Each day begins before dawn with calls to prayer echoing across the flat land. Desert nights are cold, and in October, two U.N.-issued blankets per person aren't enough. Kleinschmidt hopes to move more refugees from tents into the warmer trailers before winter.
On a recent morning, four men sat waiting around a trash fire near the arrivals area. On the way were relatives fleeing the rebel-held Ghouta district near Damascus, under siege by President Bashar Assad's troops.
One of those waiting, 18-year-old Malik Salim, made the journey a month earlier, driven from Ghouta by hunger and regime shelling. Men caught at Syrian army checkpoints risk arrest or death, he said.
Dusty and dazed, the newcomers — often in the hundreds each day — line up for U.N. blankets and tents.
Mahmoud Joumma, 39, stood with his wife and two boys, five and 10 years old, by a pile of blankets. They lost their home in Syria's central city of Homs last year in government shelling and for months sheltered in abandoned apartments. With shelling worsening, they decided to head to Jordan, a four-day journey.
Joumma, a former bus driver, said he hopes Assad and the opposition can reach a political deal. "If they don't, God curse them both."
As newcomers settle in, veterans begin their morning routine.
The camp's five bread centers open at daybreak. About 500,000 pitas are handed out daily — four per person.
At the largest center, near the main gate, women and girls enter on the left, men and boys on the right. Each hands a yellow ration card through a metal divider and receives bread.
Bread is free, as are rice, bulgur and lentils. Each person also gets six dinars ($8.50) worth of food stamps every two weeks. With that, they buy eggs, milk, chicken and other groceries at markets that redeem the coupons.
Refugees have created their own camp economy, but its rules are murky. Gangs of thugs have arisen to control some dealings, including a black market in U.N.-issued supplies, Kleinschmidt said.
Camp residents earn money by providing goods and services, from selling homemade pudding to school children to telling fortunes from coffee cups.
Money gets injected into the camp economy from the cash refugees managed to bring with them, sent to them by relatives or from business partnerships with Jordanians.
Another source of money: the camp employs 1,500 cleaners and orderlies, for a dinar an hour. The jobs are rotated every two weeks. Street leaders — put in place by residents — choose who gets them, and many complain of favoritism.
There's also a thriving business in electricity, land, tents and trailers.
Some 350 refugees with technical skills have illegally diverted electricity from the public lighting system to about 70 percent of the households, charging for hookup and maintenance, Kleinschmidt said. The "electricity ministers," he calls them, tongue-in-cheek.
The grid is haphazard. Overloaded transformers sometimes explode. In the end, the U.N. foots the electricity bill to the Jordanian government — about $500,000 a month, likely to reach $700,000 in the winter.
No refugee owns the land — but they do sell it, especially spots in the downtown market where shop stalls line what the residents call Main Street and Saudi Street. Businesses there are bought and sold for hundreds of dinars.
Those leaving the camp sell their trailers for 300 to 500 dinars apiece — or up to $700. Kleinschmidt says foreign donors, including Arab Gulf states, suspended trailer distribution three months ago, in part because they wanted reassurances that police prevent their sale outside the camp. He said the distribution was to resume in coming days. With some 4,000 families still in tents and more arriving, demand for the 18,000 trailers already in the camp is high.
Wealthier merchants live in relative comfort.
Anas Masri, 33, owns a fruit and vegetable stall on Main Street. He said he makes as much as he did with a similar shop in Damascus — enough to buy four trailers in Zaatari for his family of 10.
In contrast, Mariam Bardan, her husband Khaled and their four children still live in a tent, 11 months after arriving. Six people share four mattresses and wash in an annex of corrugated metal. Rats enter the tent.
Khaled, 44, recently got his first camp job as a street cleaner.
The 43-year-old Mariam gets up at 6:30 a.m., picks up the day's bread and walks her three daughters, aged seven to 13, to school, while her 20-year-old son looks for work. When the girls return at 11:30 a.m., Mariam fixes bread, olives and white cheese. The girls do homework, watch TV or accompany Mariam to visit relatives.
Mariam cooks dinner in a communal kitchen. The day's dinner is chicken stew, a rare break from the monotony of lentils and bulgur made possible by the food vouchers.
Mariam shudders at the idea of being a refugee for years, like the Palestinians.
"We can't stand living here forever," she said. "With God's will, we won't stay here more than a year."
Many Zaatari residents come from conservative rural areas where families are large, conflicts are settled by tribal elders, and girls marry in their mid- to late teens.
The trauma of war and tough camp conditions have strained social ties and raised tensions.
Stabbings and fistfights were frequent a few months ago, Kleinschmidt said, though they have subsided.
Girls seem more vulnerable to being pressured into marriage to ease the financial burden on their families.
Jordanian men sometimes tour the camp, asking for potential brides who would accept a lower dowry than Jordanian women.
They often ask at a bridal shop on Main Street run by Sarah Abu Zeid, 19, and her brother Yousef, 18. "In Jordan, it's expensive to get married," said Sarah.
She said she knows of several Jordanian-Syrian marriages that ended in quick divorce, suggesting the Jordanian men exploited the women.
Most in Zaatari no longer agree to such matches, said Sarah. "They think we are sheep," she said of the prowling men.
White gowns with glittery beads hang from a rack in the bridal shop. Sarah and Yousef charge 30 dinars for hair, makeup and dress rental, sometimes dressing several brides a day. Their low prices have even attracted Jordanians from the nearby town of Mafraq, already dwarfed by Zaatari's population.
Camp weddings are marked by quiet family gatherings. Celebrations are frowned upon because of the war, said Sarah.
She has rejected several marriage proposals. "I don't want to raise children in this environment."
Kleinschmidt, who has been posted previously in Somalia and Pakistan, said Zaatari has been his toughest assignment. When he came in March, "it seemed overwhelming because of the level of violence, which I thought was really shocking," he said. "That is not the case anymore today."
He's trying to balance between enforcing some structure and not imposing too many restrictions.
"The overall approach, also chosen by the Jordanian authorities, is not a full enabling environment but at least not a prohibiting environment," he said.
Zaatari remains like a favela, or Brazilian slum, he said, often with the "strong prevailing over the others."
But traditional community leaders are beginning to reassert themselves over thugs, he said.
Kleinschmidt is starting to set up neighborhood councils in the camp's 12 districts, where Jordanian authorities, community police and refugees would handle local problems. It's a balancing act, he said, because he doesn't want to spook his Jordanian partners by suggesting a permanent city is being built. "This is a very fine line we all have to grapple with," he said. "How do you find that balance between making life comfortable, making people accountable for what they are doing, but also making sure that they will be able to leave."
The camp boss is working with the Association of Municipalities in the Netherlands on a plan for Zaatari, including self-governance, a proper electricity grid, water and sewage networks, more paved streets and even green areas. At some point, camp residents who have some income would have to start paying for utilities.
"It empowers them to return as responsible people in dignity, and the dependence syndrome is reduced," he said.
Back Checking Data is a method employed by researchers to aid in the quality control procedure of the data and of the responses gathered during market research projects. After the data is gathered by conventional methods such as online surveys or telephone interviews, the participant is contacted by the researcher. The researcher proceeds to question the participant about the interview or survey which they had already taken part in, and checks to see whether their new answers correlate with those in the previously gathered data.
By carrying out this process of Back Checking Data, researchers can authenticate their data, as they can be certain that the responses provided during the online survey or telephone interview were indeed provided by the participant.
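The matching step of this process can be sketched in code. A minimal illustration in Python, in which the field names, sample answers and the 80 per cent match threshold are all hypothetical assumptions, not any agency's actual procedure:

```python
# A minimal sketch of the matching step in Back Checking Data: answers a
# participant gives when re-contacted are compared against the responses
# originally recorded for them. Field names, sample answers and the
# threshold are illustrative assumptions.
def back_check(original, recheck, fields):
    """Return (match_rate, mismatched_fields) for one participant."""
    mismatches = [f for f in fields if original.get(f) != recheck.get(f)]
    match_rate = 1 - len(mismatches) / len(fields)
    return match_rate, mismatches

original = {"age_band": "35-44", "region": "North West", "owns_car": "yes"}
recheck = {"age_band": "35-44", "region": "North West", "owns_car": "no"}

rate, flagged = back_check(original, recheck, ["age_band", "region", "owns_car"])
if rate < 0.8:  # hypothetical quality threshold
    print("Flag this response for review; mismatched fields:", flagged)
```

A low match rate would suggest the original response may not have come from the intended participant and should be reviewed before it feeds into the analysis.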
Back Checking Data is carried out during every market research project within DJS Research, as it can be a very important tool for ensuring the quality of data and responses. This can be a particular issue during recruitment for online research, as it is very difficult to initially verify that the participant is who they say they are. Back Checking Data can therefore help ensure that the sample of participants is representative of the target population.
It is crucial to the research output that the participants’ information is accurate. If data is believed to have been collected from a particular profile of participant, but has actually been collected from someone of a very different profile (or someone outside the target audience), this could alter the conclusions drawn from the data by the research team. One circumstance in which this could happen, for example, is if a different person from within the intended participant’s household answers a telephone survey in their stead, without the knowledge or authorisation of the researchers, or answers an online survey that was sent to someone else. Without Back Checking Data, such a mistake could go unnoticed and lead to inaccurate research.
On a survey, one of the final questions often asks the participant whether they would be willing to be re-contacted by the researcher at a later date. This can be used for the purposes of Back Checking Data.
Most closings take between 30 and 60 days. It’s a broad range, but that’s because there are a lot of moving parts. Many factors are tough to predict. For example, if you’re in a hot housing market, lenders, appraisers, and home inspectors can get backed up and cause delays. When you reach your official closing date, you can expect to spend one to two hours signing all the final paperwork.
During negotiations, a buyer and seller must decide who pays closing costs. In many cases, each side will pitch in. On average, the total cost is roughly two to five percent of the home’s sale price. Below are some of the most typical closing costs you can expect to pay.
• Loan origination fee. This goes to the lender and is usually one percent of the loan amount.
• Appraisal fee. This fee covers a professional appraiser evaluating your home and determining the fair market value for the lender. The cost is usually between $300 and $500.
• Credit report fee. The lender will charge a fee (typically less than $100) to order the credit report from a third party.
• Closing or attorney fee. The attorney or a title company receives this for their services. The amount depends on how involved they are during the closing process. Expect to pay somewhere between $300 to $1,000.
• Document preparation fee. This is another fee charged by either the attorney or the title company as part of its charge for handling the closing on your loan.
• Recording fees. Your mortgage documents must be filed at the county courthouse. The fees to do this vary by location.
• Tax service fee. This often amounts to less than $100, paid to a third-party service that makes sure all previous property taxes have been paid.
• State, county or municipal taxes and stamps. Many jurisdictions impose taxes on new mortgages, based on a percentage of the loan amount; these are collected at closing.
• Title search. This is performed by a title company to determine if there are any unrecorded liens against the property. The fee is usually in the neighborhood of $300.
• Title insurance mortgage policy. Lenders require this to protect against any liens that haven’t been discovered. Policies typically cost a few hundred dollars.
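The typical figures above can be combined into a rough estimate. A minimal sketch in Python, using hypothetical mid-range amounts for each fee (illustrative assumptions, not quotes from any lender):

```python
# Rough closing-cost estimate built from the fee ranges listed above.
# Every dollar figure is a hypothetical mid-range assumption, not a quote.
def estimate_closing_costs(loan_amount):
    fees = {
        "loan_origination": 0.01 * loan_amount,  # ~1% of the loan amount
        "appraisal": 400,          # $300-$500
        "credit_report": 75,       # typically under $100
        "closing_attorney": 650,   # $300-$1,000
        "recording": 125,          # varies by location
        "tax_service": 80,         # often under $100
        "title_search": 300,       # "in the neighborhood of $300"
        "title_insurance": 400,    # "a few hundred dollars"
    }
    return fees, sum(fees.values())

sale_price = 250_000  # hypothetical home
fees, total = estimate_closing_costs(loan_amount=200_000)
print(f"Estimated lender/title fees: ${total:,.0f} "
      f"({total / sale_price:.1%} of sale price)")
```

Note that this list covers only lender and title fees; once jurisdictional taxes, stamps and escrow prepaids are added, the total typically lands in the two-to-five-percent range quoted above.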
As a buyer, you’ll need to set up an escrow account for home closing costs like taxes and insurance. In most areas, at least three months of property taxes must be collected. You’ll also need to pay for one year of homeowner’s insurance and two months’ worth of premiums.
What Are the Steps to Close?
There are several steps you’ll need to take to protect your interests and ensure a smooth closing. Some of them may take some time for your lender, insurance company or a third-party to complete. The good news is that in most cases, all you need to do is complete a form or make a phone call to get the ball rolling.
Escrow means that an independent third party will hold on to money, documents and any other items that need to change hands during a transaction. After the closing is complete, the third party will make sure everyone gets what they’re due at the same time. Anyone involved in the transaction can open an escrow account – the home buyer, home seller, real estate agent or lender. Who do you open escrow with? There are several options, with a bank, attorney or title company being the most common. In many cases, the buyer’s real estate agent will open escrow to put in the buyer’s earnest money deposit, which should be established in the purchase agreement.
So long as you included an inspection clause in your purchase agreement, now is the time to hire a qualified inspector to do a walkthrough of the home. The inspector will provide a detailed report of the home’s structure, plumbing, electrical system and overall condition. You’ll have the opportunity to negotiate repair costs with the seller. Without question, the seller should be held accountable for addressing costly problems like a faulty foundation or collapsing roof. However, keep in mind that no home is perfect. It’s best not to jeopardize the purchase of your new home over a dispute to fix a cracked tile or loose fixture. For more on the inspection process, check out the 5 Ins and Outs of a House Inspection.
If you’re getting a home loan from a mortgage lender, it’s important to contact them after signing the purchase agreement so they can order an appraisal. This involves having a certified appraiser evaluate the home and calculate the home’s fair market value. The mortgage lender will review the appraisal results and decide if the loan amount is acceptable considering the home’s value. If the appraisal comes in much lower than the sale price, you’ll need to renegotiate with the seller to cover the difference. Otherwise, the lender may cancel the loan.
Even if you already received a preapproval letter from your lender, you must make a formal mortgage application to finalize your loan. Most lenders use the same standardized form, which can often be completed online. Submitting this form kickstarts the processing and underwriting of your loan. The underwriter will make sure the property and you, the borrower, meet all eligibility requirements for a specific loan. This is when a loan is either marked “clear to close” or rejected because of a serious issue. If you get the green light, you should receive a Closing Disclosure that includes all the finalized details of the loan along with all the documents you’ll need to sign on your closing day.
Your lender will likely require you to buy title insurance for several reasons. One is to ensure there are no claims made to the ownership of your home. If the homeowner – or previous homeowners – did not have clear ownership of the home, then your ownership may be at risk. Another reason is that previous owners may have had tax liens against them, which means you could be on the hook to pay them.
Another lender requirement is getting homeowner’s insurance that protects both your interests and the lender’s. At a minimum, most lenders require a policy to cover the cost of rebuilding the home and replacing its main components. You can add coverage for other potential hazards like flooding if needed.
The documents you need to have on your closing day will depend on several factors. These include the type of property you’re buying, the terms of your purchase agreement, the type of home loan you’re receiving, and other requirements specific to your area. In most situations, your real estate agent, escrow company or attorney will supply much of the necessary paperwork.
Assuming you have a mortgage lender, below are some of the most common items you should bring to your closing.
• A Photo ID. In order for your signature to be notarized on several loan and title documents, you’ll need to have state-issued identification like your driver’s license.
• A homeowner’s insurance certificate. A closing agent will need to see proof that your insurance is going into effect on the day of your closing.
• A copy of the purchase agreement. It can be tough to keep all the details of your agreement straight. Having this for reference can be extremely helpful as you look everything over.
• A cashier’s or certified check. Your lender should provide you with specific instructions related to paying closing costs. If you haven’t rolled these costs into your mortgage payments and the seller isn’t covering all the costs, you’ll need to bring a guaranteed form of payment.
Your final walkthrough of the home should happen a few days (or even a few hours) before you sign the final paperwork. It isn’t meant to be an in-depth inspection. The purpose is to ensure that no serious problems have surfaced, and that agreed-upon repairs have been made. Turn on the lights, run the faucets, flush the toilets and examine each room. If you discover any major issues, you still have time to request compensation from the seller. Assuming the home checks out, you’re at the finish line. Most closings take place at the title company or mortgage company’s office, where you can expect to see the key players involved in the transaction. These can include the closing agent, real estate agents, attorneys, title company representative, loan officer and home seller. A giant stack of documents will need to be signed, payments will be delivered, and, if everything is in order, you’ll get the keys to your new home!
Surgeons have carried out the first operations in Britain using a pioneering “bionic eye” that could in future help to restore blind people’s sight.
Two successful operations to implant the device into the eyes of two blind patients have been conducted at Moorfields Eye Hospital in London.
The device — the first of its kind — incorporates a video camera and transmitter mounted on a pair of glasses. This is linked to an artificial retina, which transmits moving images along the optic nerve to the brain and enables the patient to discriminate rudimentary images of motion, light and dark.
Just like astronaut Steve Austin. A man barely alive.
Accidents happen! Next time you'll wear your mouthguard when you skateboard, never use your teeth to open anything again, and carefully step away from your grandmother's hard candy dish. But now that your tooth has chipped, what's the next step in repair? Take a look at many of the common treatment options for repairing chipped teeth.
BONDING. If the chip is small, then a tooth-colored resin is applied to the damaged area with adhesive, molded to shape, and then hardened with a curing light in a single visit.
PORCELAIN VENEER. If the chip is too large for bonding, then a thin shell of porcelain can be custom fabricated by the lab from an impression of your tooth and then adhered with a bonding procedure at a later visit.
CROWN. Larger chips or fractures are commonly treated with a crown also known as a "cap." Impressions of the tooth are made after a preparation and then sent to the lab for a custom fitted crown to be fabricated. A temporary crown will be placed over the tooth until the final visit when the custom crown has been received back from the lab and then adhered to your tooth at the second visit.
No matter what size of the chip, it is important to have your tooth evaluated to determine what treatment is needed. If you have any questions or are in need of an appointment, call our dental office at 918-455-0123!
Blood, saliva and even breath may one day be able to diagnose lung cancer, the number 1 cancer killer in the U.S. A primary reason is that lung cancer is often detected at later stages than some other cancers. Lung cancer currently lacks a widespread, easy-to-implement screening test compared with other cancers - think of the annual or biannual mammogram for breast cancer, the routine pap smear for cervical cancer or colonoscopies for colon cancer.
New research is currently looking into developing an early screening method for lung cancer. Researchers are investigating whether body fluids other than blood may provide insight into diagnosing lung cancer at an earlier stage. The salivary diagnostics lab at the UCLA School of Dentistry is analyzing molecules in saliva, including DNA, RNA, proteins, metabolites and microbiota, to determine whether these elements hold clues to an individual's cancer status. Unlike the current lung biopsy, salivary diagnostics are also a non-invasive, easy-to-use tool for patient specimen collection. Although more research is needed, these advances in the early diagnosis of lung cancer appear very promising.
The term “smart” home refers to a living space that contains remotely controlled or preprogrammed “smart” devices. These devices can help a space function more efficiently and give occupants more direct control over their environment. Many “smart” devices have been introduced in recent years, and this trend will undoubtedly continue. A simple example is a coffee maker equipped with a timer. You simply fill the machine with water and coffee grounds the night before and set the timer, and your coffee is waiting for you when you get up.
Smart technology for home automation systems has come a long way. The idea is to link all of the devices in the home together as much as possible and provide centralized computerized control of the interior and exterior environments and the home security system. You can access and control many of these systems via the Internet even when you’re not at home. The system can monitor individual rooms and turn off lights when a room is empty, or indicate when a malfunction has occurred or routine maintenance is required on a system. Smart devices can help lower energy costs and increase the energy efficiency of your home.
New generation thermostats now keep track of multiple settings. You can cut back cooling or heating during the week while you’re at work and school, but maintain comfortable temperatures in the evening and on the weekend without having to constantly adjust the thermostat. Next generation thermostats include touchscreen technology to monitor room by room temperatures to channel your HVAC unit’s efforts efficiently into specific areas. This further impacts your energy bill by eliminating the wasted heating and cooling efforts of frequently unoccupied spaces in your home.
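Programmed as data, such a weekly schedule amounts to a simple lookup. A minimal sketch, with illustrative times and temperatures (in Fahrenheit) that are assumptions, not any product's defaults:

```python
# A minimal sketch of a multi-setting thermostat schedule like the one
# described above: setbacks on weekdays while the house is empty, comfort
# temperatures in the evening and on weekends. All times and temperatures
# are illustrative assumptions.
WEEKDAY_SCHEDULE = [  # (start_hour, target_F)
    (6, 70),   # morning warm-up
    (9, 62),   # setback while everyone is at work or school
    (17, 70),  # comfortable for the evening
    (22, 65),  # cooler overnight
]
WEEKEND_TEMP = 70

def target_temp(day_of_week, hour):
    """day_of_week: 0=Monday ... 6=Sunday."""
    if day_of_week >= 5:  # weekend: keep it comfortable all day
        return WEEKEND_TEMP
    temp = WEEKDAY_SCHEDULE[-1][1]  # before 6 a.m., use overnight setting
    for start, t in WEEKDAY_SCHEDULE:
        if hour >= start:
            temp = t
    return temp

print(target_temp(1, 12))  # Tuesday noon -> 62 (workday setback)
print(target_temp(6, 12))  # Sunday noon -> 70
```

The thermostat simply re-evaluates this lookup each time it checks the clock, which is what lets it cut back during the week without anyone touching the dial.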
Small motorized devices incorporated into your window treatments can be programmed to open or close blinds and drapes, depending on the season or time of day. Windows can be opened or closed automatically for air circulation. Ventilation fans can be programmed to draw hot air out of the attic or turned off to conserve heat, depending on the season.
Smart refrigerators equipped with built-in Internet terminals are already on the market. These refrigerators can suggest recipes based on an inventory of currently stored food, keep track of expiration dates, create shopping lists as items are used, and maintain a calendar of appointments and important dates (to replace that calendar we all keep on the fridge).
In the bath, smart shower heads can store water temperature and pressure settings for each individual. Toilets can be equipped with self-clean settings.
We’ve all wrestled with handheld remote control units for audio and video entertainment components. Smart technology lets you combine all of the remotes into one touch-screen controller that controls channel selection, recording, programming, and even room lighting and temperature. Home television sets can provide PC-like Internet capability, convenient home shopping, and interactive capability. As smart technology continues to improve, more and more household tasks can be automated, giving us more free time and making life a bit easier. Consider ways to make your home a “smart” home.
Children arriving at the US-Mexico border can find angry, armed protesters who even seek to deny them water, writes Laura Waddell.
On the globally resonant subject of mass migration, Lost Children Archive, the latest novel by Valeria Luiselli, documents a family road trip across America contrasted with the journeys of migrant children like those in her non-fiction book Tell Me How It Ends, which reflects the writer’s experience as a volunteer translator for unaccompanied children reaching the US-Mexico border.
Questions asked of migrant children after arduous travel build their case to stay; if insufficient, they’re deported, accommodated meanwhile in ICE detention centres. Human rights organisations warn of conditions behind the walls we are barely able to picture. Hundreds of children are reported to have gone missing.
The situation is grim. The process depends on volunteers and charities providing legal and other services, stretched too thin to meet demand. The fast-track system set up during the Obama administration, which on the surface meets children’s needs quickly, in practice narrows the window to secure legal help. Voluntary translators such as Luiselli not only transcribe but communicate with children who answer box-ticking questions with fear, non-linear answers, and a lack of comprehension. Their journeys have had confusing beginnings, middles, and ends.
Some answers, difficult to hear, may bolster a case for clemency. Tell Me How It Ends shares the statistic 80 per cent of women and girls crossing Mexico are raped. Some take birth control as a precaution. Other migrants are abducted. It’s a world away from children asking “are we there yet?” on holiday.
Walking on the hot land of states bordering Mexico, migrants hope to be picked up by Border Patrol before vigilantes. When increased numbers of unaccompanied children were reported in the news, adults drove for miles to protest outside immigration centres. Some had deckchairs, holding handmade signs with slogans such as “Return to Sender”. Others practised ‘open carry’ of firearms. Volunteers leave water for children who cross the border; others kick it over. Some children, such as a boy named Manu, find life in their adopted country has similar problems to the one left behind, like persecution by gangs. Luiselli, herself a migrant from Mexico, was asked on her Green Card application: “Do you intend to practise polyamory?” and “Are you a member of the Communist Party?” Bureaucracy, both sinister and tedious, and so irrelevant to a whole, lived life.
Other artists and writers try to humanise what’s often depicted as a logistics and resources problem. Indie magazine Nansen profiles one individual migrant or refugee per issue, launching with the cheerful face of campaigner Aydin Akin, born in Turkey, who is ineligible to vote despite 50 years residence. Photographer Sergey Ponomarev documents boats reaching shore or packed trains, close enough to see fatigue upon faces. I stood for a long time before his photographs on display in Ireland’s Gallery of Photography recently, unable to look away from the direct gaze of people waiting in a long registration line, herded by Slovenian police. The only frame of reference I have is petty frustrations of timetabled travel, assured of the destination printed on my ticket.
Brexit saw a decline in discussion of immigration in the UK. Farage’s infamous ‘Breaking Point’ poster was reported for inciting racial hatred. Trump’s wall has erected barriers before becoming physical reality, designed to stoke paranoia. Large numbers in news reports are unfathomable and abstract. Images of the little Syrian boy Aylan Kurdi washed ashore in his red jacket, and the shock that accompanied it, feels like a long time ago.
Luiselli’s writing doesn’t avoid the big picture, of drugs and the arms industry, and governmental complicity in a multi-continent humanitarian problem. But telling us of a few fraught young worlds successfully breaks through cultural numbness. Mass processing of human lives is inherently surreal; cruelty is by design. There is some hope. In Tell Me How It Ends, Luiselli describes teaching high school students about immigration who are inspired to take voluntary action. Manu, the child dismayed to be at the mercy of gangs once again, turns up to a football match they arranged. They make him captain.
While Luiselli is emerging as one of our greatest living writers, books and reporting don’t have all the answers, if any.
But asking better questions, about how migration can be addressed humanely, and asking ourselves what we really know about what’s happening, is something.
This animation shows the rapid passage of cocaine through the brain. It demonstrates that the intensity of the cocaine “high” parallels the trajectory of cocaine levels in the brain.
Four high school students were honored for their work regarding e-cigarettes, the GABAA neuroreceptor, and adolescent multitasking.
The Scientific Director of NIDA’s Intramural Research Program talks about switching off animals’ compulsive cocaine seeking by optogenetically activating the prefrontal cortex, and the implications of this work for people. In an accompanying podcast, Dr. Bonci walks viewers through experiments that showed that prefrontal cortex activity levels may constitute a simple switch controlling whether or not animals compulsively seek cocaine.
Nurse practitioner seems like an ideal job.
It is easy to find a meaningful purpose in what you do, the average salary for this position exceeds $100,000 annually, and most people do this work because it is exactly what they want to do with their lives.
But things are never only black or white.
The job of a nurse practitioner presents a lot of challenges. In an interview for this position, you can expect to face difficult behavioral questions, questions that will examine your attitude to these challenges.
What is more, hiring managers will add some personal and technical questions to the mix. Your answers to these questions help them evaluate your motivation, your goals, and your readiness for the job.
Why did you decide on a career as a nurse practitioner?
What would you do if a patient asked you for antibiotics, and you knew that they did not need them?
Imagine that a child’s parent complained about the healthcare the child receives in our hospital. What would you do?
Try to approach this question from the perspective of patients and the local community. That means you should explain why you believe you can do a good job as an NP, and talk about the value you can bring to your employer and to the people you will care for.
For example, you can say that you have all the strengths and skills that make you a good caregiver and companion, and that your strong sense of responsibility and service makes you an ideal person for this role.
Alternatively, you can say that you are following the example of someone who motivated you to pursue a career in healthcare. It can be a doctor from your family, or someone who helped you overcome a difficult period in your life (health-wise) and helped you build this dream of working as a nurse practitioner.
You obviously have a plethora of options. You can work in a hospital, a nursing home or a health center, or you can even start your own practice. So why have you chosen this exact job, this exact place of work?
Typically, our reasons are rather practical. We choose a place of work that is close to our apartment or house, a place we can reach easily. Or we opt for a vacancy that offers the highest salary, or great employee benefits.
Mentioning any of that will be okay in your interview, but it won’t be the best possible answer.
In a best possible answer, you should praise the employer for something–and state this as a primary reason for your choice. What can it be?
High quality staff, or equipment they have in place.
Great reputation of the facility.
Their approach to innovation and medical work that stands out.
Great locality and environment for either the patients, or the healthcare workers, or for both groups.
Anything else that differentiates them from their competition.
Remember that everyone likes honest compliments. Find something you can praise them for, and talk about it in your interview.
In most countries, you have to work for six to twelve months (minimum) before you can apply for a nurse practitioner certificate (besides all other things you have to complete, such as your bachelor’s and master’s degrees).
Bearing this in mind, once you apply for a job of an NP, you will already have some experience under your belt. You will always have something to talk about. But what should you talk about?
The rule is simple: stay positive and enthusiastic, and show the employer that you know what it takes to work as an NP.
You should explain where you worked, for how long, what your principal duties were, and how this all prepared you for the job of a nurse practitioner.
What do you consider your biggest weakness when we talk about nursing work?
Tell us something about your certification and studies.
If you had to describe yourself in three words only, which words would you pick?
Just like in any other interaction of two human beings, we experience conflicts in nursing work.
The key is to show the hiring managers that you know this can happen, that you have experienced some conflicts, and that you have a full understanding (both rational and emotional) of the experience of the patient.
Talk about the conflict in a calm and cheerful way. Show us that you did your best to serve the patient, that you always try your very best.
Alas, it’s impossible to make everyone happy with your service. You should be aware of this, and the bad words you hear should have no impact on the way you treat the particular patient, or the next one you meet while practicing your work.
A good answer to this question really depends on your philosophy, and attitude to curing people. In my opinion, a physician, or a nurse practitioner, should never prescribe antibiotics unless they are absolutely necessary.
But we see this happening all too often – doctors simply prescribing what patients ask for. In the era of free information (the web), people often know what they want to get before they even enter the medical practice.
What they do not know, though, is that pharmaceutical companies pay people to promote their products on forums and other websites where people discuss their medical problems and ask for advice.
The internet is full of such claims, but less than 1% of them are authentic. You should be aware of this as an NP, and suggest that you’d never prescribe anything to anyone just because they ask for it.
When our child, parent, or friend ends up in a hospital bed, we wish only the best for them. The relatives of the patients will sometimes complain, and we have to understand them. Sometimes they struggle to cope with the situation more than the patients do.
Describe a time when you experienced a conflict between your personal and professional interests. How did you get over it?
You won’t compete with many other job seekers in your interview, but you will still have to demonstrate your readiness for the job, and the right attitude to work, patients, and healthcare in general.
Practice makes perfect. Go through the questions once again, and try to come up with a good answer.
And if you are not sure how to deal with the questions, or feel anxious, have a look at our Interview Success Package – it will make your life much easier in your interview.
Kibale National Park contains one of the loveliest and most varied tracts of tropical forest in Uganda. Forest cover, interspersed with patches of grassland and swamp, dominates the northern and central parts of the park on an elevated plateau. The park is home to a total of 70 mammal species, most famously 13 species of primate including the chimpanzee. It also contains over 375 species of birds. Kibale adjoins Queen Elizabeth National park to the south to create a 180km-long corridor for wildlife between Ishasha, the remote southern sector of Queen Elizabeth National Park, and Sebitoli in the north of Kibale National Park. The Kibale – Fort Portal area is one of Uganda’s most rewarding destinations to explore. The park lies close to the tranquil Ndali-Kasenda crater area and within half a day’s drive of the Queen Elizabeth, Rwenzori Mountains and Semuliki National Parks, as well as the Toro-Semliki Wildlife Reserve.
Kibale Forest’s secondary tourism centre in the north of the forest offers guided forest walks and a chance to encounter primates such as red colobus, black-and-white colobus, blue monkeys and vervet monkeys. Visitors may also spot a variety of aquatic, forest and savannah birds and enjoy views of the Mpanga River.
The diversity and density of primates in Kibale Forest is the highest in Africa. The most famous of its 13 species is the chimpanzee, our closest relative. Kibale Forest’s 1450 chimpanzees represent Uganda’s largest population of this endangered primate. The forest is also home to East Africa’s largest population of the threatened red colobus and the rare L’Hoest’s monkey. Other primates include the black-and-white colobus, red-tailed and blue monkeys, grey-cheeked mangabey, olive baboon, bush baby and potto.
At least 70 mammal species are present in the park, though ground-dwelling animals are difficult to see in dense forest. An estimated 500 elephants are present, along with buffalos, leopards, warthogs, bush pigs, golden cats and duikers. A keen observer may spot reptiles and amphibians as well as a colorful variety of 250 species of butterflies.
The park boasts more than 375 species of birds. Kibale Forest specials include the African Pitta, Green-breasted Pitta, Afep Pigeon, White-naped Pigeon, Crowned Eagle, Red-chested Owlet, Black Bee-eater, Western Nicator, Yellow-rumped Tinkerbird, Little Greenbul, Brown-chested Alethe, Blue-breasted Kingfisher, African Grey Parrot, Scaly-breasted Illadopsis, Brown Illadopsis, Black-capped Apalis, Blue-headed Sunbird, Collared Apalis, Dusky Crimsonwing, Purple-breasted Sunbird, Red-faced Woodland Warbler, Yellow Spotted Nicator, Little Green Bul, Black-eared Ground Thrush and the Abyssinian Ground-thrush.
When chimpanzees and other forest residents rest up at dusk, a nighttime shift of rarely seen creatures becomes active. Night walks through the darkened forest use powerful torches to seek nocturnal creatures such as the potto, bushbaby, nightjar, cricket and tree hyrax, with its chilling shriek, as well as the occasional civet or serval cat. Night walks leave the camp at 7.30pm and last between one and a half and two hours.
Brunonia australis is the only member of its family (Brunoniaceae) in Tasmania. Flowering of this perennial herb occurs from mid November to late January. Most herbarium specimens and observations are from November to early January. Flowers are required for identification though, if familiar with the species, it can be distinguished by its foliage at other times of the year.
In Tasmania, the species typically occurs in grassy woodlands and dry sclerophyll forests dominated by black peppermint (Eucalyptus amygdalina) or less commonly white gum (Eucalyptus viminalis) or stringybark (Eucalyptus obliqua). Some smaller populations are found in heathy and shrubby dry forests. The species occurs on well-drained flats and gentle slopes with elevations of between 10 and 350 metres. It is most commonly found on sandy and gravelly alluvial soils with a particular preference for ironstone gravels. Populations found on dolerite are usually small.
= is used to assign values.
+ is used to add values.
The arithmetic operator + is used to add values together.
The value of x, after the execution of the statements above is 7.
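The statements the text refers to are not shown; a minimal reconstruction (the variable names y and z are assumptions) looks like this:

```javascript
// Assign values with the = operator, then add them with +
var y = 5;
var z = 2;
var x = y + z; // x now holds 7
```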
Arithmetic operators are used to perform arithmetic between variables and/or values.
The + operator can also be used to add string variables or text values together.
To add two or more string variables together, use the + operator.
After the execution of the statements above, the variable txt3 contains "A Good DayTo start with". To separate the two strings, insert a space into the expression; txt3 then contains "A Good Day To start with".
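In code, the concatenation example works like this (the txt variable names come from the text; the exact original statements are a reconstruction):

```javascript
// Concatenate two string variables with + (no separating space)
var txt1 = "A Good Day";
var txt2 = "To start with";
var txt3 = txt1 + txt2;             // "A Good DayTo start with"

// Insert a space into the expression to separate the words
var txt3spaced = txt1 + " " + txt2; // "A Good Day To start with"
```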
The rule is: If you add a number and a string, the result will be a string!
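A short sketch of this rule in action (the specific values are chosen for illustration):

```javascript
var a = 5 + 5;        // 10, a number: number + number adds
var b = "5" + 5;      // "55", a string: the number is converted and concatenated
var c = "Hello " + 5; // "Hello 5", also a string
```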
In 1102, Raymond VI of Saint Gilles, Count of Toulouse, one of the first knights who set out on the First Crusade in 1096, turned his attention to the conquest of Tripoli, the most important emirate on the coast. Raymond wished to establish a principality that would command both the coast road and the Orontes. In 1103 Saint-Gilles who had camped on the outskirts of the city, ordered the construction of a fortress which to this day is still known by his name.
The well-preserved ‘Qal’at Saint-Gilles’ is still visible in the twentieth century, in the centre of the modern city of Tripoli. At the time of the arrival of the Crusaders, however, the city extended no further than the Mina’ quarter, the port, which lay at the end of a peninsula, access to which was controlled by this famous fortress.
This fortress was the first ever of its kind. No caravan could reach or leave Tripoli without being intercepted by Saint-Gilles’s men.
During the Crusade period, Tripoli witnessed the growth of the inland settlement surrounding the “Pilgrim’s Mountain” (the citadel) into a built-up suburb including the main religious monuments of the city such as: The “Church of the Holy Sepulchre of Pilgrim’s Mountain”, the Church of Saint Mary’s of the Tower, and the Carmelite Church. The state was a major base of operations for the military order of the Knights Hospitaller, who occupied the famous castle Krak Des Chevaliers.
In 1289, when the Mamluks occupied the city, the Mont Pèlerin quarter was set ablaze; the castle of Saint-Gilles suffered in the fire and stood abandoned on the hilltop for the next eighteen years. But in 1308, the Mamluk governor Essendemir Kurgi decided to restore and rebuild Saint-Gilles Castle on the hill, so he incorporated what he could into his citadel, making use of Roman column shafts and other building material he found nearby.
Many of the interior walls, ramps and terraces of the citadel seen today were built in his time. In the years that followed, various Ottoman governors of Tripoli, especially Barbar Agha, did restoration work on the citadel to suit their needs, and with time the medieval crenellated battlements were destroyed in order to open ports for cannons. Very little of the original Crusader structure has survived to this day. The graves of a number of nameless Frankish knights, here and there, are the only evidence today of their presence on the heights of Tripoli’s “Pilgrim’s Mountain” many centuries ago.
The Golem is a legendary creature which has been constructed from an inanimate substance, such as clay or soil, and brought to life by magic. It is normally the same size as a man, and exists to serve whomever has given it life. The name Golem is of Hebrew origin, meaning unshaped or unfinished, and it is in Jewish folklore that the origin of the Golem legend can be found.
Two passages from religious scripture hint at this origin. The first of these is from the Book of Psalms (139:16), where it is written: “Your eyes could see me as an embryo (golem), but in your book all my days were already written; my days had been shaped before any of them existed.” Hebrew to English translations vary and in the preceding case embryo was used to communicate the sense of unfinished or incomplete.
A second passage, from the Talmud, details how two rabbis, Haninah and Oshea, were able to create a calf on the eve of every Sabbath, which they then ate on the Sabbath. They created these calves using the white magic of an ancient Jewish text, the Sefer Yetzirah (Book of Creation). It was on this basis that the legend of the Golem took wing.
During the Middle Ages, Western European Jews avidly studied the Sefer Yetzirah in the hopes of creating their own Golem. Different recipes for doing this were the result of these studies. One such method was to mould a man-like figure from soil and perform a dancing ritual around it, chanting God’s secret name. Defeating such a creature entailed doing the ritual in the opposite way and chanting the name backwards.
Another technique was to write this name on a screed and place it in the mouth or on the arm of the creature. Removing the screed would defeat it. Alternatively, you could write the name, or the word ‘truth’ in Hebrew (emet) on the figure’s forehead to bring it to life, and erase the name to ‘kill’ it. Erasing the first ‘e’ in emet gives you the Hebrew word ‘met’, meaning death, and doing this would be sufficient to do away with a Golem.
Jacob Grimm, in a well-known account, stated that, because this servant was unable to leave the house in which he worked, he would start to gain weight, which made him larger and stronger than his masters; this bred fear in those masters, who would erase the emet before the Golem became too powerful and thus reduce him to the clay and mud that he once was. This depiction has influenced the popular image of the Golem right up to the present day.
While no concrete case exists of a Golem actually having been created, there is one famous account from the 17th century by a Polish Kabbalist attributing such a deed to Rabbi Elijah Ba’al Shem of Chelm (1550-1583). This creature, popularly known as the Golem of Chelm (a city in eastern Poland), was said to have become so monstrous as to threaten the existence of the entire world, so Rabbi Elijah destroyed it at the cost of his own life.
Another famous story (entirely fictitious) tells of a Golem that was under the power of Judah Loew ben Bezalel (1520-1609), a prominent Talmudic scholar of Bohemia and Moravia. He reputedly created a Golem for the protection of the Jews of the Prague ghetto from anti-Semitic violence, but this Golem had one weakness: it could not be alive on the Sabbath as this would desecrate Judaism. So Rabbi Loew would deactivate the Golem by erasing the “emet” on its forehead during the Sabbath, and reactivate it after the Sabbath had passed. The source for this was an 1834 work by Josef Kohn titled Der Judische Gil.
One of the characteristics that make a Golem a problematic servant in those works of literature that harness the legend is his rigid obedience to orders. He interprets his instructions too literally and carries them out too pedantically, often frustrating the intentions of his master as a result. This can be seen in literary works that have adapted the legend in other forms, particularly in Mary Shelley’s Frankenstein (1818).
The Golem legend has been an integral part of many other literary and artistic creations since Shelley’s masterpiece was published. Both Isaac Bashevis Singer (1902-1991) and Elie Wiesel (1928-) have written directly about the creature, and Jorge Luis Borges (1899-1986) wrote a poem inspired by the Rabbi Loew fable. Popular television shows such as the X-Files, the Simpsons and the Sopranos have all dedicated episodes to the Golem in some form or fashion.
One of the most influential artistic influences that the Golem legend has had is on the work of the Czech writer Karel Capek (1890-1938), whose play R.U.R. (1920, with the acronym standing for Rossum’s Universal Robots) introduced the concept of the robot to science fiction. The robots in Capek’s play are not the metallic automatons that we associate with the word, but are biological beings that are nonetheless assembled rather than born. While Capek himself disavowed any influence that the Golem legend had on his robots, the similarities between the two concepts are all too apparent.
Mugwort Dosage: Follow your doctor’s instructions on how to use this herb.
Mugwort Precautions: The effective ingredient is the volatile oil, which is also the toxic ingredient; it can cause inflammation of the skin and mucous membranes. Taken orally, it can cause irritation of the digestive tract. Do not use if pregnant or nursing.
Mugwort has a long and rich history of use in medicine as an herbal remedy. Traditionally Mugwort stops bleeding, warms the meridians and dispels cold. Mugwort may help ease joint pain and promote a positive mood. It is a great source of vitamin C, calcium and zinc. Mugwort is also sometimes referred to as St. John's Plant because of the tradition of gathering the herb on St. John's Eve. It is said to enhance dreams when placed under your pillow at night.
Eliminating the most widespread form of malaria will only be possible if public health programmes treat people with parasites lying dormant in their liver, a study suggests.
More than 2.5 billion people, mostly in Asia, the Horn of Africa and Latin America, live at risk of malaria caused by the Plasmodium vivax parasite. Not only is this strain of malaria widespread, it is also persistent: when it enters a victim’s liver, it can remain dormant without causing any symptoms for years after the initial infection, says the study in PLOS Medicine.
The study found that four-fifths of cases of P. vivax malaria in Papua New Guinean children occur when the parasite re-emerges and causes a relapse. There is currently no way to identify people with the dormant ‘hypnozoite’ form of the parasite, researchers warn.
“Unless we somehow find a way to attack the reservoir of hypnozoites in the liver, it will be near impossible to eliminate vivax malaria,” says Leanne Robinson, lead author of the study and researcher at the Papua New Guinea Institute of Medical Research.
To assess what proportion of malaria infections occur due to relapses, researchers treated one group of children with drugs that kill parasites in both the blood and liver, while giving another group a placebo and a drug that attacks the blood stage of the parasite.
Based on the results, the researchers developed transmission models to assess whether giving malaria drugs to entire communities would be effective at preventing recurrent P. vivax infections.
They also modelled the potential impact of tafenoquine — a promising relapse-phase drug that is still in clinical trials.
“Our mathematical models showed that the only approach currently available to really reduce vivax prevalence [in Papua New Guinea] is mass administration of both a blood and liver stage drug,” Robinson says.
Regina Rabinovich, director of the Malaria Elimination Initiative at the Barcelona Institute for Global Health, says she is “excited about these results”.
Robinson and her colleagues plan to do similar studies in other Plasmodium vivax endemic countries.
“There are different strains of Plasmodium vivax all over the world and those strains have different patterns of relapse, so that would obviously change the scenario compared to what we’ve seen in Papua New Guinea,” she says.
One country in their sights — Guatemala — had more than 4,800 cases of malaria in 2014 alone, the study says.
But despite efforts to scale up prevention and control measures such as providing more mosquito nets and microscopy centres in high-risk areas in the country, “we’ve not seen a significant reduction of cases since 2012”, says Norma Padilla, the head of entomology and ecology of parasitic diseases at the Universidad del Valle in Guatemala.
“This study provides valuable information to accelerate elimination in areas like Guatemala, where conventional approaches don’t seem to be reducing cases as expected,” she says.
2017 is nearly over and Christmas is but a few days away. It’s always nice to head into a new year with inspiration and drive. One of my greatest sources of inspiration is the success of others. Interviews are one of my favourite things to do on this website because both you, and I, get some amazing takeaways to apply to our lives and our training.
A few months back I interviewed Alex Lorenz of Calisthenic Movement and promised an interview with Alex’s business partner, and the other half of Calisthenic Movement, Sven. Well, today is your lucky day because I bring you a full interview with Sven Kohl.
We talk about starting a YouTube channel, what order you should learn calisthenic movements in, keeping your body fat low, improving mobility, handling injuries, goals for the future, optimal methods for achieving advanced bodyweight moves and where Calisthenic Movement is heading over the next few years.
You don’t have to be a calisthenics connoisseur to benefit from Sven’s wisdom, either. Sit back and enjoy our chat!
1. Everyone knows you from Calisthenic Movement, but what was the life of Sven like before CaliMove? Could you tell us about yourself and your background?
Sven: My Grandfather was a sportsman and a sports teacher so I got into physical training pretty early. In my teen years I did sports like wrestling and boxing. At the age of 18 I started with weight & bodyweight training in the gym. I did the typical “gym stuff” like deadlifts, squats, bench press, but also bodyweight exercises like pull ups and dips.
In my 20’s I decided to become a physical therapist. I successfully finished a 3 year education and worked as a physical therapist for about 2 years, but it wasn’t my dream to work for someone else. The education itself was great and it helped me a lot in my future career as a trainer. I quit my job and started Calisthenic Movement.
2. What made you fall in love with calisthenics training? I know you had a gym background (weightlifting) before you started calisthenics.
Sven: To be honest, the typical gym bored me. I did it for over 8 years and wanted to do something different. Calisthenics was perfect for me. It is a mix of strength and skill training, and you can also build a good physique with it.
3. Was making a fitness YouTube channel always in your sights, or was there a pivotal point where you became especially motivated to do it?
Sven: I started the channel only 3 months after I started with Calisthenics. Back in the day it was a very small community and everyone in it made videos to show their progress or to show their “team” to the world. In the beginning it was strange for me. This was before Instagram; at that time it wasn’t normal for everyone to share parts of their life with others online, so I was not used to that. It was very strange to film myself and upload it.
4. From seeing old footage of you, it seems you had a good foundation when you switched from weights to purely bodyweight. What strength level did you have from your days in the gym, when you started looking to learn the more advanced moves like muscle ups, levers and handstands? For example, 10 pull ups, 20 dips and a 10 second L-sit…….
Sven: I could do around 15 Pull ups, 25 Dips and was able to hold an L Sit for 19 seconds. That’s it. No levers, no Handstand, no Muscle Ups.
5. I watched your ‘Evolution Of Workout’ video and you made INCREDIBLE progress from early 2012 to mid 2013! What would you attribute your success most to? Was it consistency, a certain type of training or diet that helped you make such good gains?
Sven: These were my “newbie gains” in combination with a good program. As I already told you I’ve had a certain amount of basic strength at the beginning.
The main problem was not the strength but the technique. For some movements (planche, handstand, levers) you need a good body perception & coordination.
So I had to get used to that. After nearly 6 years I’m still making progress when it comes to the technical approach.
6. Nobody can see pictures or videos of you without commenting on how ‘shredded’ you are. What do you find helps you maintain such a low bodyfat level all year? Do you follow a certain eating style, do cardio or just naturally have a fast metabolism? Any tips for people looking to get to single digit body fat and stay there?
Sven: I think it’s a good metabolism in combination with a good diet. I’m more of a skinny type, gaining muscle mass was really hard for me, that’s the downside. To lower my bodyfat I do the 8/16 intermittent fasting and vary my calorie intake every few days.
7. You have mastered some very advanced moves and seem to naturally be good at pulling movements – like front lever and one arm pull ups. Did you find these moves came easier to you than others, and were there any moves that you’ve found very difficult to master and make progress on?
Sven: Haha that’s really funny because pulling movements are my weak point. In fact it’s easier for me to do Handstand Push Ups, Planche and Handstand Presses instead of Levers and OAP.
Frontlever is really hard for me. I have to focus on it to maintain my level when it comes to this exercise.
Could you tell us what your current bests are for those moves and which ones you might not have achieved?
Sven: In some movements I’ve improved. The Handstand for example. This is a never ending story, I get better every year. But to be honest some of the moves like the bar muscle up I don’t do anymore (at least not regularly). When I master a movement and I can do it very cleanly for a couple of reps I switch to another challenging movement. I like to learn new stuff or improve movements to a point where I can do them very cleanly for a couple of reps, but I am not the “Sets & Reps Guy”.
9. Is there a certain style of training you like most when it comes to mastering/developing these moves – maybe things like GTG (Grease the Groove), high frequency, ultra specific training or even just a general approach?
Sven: GTG is great if you want to learn or improve a certain skill, but it is very boring and also very hard to manage.
Warm up, do only one set, and then wait, repeating this around 5-10 times a day…
At the moment I split my training into bent-arm and straight-arm training.
This is only an example. I vary my training every few weeks; I know my body very well and know when I have to do less or more and change something. For beginners or intermediates I wouldn’t recommend doing it this way. It’s better to stick to a solid program until you have the experience, and even then it’s not the best option for everybody.
10. What’s your take on training the basics? Do you still do days where all you do is basic pull ups, dips and push ups etc… for high reps? Do these still hold value even at an advanced level?
Sven: Basics are a must, even if I’m not a fan of “Sets & Reps”. You can’t do the hard moves in every training session. That will kill your body in the long term.
I include “easy training” sessions or implement basic exercises into my workout.
11. If someone has a ‘solid foundation’ and wants to go on to learn more intermediate moves, do you believe there’s a sensible order for achieving these moves? For example, the L-sit, muscle up, handstand and back lever are often said to be the next step after the basics. Or do you feel you could just work on more advanced moves like front levers and planches if you wanted to train for them?
Sven: Handstand is a no brainer. It’s more about balance so you should start as early as possible with it. When it comes to the other movements it’s hard to say because some people are more mobile, some are stronger at push and some are stronger at pull, but in general I would say to start with L Sit, Backlever and Muscle Up.
12. What are your current goals for the future? One Arm Handstand, Iron Cross, Manna or even a mobility goal like middle splits etc?
Sven: I am 33 years old so my goal is to maintain my level and physique and don’t injure myself haha 😀 But of course I have some wishes for the future: One Arm Handstand is on my list. At the moment I also like skill combination sets. You can see some of them on our instagram account.
13. El Eggs is famous for his flexibility (and blue shorts!) But from what I’ve seen, flexibility hasn’t come as naturally to you. Have you worked on your mobility and what style of flexibility training do you like most?
Sven: I don’t like passive stretching that much, I am more of an active type (mobility). I also implement mobility training into my workouts. It’s a slow process, but in the last years I improved my hip and shoulder mobility a lot.
14. Did you find a lack of mobility to be a limiting factor at some points in your training career?
15. Calisthenics comes with a high risk of injuries. Have you had any injuries and what advice do you have for anyone struggling with an injury?
Sven: Yes, even as a physical therapist I had to deal with it. If you push your limits it can happen, but it was never so bad that I had to quit training for a while. The good thing is that I know what I can do to regenerate; I can treat myself to a certain point. In the past I had to deal with a golfer’s elbow and some shoulder problems. During that time I couldn’t do One Arm Pull Ups or Handstand Push Ups. Some young guys out there feel invincible until they injure themselves for the first time; it happens to most out there who train very hard to get better.
1. Start low, increase slow (El Eggs’ quote ;D) Don’t rush it. Even if you get better very quick. It’s not only about your muscles. Structures like ligaments & tendons need a lot more time to adapt.
2. If you have to deal with an injury don’t quit your training at all. Of course it depends on the injury, but in the most cases you will regenerate faster, if you move your body and work on your mobility. Do easy movements you can do without pain and increase the level slowly step by step. (Always consult your doctor first).
16. What training advice would you give yourself if you could go back to when you first started working out, either with calisthenics or generally?
Sven: Take your time young lad 😀 Don’t rush it. I know you like working out and you are really passionate about it. But progress comes faster if you don’t overdo it and give your body time to adapt.
17. What would you say is the biggest thing you’ve learnt over the years from life?
Sven: I’ve learned a lot about my strengths and weaknesses and what I want to do with my life.
18. Where do you see the future of CaliMove in a few years from now? A bigger team, more programs, more workshops??
Sven: We are working on a lot of new projects, but I can’t tell you about all of them at the moment. We will come up with more programs on a completely new platform: new videos, new design, better presentation, easier to understand, etc. We will also step up our game in every other area (YouTube, Workshops, PT etc.). At the moment we’re building up our own little gym (@black_white_gym in Leipzig, Germany), which is also a great improvement for us.
It’s always fascinating chatting to great people and I’m forever amazed at how willing to talk they are. We all have busy schedules and making time can be difficult. I’d like to say a big thank you once again to Sven for chatting to me and shedding some light on what it takes to get to a very advanced level in an ever-growing community.
Something I also found very fascinating was the occasional discrepancy between Alex and Sven’s views on certain subjects, which shows wonderfully how you can have differing views within partnerships and still be a great team.
“Those who seek to achieve things should show no mercy.”– Kautilya, Indian Philosopher, 3rd Century B.C.
I spent several years in India among the people of that country. I am well acquainted with the concepts of karma and dharma. What is more important, I am familiar with them from the point of view of the culture where they originated. Though I find that much of the Vedic texts have value, and that the land, its people, and its culture are filled with treasures beyond our wildest imaginings, for those outside the belief system of Sanatana Dharma – or, as it is called in the West, Hinduism – the concept of karma too often becomes an excuse.
That is never more true than with those who fancy themselves to be Witches…
This is a topic that is very near and dear to my heart. I have been studying Ayurveda for a few years now, and have done so because I am a firm believer in its value to the world at large. The Vedic scholars of India made these contributions to the world thousands of years ago, and Ayurveda is arguably the oldest form of indigenous healing on the planet. Certainly it is one of the best-documented and most complex modalities in existence.
Behind the cut is an article from the Wall Street Journal. Dow Jones / Wall Street Journal is a client of mine, so I feel OK reprinting it here. All credit has been cited. At any rate, this is an issue of ever-increasing importance. As Indigenous knowledge and remedies gain popularity and more market share, corporations, and the pharmaceutical industry in particular, are trying to ‘patent’ things that have been around for millennia. They then try to pawn them off as “innovation”, while turning around and charging the very people they took these things from for the “privilege” of using what was their heritage to begin with. This is what has been termed ‘biopiracy’. Please read the article; it’s pretty fascinating.
LA JOLLA, CA – February 4, 2013 – A team led by scientists at The Scripps Research Institute (TSRI) has identified specific cellular events that appear key to lupus, a debilitating autoimmune disease that afflicts tens of millions of people worldwide. The findings suggest that blocking this pathway in lupus-triggering cells could be a potent weapon against the disease.
In the new study, described in an online Early Edition of the Proceedings of the National Academy of Sciences the week of February 4, 2013, the researchers determined that the absence of a certain type of immune cell, or of a key signaling molecule within the cell, greatly reduces the development of autoimmunity in mouse models of lupus. Mice with these protective changes showed little impairment of their normal immune functions.
“We are excited about the potential of such an inhibitor as a new kind of treatment for lupus, as well as other autoimmune conditions,” said Argyrios N. Theofilopoulos, chair of TSRI’s Department of Immunology and Microbial Science and a senior author of the new study.
While there are therapies for lupus, also known as systemic lupus erythematosus (SLE), none of these tightly targets its underlying causes. The condition appears to arise from both genetic and environmental factors, and involves complex autoimmune processes. A key feature is the activity of antibodies—“autoantibodies”—that attack the patient’s own nucleic acids (DNA, RNA) and other cellular proteins. Lupus’s signs and symptoms include rashes, joint pain, anemia and kidney damage. Untreated complications, such as kidney failure and blood clots, can be fatal. Physicians typically treat lupus with broadly immunosuppressive drugs, which raise patients’ risks for some infections and cancers.
Theofilopoulos and his laboratory have long been at the forefront of lupus research. In recent years, they and other researchers have found evidence that a powerful class of immune-stimulating chemicals, known as type I interferons, are essential to the vicious cycle of lupus autoimmunity.
The cycle apparently begins when certain immune cells mistakenly detect self-proteins and nucleic acids as “foreign” and begin pumping out type I interferons. This mobilizes other elements of the immune system, including the antibody response, and soon autoantibodies are attacking self-molecules in healthy cells. The autoantibodies in turn present these “foreign” molecules to type I interferon-producing cells, adding fuel to the autoimmune fire.
Lab-dish evidence has suggested that the key producers of type I interferons in lupus are a relatively sparse class of immune cells known as plasmacytoid dendritic cells (pDCs). In the new study, Theofilopoulos and his colleagues sought more conclusive evidence of pDCs’ role, using mouse models of lupus.
The experiments were led by first author Roberto Baccala, an associate professor in the TSRI Department of Immunology and Microbial Science who has worked with Theofilopoulos on lupus-related research for the past two decades. To help determine whether lupus can develop in the absence of pDCs, the TSRI scientists collaborated with Keiko Ozato, an expert on immune cell genetics at the National Institutes of Health. Ozato has developed a strain of mice that have no pDCs due to lack of a key gene (IRF8) needed for these cells’ development.
The team knocked out this gene in another strain of mice that normally succumbs to a lupus-like autoimmune disease with age. These mice grew up without pDCs and, as a result, were largely protected from the disease.
Next, the researchers sought to highlight specifically how pDCs promote lupus autoimmunity. For this they used a different mouse gene knockout, based on a mouse strain developed in the TSRI laboratory of Bruce Beutler, a long-time collaborator who has since moved to become the director of the Center for Genetics of Host Defense at the University of Texas Southwestern Medical Center.
Beutler’s special mice lack a working gene for a protein called SLC15A4, and as a result of this mutation, the pDCs in these mice develop normally, but are largely unable to produce type I interferons in response to the usual stimuli. Such cells normally produce large amounts of interferons after detecting viral or bacterial genetic material. For this detection, they use a class of internal receptors called TLRs (toll-like receptors). Beutler received the 2011 Nobel Prize in Physiology or Medicine for his work on TLRs. His SLC15A4-mutant mice specifically lack the ability to respond to stimuli that would normally be detected by two of these receptors, TLR7 and TLR9. These same TLRs have been implicated in lupus—they apparently mistake self-nucleic acids for viral nucleic acids.
Working with Beutler, the TSRI team applied the SLC15A4 mutation to a strain of lupus mice to see if it would protect them from autoimmunity. And it did. “The usual lupus-like signs significantly decreased, and survival was extended,” said Baccala.
“We are now trying to find pharmacologic inhibitors of type I interferon production, and in particular, inhibitors of SLC15A4,” said Theofilopoulos.
Emerging evidence indicates that TLR-based detection of self-molecules and production of interferons contribute to other autoimmune conditions, too. Thus, inhibitors of these specific immune signaling pathways might have use beyond the treatment of lupus. “We think that our findings have implications for rheumatoid arthritis, diabetes, neuroinflammatory diseases and many other diseases in which TLRs appear to play a role,” Theofilopoulos said.
Other contributors to the study, “Essential requirement for IRF8 and SLC15A4 implicates plasmacytoid dendritic cells in the pathogenesis of lupus,” were TSRI researchers Rosana Gonzalez-Quintial, Amanda L. Blasius, Ivo Rimann and Dwight H. Kono.
MSEA and local associations have been working hard for years to collaborate with local superintendents, school boards, and the Maryland State Department of Education (MSDE) to develop fair, transparent evaluation systems that help improve teaching and learning.
We believe that the purpose of educator evaluation systems is to strengthen the knowledge, skills, and classroom practices of educators to improve student learning. Local collaboration between school systems and educators—and local flexibility for jurisdictions to craft models that work best for their educators and students—are essential to developing fair, rigorous, and valid evaluation systems. High quality professional development is another critical element of successful evaluation systems. Support systems to hone the effectiveness of teacher practice and student learning should be present at the individual, school, and district level.
In May of 2010, the Education Reform Act (ERA) became law. In addition to providing early mentoring for teachers who may be at risk for failing to achieve tenure, the law mandated that student growth would be a “significant component” and “one of multiple measures” in a teacher’s evaluation. According to the law, no evaluation criterion could account for more than 35%. The law also mandated that evaluation models must be mutually agreed upon at the local level. Since ERA became law, locals have been hard at work revising and fine-tuning their evaluation models to reflect its provisions.
On August 24, Maryland, along with nine other states and the District of Columbia, was announced as a winner of round two of Race to the Top (RTTT). Maryland's application also touched on the issue of educator evaluations.
Ensuring that the reforms implemented through RTTT and ERA are equitable, fair, and beneficial to both students and educators is a top priority.
When the General Assembly debated the ERA, it prioritized local collaboration in developing local evaluation systems. In fact, the law required that local evaluation systems, in following general guidelines from MSDE, be mutually agreed upon by the local school system and local education association; the state was given no oversight role or authority in order to protect local autonomy and the creation of systems that made sense for local school systems.
The law asked the state to develop a default evaluation model as a model of last resort if such local collaboration could not yield agreement. However, MSDE’s actions have completely contravened the intent of the law and the purpose of the default model. Instead of a model of last resort, MSDE has used the default model to bully local districts into conforming to a one-size-fits-all approach which requires 20% of the evaluation to be based on a high-stakes state test (i.e., MSA or PARCC), regardless of local agreements and the total absence of such a requirement in the ERA.
Over the protests of local school systems, throughout 2012 MSDE repeatedly threatened to overturn any local evaluation system which did not include this 20% threshold. MSDE has been dictating to local districts the specific criteria to be included in teacher and principal evaluations—a power never provided to it in the ERA.
All told, such efforts flout the ERA and the good-faith collaboration between local superintendents, school districts, and education associations in the development of evaluation systems that work for the local district. By insisting on the default model as a minimum requirement for all local models, MSDE threatens to dismantle local systems and autonomy and thumbs its nose at the General Assembly by not comporting with the ERA.
Along with parents, superintendents, and school board members, MSEA led the way on several pieces of common sense legislation to improve the implementation process. These bills passed the General Assembly through a series of overwhelming and bipartisan votes during the 2014 legislative session and were signed into law by Gov. O'Malley.
Reaffirm the authority of local school districts and their bargaining units in the development and implementation of teacher and principal evaluations, and guarantee that no state assessment can be used for personnel decisions through at least the 2016-17 school year. From protecting Maryland’s nationally recognized Peer Assistance and Review programs to cultivating the next great locally developed evaluation model, local autonomy is essential to Maryland’s ability to encourage innovation and strengthen the teaching profession.
Ensure reforms are implemented in a dynamic process where local districts can make necessary adjustments and are not locked into a one-size-fits-all timeline and model.
“These bills will help us re-establish some common sense in the implementation process,” said MSEA President Betty Weller. “Maryland’s public schools have long been a national leader and, thanks to the General Assembly, Maryland is now a national leader for how a state can come together and get these major changes right."
As Maryland continues to struggle with the implementation of Common Core and corresponding PARCC state assessments, we support a continued moratorium on the high-stakes consequences of the problematic and unproven PARCC test for students and teachers.
Maryland’s teachers need more time, support, and resources to successfully implement new evaluation systems and Common Core State Standards, according to a survey of 745 teachers. This survey reflects the on-the-ground perspectives of educators working hard to get these changes right despite the challenges they face.
In this interview with Center Maryland, MSEA Vice President Cheryl Bost discusses all the changes taking place in Maryland’s classrooms, including Common Core, shifting student assessments, and new teacher and principal evaluations.
Read about MSEA President Betty Weller's recent statements on MSA testing and the best path forward for students.
Checked the game camera last weekend out at a friend’s lease. The fall season is shaping up pretty good. This big solitary boar was seen frequently at the feeder. That’s one of the largest hogs I’ve seen in this area… Probably weighs close to 200 pounds if I had to guess.
There aren’t many wild hogs in this area, but partly because there aren’t many, they’re not hunted much, which gives the ones that are here more opportunity to grow to sizes like this. To get an idea of just how big this pig is, compare him to the yearling deer snapped by the same game cam.
September 1st marks the beginning of hunting season here in Texas, and you better believe I’ll be out stalking the woods. Not having access to private hunting property, I’m left accessing public lands. Texas doesn’t have much public hunting land compared to many other western states, but there are still nearly 1 million acres available to anyone with a hunting license and annual public hunt permit.
Archery season for deer doesn’t begin until October 2nd, but certain Wildlife Management Areas (WMAs) open up for feral hog on September 1st. I haven’t hunted this particular parcel of land before, and won’t have time to scout it before heading out. Here, the internet becomes incredibly valuable. Using Google Earth, I surveyed the WMA; the detail of the aerial satellite photos makes it possible to discern hardwood stands from pine forests. For even more detail, however, I use the Gap Analysis Program’s Land Cover Viewer.
Developed by the University of Idaho and the National Biological Information Infrastructure program, the highly detailed land map uses advanced satellite and thermal imagery to determine the precise makeup of plant life, soil, and water. Paid for in part with your tax dollars, the program is available to anyone on the internet. At Level 3, the most detailed level, you can see all 590 different types of terrain.
Since I’m going to be after feral hogs, I’m looking for mostly hardwood oak forests and riparian terrain with dense cover. Using the Land Cover Viewer it’s easy to zoom in on the terrain and determine where that cover is. In the mosquito infested Texas summer heat, hogs seek out cool relief from wallows they make in muddy terrain. Ravines, bogs, swamps, and flooded timber all make excellent hidey holes for heat stressed porkers.
This trip is primarily to scout out a good location to set a blind for deer season. No amount of research or scouting via maps and photos on the internet can ever substitute for actual boots on the ground. Still, I’m hopeful that a bit of research on the internet might result in my finding a nice hog or two, if they’re there, while I’m out scouting for deer.
French fashion designer Hubert de Givenchy, who created famous looks for Audrey Hepburn, Grace Kelly, and Jackie Kennedy, has died at the age of 91.
The designer is best known for the “little black dress” worn by Audrey Hepburn in Breakfast at Tiffany’s. He also broke from French couture tradition when he hired a group of nonwhite models in the early 1970s, a move considered controversial at the time.
He dressed Audrey Hepburn and Jackie Kennedy and forged a timeless style for a golden age. The designer’s elegant tailoring and eye for a perfect line, combined with the unusually spare taste of Hepburn, created style magic. Together they forged a refined image of pared-to-the-bone glamour that still looks chic more than half a century later. We owe much to Givenchy for creating a sartorial language everyone was eager to learn, and here we celebrate the late designer’s work as seen on his most famous patron.
It is a sleeveless, floor-length gown with fitted bodice embellished at the back with distinctive cut-out, the skirt slightly gathered at the waist and slit to the thigh on one side, accompanied by a pair of black elbow-length gloves.
The little black dress attained such iconic fame and status that it became an integral part of a woman’s wardrobe.
Black fitted long dress, rhinestone tiara, statement necklace, and pointy-toe pumps.
Hepburn insisted that Hubert de Givenchy design all her clothes for Funny Face, saying simply, “His are the only clothes in which I am myself.” Good thing the producers agreed, because the designer’s red gown lent itself to the most unforgettable wardrobe moment of the film.
Red Bustier evening gown, suede slingback pumps, long satin gloves and crystal statement collar necklace.
On January 18, 1969, at age 39, Audrey Hepburn married Italian psychiatrist Andrea Dotti. The ceremony was held at the town hall in Morges, Switzerland. Audrey Hepburn wore a pink wool dress with matching head scarf designed specifically for her by longtime friend, Hubert de Givenchy.
Doodle-embroidered wool dress, Givenchy L’Interdit Eau de Toilette Spray, patterned scarf, and white leather pumps.
Hepburn serves as ultimate style muse in Charade, at the height of the 1960s. She stars alongside Cary Grant and sports round, oversize sunglasses, red and yellow wool coats designed by Hubert de Givenchy, leopard print headscarves and tiny pearl earrings.
Mustard long coat, Staud Bissett bag, Carolee pearl stud earrings, and Gucci leather gloves with pearls.
Shaili Chopra: My heroes were all men, said Nicola Adams when she became the first woman to win an Olympic boxing gold. It was a telling statement – not a wrong one – about the inspirations and aspirations around us. Adams wished she had a female boxer to 'want to be like'. We all hunger for motivation and brilliance, and in most cases it may well be gender-agnostic, but look around and one wonders why there remain so few female role models. In the Indian context, we need to change this more than ever.
Google 'lists' of the last decade and you'll find the same women reigning over the charts. They have aged and succeeded further, but those lists remain narrow, concentrated on just a handful of women. We have remained stuck in our definition of idols. The notion of role models doesn't have to be limited to legendary figures, women CEOs, or political achievers. Our world – driven by digital – is changing this even more. Every woman is a leader, and most of all she is more of a 'real' model than just a role model. We have inspiration among women building digital empires from home, those spending hours teaching the young children of the neighbourhood, and those who run assembly lines at manufacturing units in a 9-to-5 job. What is holding us back from recognising these women? From celebrating their big and small achievements? Why must we be steered by outdated norms of role models and constrain the idea of inspiration?
Change the definition and we may change the attitude. Esther Duflo and her colleagues have studied several aspects of women's leadership, and I recall one particularly striking conclusion. "Seeing women in charge persuaded parents and teens that women can run things, and increased their ambitions. Changing perceptions and giving hope can have an impact on reality," she noted. This study was done in West Bengal, so it was grounded in the Indian context. These conclusions may well hold true for urban India, corporations, and other organisations. We get inspired by our environment. And if our environment encourages women – given that historically they have been left out of the success journey in large numbers – we are bound to develop a gender-democratic attitude.
India Inc is also among those who need to close the gap. Though it's hard to confirm whether some forced efforts – such as Sebi's directive to increase the number of women on boards – are encouraging women into high positions, one hopes the heightened awareness and recognition of what women bring to the boardroom is being noted. It's not just an appointment but a new mentality. A paradigm shift. An HBR piece highlighted an important missing link in the approach: "Becoming a leader involves much more than being put in a leadership role, acquiring new skills, and adapting one's style to the requirements of that role. It involves a fundamental identity shift. Organisations inadvertently undermine this process when they advise women to proactively seek leadership roles."
Take the example of our entrepreneurship boom. India has hundreds of stories of female success, and now is a good time to build upon those bricks of brilliance. Not only are these women instrumental in paving the way for others, but they also bring tremendous peer experience, giving many the benefit of learning.
Modelling ourselves on others, learning their stories and struggles, is part of human nature. And so, as men have been for centuries, women with exemplary stories, successes, and struggles must be hailed. The time to start that process is now.
The discerning photography quickly expanded across the country from Bonn, later traversing national boundaries to find allied initiatives in the US. These parallel movements abroad had a different genealogy but were headed in the same direction. In Germany, Bernd and Hilla Becher were the promoters and key figures of the development. Their typological representation of little-noticed industrial architecture – in the mode of a visual story without a plotline – paved the aesthetic way forward. Significant motifs of this new photography were empty urban streets, inconsequential everyday architecture, sometimes, though less frequently, people, dismal public and private interiors, rough and jumbled nature views, and many unsightly byproducts of industrial civilization. Fairly “unattractive material,” in Peter Galassi’s opinion, yet nevertheless a succinct commentary on the long ignored underbelly of modernity’s affluence, individual freedom, consumption, and glamour.
The other three were Candida Höfer, Tata Ronkholz, and Axel Hütte – the first major appearance of the Becher School. Wilhelm Schürmann was responsible for lighting the first spark for In Deutschland, while Ulrich Görlich and Wilmar Koenig (the two Schmidt students), Johannes Bönsel, Hans-Martin Küsters, Martin Manz, and Hartmut Neubauer contributed in turn. Schmidt and the Bechers, the only Germans represented in New Topographics, provided the earliest connections to the advanced US scene – to Robert Adams and Stephen Shore respectively.
Landlords in every state may collect a security deposit from new tenants, but all states require that the security deposit be refundable at the end of the tenancy if the property is in the condition it was in at move-in, minus “normal wear and tear”. Put another way, you may apply the security deposit only to “damage” to the unit or its fixtures.
As you may have experienced, defining “normal wear and tear” versus “damage” is often a source of disagreement between landlords and tenants. In this article, I explain “normal wear and tear” and “damages,” provide concrete examples of each, and discuss good practices for minimizing conflict on this issue.
Three of the four definitions listed above contain the word “normal,” and we all know that “normal” can be subjective.
misuse or neglect that reduces value, usefulness, etc.
The definition of “damage” also includes some pretty subjective terms.
Although “normal wear and tear” and “damage” are difficult to define, you can nonetheless protect both yourself and your tenants from misunderstandings or confusion. As with most things, communication is the key: if both you and your tenants are clear about the condition of the unit at move-in, the importance of promptly reporting needed repairs, and expectations at move-out, the tenancy and the end of the tenancy will be smoother.
Insist on a walk-through with new tenants. At the walk-through, new tenants will have an opportunity to note in writing existing damage and wear and tear in the rental. Encourage tenants to examine the rental from floor to ceiling, open and close doors, test all appliances and locks, look for leaks in the kitchen and bathrooms, and look for signs of pest infestations. In addition, consider taking dated photographs of the unit for your tenant file. Both landlords and tenants are protected by the walk-through: tenants can’t be blamed for damage that was noted in the file at the beginning of the tenancy, and landlords have a baseline to refer to upon move-out. Many walk-through forms and checklists are available online.
Require in the lease that tenants promptly notify you of needed repairs. Encourage your tenants to help you maintain the rental unit; it’s in everyone’s best interests. Make it clear to tenants that if they don’t notify you of a leaky pipe or broken dehumidifier, they could be responsible for any damages (such as mold and rotting wood). Make it easy for tenants to notify you by making your contact information available in different formats, such as on business cards, your website, or even refrigerator magnets.
At move-in provide tenants with a “Wear and Tear versus Damages” document and a cleaning checklist. Several websites (including nolo.com and some state websites and landlord or tenant organizations) provide charts that list different examples of “normal wear and tear” and “damages.” Give new tenants similar charts and have them initial them at the time they sign the lease agreement. In addition, give new tenants a cleaning checklist, so they know what will be expected of them at move-out.
Before move-out refer tenants to the “Wear and Tear versus Damages” document and cleaning checklist. Being reminded of the difference between “normal wear and tear” and “damage” can be helpful to tenants when they are cleaning their rentals in preparation for moving out. For example, if a ceiling fan bulb is missing, they can replace it; and they will know to remove any trash piles or waste from the yard. Also give your tenants the cleaning checklist at move-out to clarify what you expect them to clean.
As with many landlord and tenant matters, clear communication can minimize misunderstandings and conflict. Always conduct a walk-through at move-in to give you and your tenants an opportunity to note specific damage and wear and tear in the rental property. Both you and your tenants should sign the document. Encourage your tenants to report any needed repairs, and respond promptly to their requests. Finally, provide tenants at move-in and move-out with a list of examples of “normal wear and tear” and “damage,” along with a cleaning checklist.
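To make the arithmetic concrete, here is a minimal sketch (not from the article, and the function name and record format are my own invention): itemized move-out charges are classified as either "damage" or "normal wear and tear," and only the damage items may be deducted from the deposit.

```python
def deposit_refund(deposit, charges):
    """Compute (refund, total deducted) from a deposit.

    charges: list of (description, amount, is_damage) tuples, where
    is_damage=True marks a deductible "damage" item and is_damage=False
    marks non-deductible "normal wear and tear".
    """
    deductions = sum(amount for _, amount, is_damage in charges if is_damage)
    # The refund can never go below zero; excess damages are billed separately.
    return max(deposit - deductions, 0.0), deductions

refund, deducted = deposit_refund(1000.0, [
    ("faded paint", 150.0, False),          # normal wear and tear: not deductible
    ("hole punched in door", 200.0, True),  # damage: deductible
])
# refund == 800.0, deducted == 200.0
```

The classification itself (the `is_damage` flag) is exactly the subjective judgment the article describes; the point of the walk-through notes and the "Wear and Tear versus Damages" chart is to make that flag defensible for each line item.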
Donnie Campbell is a professional ultra-marathon runner and endurance running coach. In December 2016 he took on Ramsay’s Round; a 24-hour challenge taking in 24 mountains covering 98 kilometres and requiring 8,500 metres of climbing – the equivalent of ascending Mount Everest from sea level.
I’m sat on a frozen rock high up on the Grey Corries, an unforgiving mountain range in the West Highlands of Scotland. It’s taken me 17 hours to climb over 7,000 metres, but I’ve bagged 18 of its 24 mountains – known as Munros in this part of the world. Unfortunately, the nausea has put me off eating, and the vomiting has ensured that my stomach is now empty. My legs are heavy due to the snowy conditions underfoot, while the agony of the Morton’s Neuroma – a type of inflammation of the nerves – is making every step feel like someone stabbing the ball of my foot with a roasting hot iron.
As the sun begins to set, I can see in the distance the six remaining Munros left to climb. Soon it will be dark, and I will be facing another freezing winter’s night in the mountains. My mind begins to wander. How did I end up in this situation?
It was going to take everything I had to get me over the remaining Munros. To complete the Round in under 24 hours, I knew the next few hours were going to be the toughest in my life. I was going to have to dig deeper than ever before, push my physical and mental endurance to the very limits, and boy was it going to hurt. I stood up and set off again, smiling.
My university studies in sports coaching and development allowed me to apply physiological and psychological theory to help me understand how, even as a teenager, I was able to push myself to almost breaking points. To discover the method in the madness, if you like. I’ve always been interested in the power of the mind, and how it can impede or improve physical performance. There is a lot of evidence out there now to suggest that it is the brain that is the limiting factor when it comes to physical performance, not the body.
The ‘Central Governor’ theory, developed by exercise physiologist Tim Noakes, outlines how your brain will cause you to slow down before you reach your physiological max, to protect homeostasis and stop you from physically running yourself to death. Another physiologist, Samuele Marcora, proposes a similar theory; essentially that the brain has the overriding say on how far you are going to be able to push yourself on any given day.
Their conclusions tally up with my experiences. In 2011, I took part in the West Highland Way ultra-marathon: 95 miles between Glasgow and Fort William. With 12 miles to go, I experienced my first ever projectile vomit. My abdominals were in an uncontrollable spasm as a litre of fruit smoothie returned moments after I’d (unwisely) consumed it. It was not pretty, and the thought of calling it quits did cross my mind. But not for long. I shuffled, walked and crawled to the finish line to complete the race in 22 hours 35 minutes. I finished six hours behind the winner.
The next day I could barely walk, but was already planning my next run. I wanted to see how far I could go without sleep: despite my suffering, I sensed that 95 miles was not my limit. The following year I would once again run the West Highland Way, only this time I’d carry on north to the Isle of Skye, a route totaling over 180 miles. My target was to complete it inside 48 hours.
I first encountered the very meaning of this quote, attributed to 20th Century psychologist William James, 42 hours into my run to Skye. With just over half a marathon to go and one last big climb, I felt a sense of calm come over me and a renewed energy where running felt effortless again. I had previously experienced what people refer to as ‘runner’s flow’, where you feel like you are gliding over the trail or mountains. Sometimes it lasts for minutes, other times a bit longer. But this was different.
I had been fighting against mental and physical fatigue, everything ached, my feet were bruised and battered. But all this seemed to disappear as I found a flow where my pace picked up and my mind - previously occupied with all the pain and fatigue - was now 100% focused on the last few miles. I ran the last half marathon in under two hours, completing the 184 miles in 44 hours and 30 minutes.
What had changed in an instant? What had happened in the last 12 miles to make running seem so easy, when it should have been hard?
In Alex Hutchinson’s book Endure: Mind, Body and the Curiously Elastic Limits of Human Performance, he looks at human endurance and how, if someone is highly motivated, then they will suffer longer and push their body further to achieve their goal. During that run, I was the most motivated I had ever been. At 4am on the second day, the sun had just risen, and I was running along a road in the middle of nowhere, shouting at myself as loud as I could to keep pushing and maintain my pace: “I HAVE GOT THIS!”, “I CAN DO THIS!”.
An old lady drove past and stopped to ask if I was ok. I told her I was fine, and that I was running to catch the ferry from Mallaig to Skye, prompting her to politely offer me a lift. Even now, I can still see her face contorted with confusion when I declined!
I have always used positive self-talk when exploring my limits of physical and mental performance. It has been shown in numerous studies to increase performance, and is widely accepted in sports psychology as a great coping strategy for when things get tough.
I’m approaching the most technical bit of Ramsay’s Round: traversing an exposed ledge and climbing a steep 30-foot face covered in snow. Support runners Tom and Andrew have come to run alongside me for the last stretch. However, sickness has forced Andrew to bail out after three Munros, and he’s unwittingly taken our micro crampons and head torch batteries back with him. It’s a blow, but I’ve come too far to stop at this point. If necessary, we’ll have to cut steps into the snow with our ice axes and feet. We ascend into the darkness.
Each step, each kick, each ice axe placement has to be perfect. I’ve been on the go for more than 19 hours, but I can’t afford a lapse in concentration here - one wrong foot or hand placement could lead to a fall to almost certain death. Sweat drips off my face from the effort, but as we get over the crux climb, I can see the path leading up toward Munro number 20.
Sometimes, the battle with your mind is one that can’t be won. My first ever DNF (did not finish) came at the Mont Blanc 80km race in 2014. Two weeks before the race I had been suffering from a parasitic infection and had just finished my course of antibiotics before the race. I thought I was 100%. But 15km in I realised this was not the case. As soon as I didn’t feel myself, the negative thoughts cascaded through my brain (“what’s the point of grinding out a finish?”, “why carry on when I know I can’t perform to my best?”). After 30km, I stepped off the trail, handed my number to the marshal and withdrew. As I fought back tears, it proved a far cry from what I imagined my first DNF would look like. It hurt.
Looking back now, it makes sense why it happened. I had lost my motivation, which meant my brain was going to call it quits a lot sooner than when I normally would. I was also planning on asking my girlfriend Rachael to marry me after the race, so I was not 100% focused. Thankfully Rachael said yes, so my first DNF actually turned out to be a great long weekend in Chamonix!
Rachael shares my love for ultra-running, and she is an incredible endurance athlete, one of the toughest I know. We push ourselves and help to motivate each other before and during races. Not long after we got married, she was running a 50-mile trail race when, 20 miles in, she hurt her leg leaping off a rock on a steep downhill. As it progressively got worse, she told me that she had envisaged what I would say: “can you walk on it?”, “can you run on it?”, “keep going then!”. It turned out that she had fractured her fibula. Yet not only did she finish, but she actually won the race. It shows that our motivations to push ourselves, to continue past breaking point, and to endure, can come from a variety of sources and inspirations.
As I stand on Ben Nevis, the UK’s highest mountain, I’ve got just 55 minutes and 1,345 meters of quad-thrashing descent standing between me and a new winter record. I know I can do it. I hurtle down the Ben, trying to stay on my feet through the ice. My heart is pumping, the adrenaline is rushing through my veins, and as rocks kick up off the mountain I keep pushing harder. It’s time to completely empty the tank and hold nothing back.
I splash full speed into River Nevis and seconds later I’m lying crumpled on the finish line tarmac. I have nothing left, unable to respond to those wishing me congratulations. I’ve given it everything I have, I’ve pushed myself to a new limit. And that, for me, is a success. It’s not about breaking records or winning races. It’s about the adventure, the challenge, and exploring my own physical and mental peak. |
Automated machine learning has become a topic of considerable interest over the past several months. A recent KDnuggets blog competition focused on this topic, and generated a handful of interesting ideas and projects. Of note, our readers were introduced to Auto-sklearn, an automated machine learning pipeline generator, via the competition, and learned more about the project in a follow-up interview with its developers.
Prior to that competition, however, KDnuggets readers were introduced to TPOT, "your data science assistant," an open source Python tool that intelligently automates the entire machine learning process.
For scikit-learn-compatible datasets, TPOT can automatically optimize a series of feature preprocessors and machine learning models that maximize the dataset's cross-validation accuracy, and outputs the optimal model as Python code leveraging scikit-learn. The machine learning pipeline generation and optimization project is helmed by well-known and prolific machine learning and data science personality Randy Olson.
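The core idea TPOT automates can be sketched in a few lines of plain Python. The toy optimizer below is not TPOT's actual implementation (TPOT searches with genetic programming over real scikit-learn components and scores candidates with cross-validation); the preprocessors and "model" here are hypothetical stand-ins. It simply enumerates (preprocessor, model) combinations, scores each candidate pipeline, and keeps the best, which is the loop any such tool is built around.

```python
import itertools

# Hypothetical "preprocessors": each is just a function on a list of numbers.
PREPROCESSORS = {
    "identity": lambda xs: xs,
    "scale":    lambda xs: [x / max(map(abs, xs)) for x in xs],
}

def threshold_model(cut):
    # Hypothetical one-parameter classifier: predict 1 if the value exceeds cut.
    return lambda xs: [1 if x > cut else 0 for x in xs]

MODELS = {f"cut={c}": threshold_model(c) for c in (0.0, 0.5)}

def accuracy(preds, labels):
    # Fraction of predictions that match the labels.
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def optimize(X, y):
    """Exhaustively search pipelines; return (name, score) of the best one."""
    best_name, best_score = None, -1.0
    for (p_name, prep), (m_name, model) in itertools.product(
            PREPROCESSORS.items(), MODELS.items()):
        score = accuracy(model(prep(X)), y)
        if score > best_score:  # strict improvement keeps the first winner
            best_name, best_score = f"{p_name} -> {m_name}", score
    return best_name, best_score
```

On a toy dataset such as `optimize([-2, -1, 1, 2], [0, 0, 1, 1])`, the search returns a pipeline that classifies the data perfectly. TPOT replaces the exhaustive loop with an evolutionary search, because the real space of preprocessors, models, and hyperparameters is far too large to enumerate.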
Randy is a Senior Data Scientist at the University of Pennsylvania Institute for Biomedical Informatics, where he works with Prof. Jason H. Moore (funded by NIH grant R01 AI117694). Randy is active on Twitter, and some of his other projects can be found on his GitHub. Of note, Randy has put together a really great Jupyter notebook collection of data analysis and machine learning projects, and also has a self-explanatory project called datacleaner which may be of interest to some.
Matthew Mayo: First off, thanks for taking some time out of your schedule to speak with us, Randy. You have previously shared an overview of your automated machine learning library, TPOT, with our readers, but what if we start by having you introduce yourself and provide a little information on your background.
Randy Olson: Sure! In short: I'm a Senior Data Scientist at the University of Pennsylvania's Institute for Biomedical Informatics, where I develop machine learning software for biomedical applications. As a hobby, I run a personal blog (randalolson.com/blog) where I apply data science to everyday problems to show people how data science can relate to almost any possible topic. I'm also an avid open science advocate, so all of my work can be found on GitHub (github.com/rhiever) if you ever want to learn and find out how my projects work.
TPOT is a collaboration project, correct? What about your co-conspirators; could you give us some info about them or point us in a direction to find out more?
Daniel Angell is a software engineering student at Drexel University who helped a ton with the TPOT refactor over Summer '16. Whenever TPOT exports directly to a scikit-learn Pipeline, you can thank Daniel.
Nathan Bartley is a computer science Master's student at the University of Chicago who was heavily involved in the early TPOT design phases. Nathan and I co-authored a research paper on TPOT that ended up winning the Best Paper award at GECCO 2016.
Weixuan Fu is a new programmer at the Institute for Biomedical Informatics. Even though he's new to the TPOT project, he's already made several major contributions, including placing time limits on TPOT pipeline evaluations. It turns out that placing a time limit on a function call can be pretty difficult when you need to support Mac, Linux, and Windows, but Weixuan figured it out.
Your TPOT post was informative and described the project quite well. It has been several months, however, and I know that you have been promoting and sharing the project, which now has over 1,500 stars on GitHub and has been forked nearly 200 times. Is there anything additional of note you would like our readers to know about TPOT, or any developments that have occurred since your original post? Is there anything you would like to share about future development plans?
TPOT supports regression problems with the TPOTRegressor class.
TPOT now works directly with scikit-learn Pipeline objects, which means that it also exports to scikit-learn Pipelines. This makes TPOT's exported code much cleaner.
TPOT explores more scikit-learn models and data preprocessors. We've been fine-tuning the models, preprocessors, and parameters with every release.
TPOT allows you to set a time limit on the TPOT optimization process, both at the per-pipeline level (so TPOT doesn't spend hours evaluating a single pipeline) and at the TPOT optimization process level (so you know when the TPOT optimization process will end).
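That last point, budgeted optimization, can be sketched with the standard library alone. This is a simplified rendering of the assumed semantics, not TPOT's code: a loop that stops evaluating candidate pipelines once a wall-clock budget is spent. (TPOT itself exposes these limits as constructor arguments rather than requiring you to write such a loop, and it additionally caps the time spent evaluating any single pipeline.)

```python
import time

def optimize_with_budget(candidates, evaluate, max_total_s):
    """Evaluate candidates until the wall-clock budget runs out.

    Returns the best candidate found plus how many were actually evaluated.
    """
    deadline = time.monotonic() + max_total_s
    best, best_score, evaluated = None, float("-inf"), 0
    for cand in candidates:
        if time.monotonic() >= deadline:
            break  # overall optimization budget exhausted
        score = evaluate(cand)
        evaluated += 1
        if score > best_score:
            best, best_score = cand, score
    return best, evaluated
```

With a generous budget every candidate gets evaluated; with a zero budget the loop returns immediately. That predictability is exactly what an overall time limit buys you: you know when the optimization process will end.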
In the near future, we're primarily going to focus on making TPOT run faster, especially on large data sets. Currently, TPOT can take hours or even days to finish on large (50,000+ record) data sets, which can make it difficult to use for some users. We have a whole bag of tricks to speed up TPOT --- including parallelizing TPOT on multi-core computing systems --- so it's just a matter of time until we get those features rolled out.
Clearly TPOT could be used in a variety of domains and for numerous tasks, likely as many as we could imagine machine learning itself could be utilized for. I imagine that, given its development history, you do use it in your day job. Could you give us an example of how it has made your life easier?
One of the best examples I have is when we applied TPOT to a bladder cancer study that my boss, Prof. Jason H. Moore, had collaborated on years ago. We wanted to see whether we could replicate the findings in the study, so we used a custom version of TPOT to find the best model for us. After just a few hours of tinkering with the models, TPOT replicated the findings in the original study and found the same pipeline that took his collaborators weeks to figure out. As an added bonus, TPOT discovered a more complex pipeline that actually improved upon what his collaborators found by discovering a new interaction between two of the variables. If only they had TPOT back then, eh?
Where do you see automated machine learning going? Is the end game fully automated systems, with limited human interference, ushering in the decline of data scientists and machine learning experts? Or is it more likely that automation simply becomes another tool available to assist the machine learning scientist?
In the near future, I see automated machine learning (AutoML) taking over the machine learning model-building process: once a data set is in a (relatively) clean format, the AutoML system will be able to design and optimize a machine learning pipeline faster than 99% of the humans out there. Perhaps AutoML systems will be able to expand out to cover a larger portion of the data cleaning process, but many tasks --- such as being able to pose a problem as a machine learning problem in the first place --- will remain solely a human endeavor in the near future. However, technologists are infamously bad at predicting the future of technology, so perhaps I should decline to comment on where AutoML can and will head in the long term.
One long-term trend in AutoML that I can confidently comment on, however, is that AutoML systems will become mainstream in the machine learning world, and it's highly likely that the AutoML systems of the future will be interactive. Instead of the user and AutoML system working independently, the user and AutoML system will work together: as the user tries out different pipelines by hand, the AutoML system will learn in real-time from the user's experience and adapt its optimization process. Instead of providing one recommendation at the end of the optimization process, the AutoML system will continually recommend the best pipelines it's discovered so far and allow the user to provide feedback on those pipelines --- feedback that is then incorporated into the AutoML optimization process. And so on. In essence, AutoML systems will become akin to "Data Science Assistants" that can combine the tremendous computing power of high-performance computing systems with the problem-solving ability of human designers.
As a follow-up to the previous question, do you see data scientists and others using machine learning becoming unemployed anytime soon? Or, if too drastic an idea, will the current hype surrounding data science be tempered by automation in the near future, and if so, to what degree?
I don't see the purpose of AutoML as replacing data scientists, just the same as intelligent code autocompletion tools aren't intended to replace computer programmers. Rather, to me the purpose of AutoML is to free data scientists from the burden of repetitive and time-consuming tasks (e.g., machine learning pipeline design and hyperparameter optimization) so they can better spend their time on tasks that are much more difficult to automate. For example, parsing a heterogeneous HTML file into a clean data set or translating a "human problem" into a "machine learning problem" are relatively simple tasks for experienced data scientists, yet are currently out of reach for AutoML systems. My motto: "Automate the boring stuff so we can focus on the interesting stuff."
Any final words on TPOT or automated machine learning?
AutoML is a very new field, and we're only just tapping into the potential of AutoML. Before we move too far along in this field, I believe that it's important to take a step back and ask: what do we (the users) want from AutoML systems? What would you expect from an AutoML system, dear reader? |
Today’s health care leaders are tasked with significant challenges in how they deliver patient care and manage the workforce. These challenges have highlighted the need for leadership-development training to enable physicians to transform how health care organizations function. The competencies that are required for a physician to be an effective leader have evolved substantially over the past decade in accordance with a growing view of health care as a complex adaptive system. However, competency-based training in the form of classroom lectures, role modelling, hypothetical practice scenarios, and self-help activities is not sufficient to prepare physicians to lead the future of the complex health care industry.
In the current environment of complexity and uncertainty, a different approach is needed. Action learning is a leadership-development process in which small groups work on real-world organizational business problems. As part of this process, individuals and teams reflect on their own work in a supportive environment in which a balance of action and learning is key. While most leadership-development programs focus on leadership characteristics only, action-learning scholars have come to discover the value of designing programs that recognize the context in which the leaders perform. Questioning, reflecting, and listening promote exploration and creativity among team members, thereby generating innovative solutions. Learning happens as an iterative cycle of action-learning-action-learning, and coaching is necessary to guide the group members in reflecting on how they are approaching the problem.
Action learning has been shown to improve broad executive and managerial leadership skills as well as the ability to develop integrative, win-win solutions to challenging situations. As such, it can be an excellent solution to help leaders navigate through the adaptive challenges currently facing health care organizations.
In September 2015, Mayo Clinic embarked on an action-learning program called “Fresh Eyes” involving a group of 30 participants comprising physicians, scientists, administrators, and nursing leaders in the leadership succession talent pool. The program objectives were to develop strategic thinkers who would be able to effectively lead in the VUCA (volatile, uncertain, complex, and ambiguous) world of the health care industry.
The program was structured as a 6-month process that started with a 2-day face-to-face kickoff, three subsequent video conference meetings, and a 1-day report-out at the end of the program. Participants were assigned to one of six multidisciplinary teams that were intentionally designed to maximize diversity in terms of roles, functional areas, geographical sites and regions, gender, and ethnicity. Each team was assigned a project that required its members to address a key business challenge related to such topics as physician referrals, test utilization, patient access, integrative medicine, and so on. The team projects were selected by institutional leaders in clinical practice, research, and education based on organizational strategic priorities.
The projects and teams were chosen so that most of the program participants had very limited knowledge of and experience with the subject matter. Team members were thus expected to bring a “fresh eye” perspective that could shine a new light on topics that were both important and challenging for the organization. Each team was assigned two executive sponsors and two project sponsors whose role was to guide the team to information and resources without providing too much direction. Each team was also assigned a team coach whose role was to engage the teams in deliberate reflection for facilitated learning and to meet with the team members on a one-on-one basis to help them to achieve their individual learning goals. The final deliverables were a business plan and a 20-minute presentation to all Fresh Eyes participants and sponsors.
Once the projects had been assigned, the team members met as a group once a week or every other week; gathered data and information on the topic; asked each other critical, reflective questions; and moved quickly to propose plans within the given time frame while learning how to work together as a highly functioning team. Thus, the program created a sense of urgency to move forward by letting go of the need to have all information before making decisions.
As the teams had no assigned leader, each team had to identify one or more leaders whose role was to clarify the roles of the various team members and to move them toward the achievement of a common goal. The team leaders had to gauge intuitively how much to step in or step back in group interactions. In the process, the leaders learned to listen to other team members and incorporate their perspectives by acknowledging that there are multiple ways to view a single problem, depending on each member’s role, type of practice, or level of knowledge in the subject matter. This was an important part of leadership development because Mayo Clinic is unique in its dyad/triad leadership model in which physicians partner with their administrative and nursing leaders, and the physician participants had to learn how to best leverage the wide range of organizational and business knowledge that their administrative counterparts brought to the project.
One of the main challenges was setting aside time to work on team projects. There was a clear concern among participants at the start of the program about doing project work in the midst of their busy schedules. To find a time that worked for all members, some teams met early in the morning before work or late in the evening despite having to accommodate individuals in three time zones.
All business proposals were presented to the sponsors during the report-out portion of the program. Teams were also asked to present their proposals in executive team meetings, committee meetings, and department meetings. Six months after completion, a follow-up email was sent to all project sponsors for updates on the status of the implementation. We found that some short-term recommendations had been implemented, sub-projects had been identified and were underway, and many participants had expressed interest in taking part in the implementation phase.
Health care organizations are a complex compilation of many different micro-environments. Physician leaders tend to view the world through the lens of their specialty, which has been formed through many years of training and practice, and usually have limited venues for exchanging ideas outside of their area of expertise. Those in leadership positions should understand that what seems to be true may evolve when viewed from a different perspective that they might not have considered.
We found that the Fresh Eyes action-learning program “forced” participants to network and to collaborate with team members in ways that would not otherwise have occurred to them. During the course of the program, participants had the opportunity to talk about the guiding values and principles of the organization. Some physicians confessed that they initially felt uncomfortable having to share their individual development goals and engage in open reflection with people they did not know, but they also noted that they eventually became more comfortable as the team members built a level of trust with one another.
The business proposals that resulted from the project were a testament to how much they could accomplish with sense of urgency, teamwork, and commitment. At the end of the program, the participants were amazed at how they could create proposals for seemingly daunting topics within such a limited time frame, with limited resources, and with team members whom they hardly knew.
We found that the Fresh Eyes program successfully achieved a balance of learning and action by stimulating the team members to engage in enterprise-level thinking, to learn how to collaborate as an effective team, and to develop specific proposals designed to improve organizational performance.
How did you address the clear concern among participants about doing project work in the midst of their busy schedules?
What role did your culture and senior leaders play in this program's success?
Action learning is used for multiple purposes: organizational development, for example, or in communities where people can come together and solve problems.
As was mentioned in the article, time is an issue not just for action learning but for any learning and development initiative in healthcare. What we noticed was that if the projects are meaningful (impactful to the organization, addressing real issues for the future of healthcare), and if there is enough support from high-level leadership and high visibility, people will make time. Some were considering turning their project into a publication. Culture and senior leadership support are critical. The senior leaders played the role of sponsors for each project and provided advice and resources, which was a motivator for the leaders.
Thank you for your questions. Action learning is used not only for professional development but also for organizational development, or to solve issues and problems for communities. Time was a concern for the participants, so we feel it is important for the projects to be meaningful not only for their own development and learning but also for the organization as a whole and the future of healthcare. Culture and the support of senior leaders are critical. The senior leaders played the role of sponsors; they provided information and resources and got to know these participants well, which contributed to the participants' exposure to higher-level leadership.
It is very true. The competency or self-help/reliance curriculum, unfortunately, has not been sufficiently developed, and scholars often found themselves puzzled as to whether they would survive the system after graduation.
It seems that "effective communication" has been replaced with so much bureaucracy, ranking and endless administrative procedures and protocols.
Often there is a lack of "effective communication" among various groups such as physicians, surgeons and nurses.
Without "effective communication" there won't be "harmonised collaboration" that is of extreme essence within the health care system.
When the "effective communication" dissolves and disappears in the maze of protocols and ranks collaborations lose the harmony and often breaks down and that is the major problem in the hospitals and clinical institutes, I think.
As the article highlights, being open and flexible, as well as being able to share information "effectively" but not superficially, helps to solve the problem.
But a change in the way the medical system is managed is necessary, and to implement it, one must start with education first. The whole culture in which scholars are being prepared must be transformed, as the article briefly touches on.
This frequently-visited site sits at the entrance of a limestone canyon that is well-known for three particular features. The first is the bright, curving orange striations in the rocks. Next are the giant oyster fossils found in the rock layers, followed by the tinaja itself.
The Ernst Tinaja is a 13-foot, naturally-formed rock pool, also referred to as a "kettle". Tinajas are pools or pockets formed in the bedrock, either at the base of waterfalls or by the recurring erosion of ephemeral desert streams, or arroyos, whose sediment can, over time, carve deep shapes into the rock. The Ernst Tinaja often holds concentrated, green water that can be toxic if consumed. Occasionally, animals fall into these holes and are unable to get out due to the sheerness of the walls.
Past the Ernst Tinaja, there are many opportunities for bouldering up in the canyon. The sandy conditions on the rock surfaces can be slick, so be careful with your footing as you explore.
Visitors should be very aware of any chance of rain in this area. This canyon is prone to flash floods and can be extremely dangerous if there is any sort of precipitation.
Short Hike: On the park's east side, head north from the paved road on Old Ore Road for 4.5 miles. This road is suggested for high-clearance vehicles only. Look for signs for the short, 0.2-mile access road that leads to the Ernst Tinaja Primitive Roadside Campsite. Continue past the campsite to the parking area at the Ernst Tinaja trailhead. It is about a half mile to the tinaja, making the round trip roughly one mile.
Family Friendly: A short hike through an exciting canyon to a naturally-carved, rock pool. Be careful on the slick rock surfaces especially when near the tinaja.
My favorite hike in Big Bend. The whole walk is up a wash. Unique rock formations.
Cool to see a part of the park so different from Chisos Basin. However Ernst Tinaja was a little underwhelming.
Great rock formations. Early or late light. |
Alcohol and cannabis are America’s two favorite sources of inebriation. But despite their somewhat parallel popularity, the two substances have followed radically different paths in how they are viewed, particularly by the eyes of the law.
Socially, we know they are often enjoyed in tandem. When it comes to their chemical impact, it makes a big difference which one hits your lips first. And, as is often the case with alcohol, the relationship between them ranges from jovial to adversarial, with a lot of complications in between.
There is no simple answer to how alcohol and cannabis combine in the body and mind, because the effects change based on which enters the system first. A smoke before or shortly after a drink makes one absorb less alcohol than the drink by itself due to the effect of weed on intestinal motility — which is to say, cannabis affects how alcohol moves through the body. On the flip side, a pre-cannabis drink can amplify the high, an effect shown in both subjective reports and chemical studies, like this one from the American Association for Clinical Chemistry. Alcohol increases the absorption of THC, and this is detectable in blood samples and mood.
Of course, both cannabis and alcohol can have widely different effects based on an individual’s tolerance and biochemistry. One person’s smooth sailing is another’s rocky seas, and this is doubly so when combining substances. For many, the combination can tip into nausea very easily — we’ve all held a “crossfaded” companion’s hair back or fetched them a glass of water while they regurgitated the results of being too gung-ho about smoking and drinking. As always, taking things slow and steady is advisable.
There’s a reason we so often refer to the federally criminalized status of cannabis as “prohibition”: It invokes alcohol prohibition, an experiment that has more or less universally been deemed a failure. From 1920 to 1933, alcohol could not be produced, imported or sold in the United States. This pushed alcohol consumption into the illicit market and turned bars into speakeasies (which, post-prohibition, became just another type of bar).
But unfortunately, as alcohol prohibition ended, cannabis prohibition gained steam. It eventually became federal law via the Marihuana Tax Act in 1937, and, of course, remains illegal in the U.S. to this day. But at the state level, legalization is spreading in the West and Northeast, while making inroads in the Midwest.
One of the key levers cannabis advocates have used to pry away at criminalization is the example of alcohol prohibition. Every point about the problematic impacts of cannabis use can be met with a counter about the much more severe impacts of alcohol. According to the Centers for Disease Control and Prevention, alcohol killed roughly 88,000 Americans and caused 2.5 million years of potential life to be lost each year between 2006-2010. In economic costs, excessive drinking cost the U.S. an estimated $249 billion in 2010 alone.
Vox’s Ezra Klein has argued that the primary public health effect of legalizing cannabis will be how it alters alcohol consumption levels. Yet alcohol is widely available for purchase and can be consumed publicly in bars and restaurants without legal repercussions.
The alcohol industry has historically used its legal status to try to keep competition out of the market by lobbying for continued criminalization, including as recently as 2018. Other alcohol groups have figured it’s time to get in on the action, or at least involve themselves in the still-emerging regulatory processes around cannabis.
Though a smoke and a drink go literally hand in hand for some people, alcohol and cannabis have been both allies and adversaries in the public eye. With legalization making more and more progress, alcohol will need to make room for America’s other favorite substance. |
The empath's guide: What is the Truth? And how do you go about deciding it?
If you've been following my vibrational reviews, you've probably experienced confusion, fear, anguish seeing that most gurus, most people, in fact, have a low vibration.
Vibration is not your energy, or not really. Vibration is a number that shows to what degree you are in truth.
What is truth? Truth is Existence. Truth is how it is. That A is A... no contradiction, not a good idea, not because someone said so, but because it is. Always.
The higher your vibration, the more you are in harmony with life, your thinking, your emotions, your reactions, your reality is in sync with how it is, really.
When your vibration is low, you are seeing only a small fragment of the truth, therefore you are powerless with reality. No matter what you do in unreality, it will never make a difference in reality.
Unreality is where The Man of La Mancha lived: he fought windmills, thinking they were giants. He rescued a princess who was not a princess at all and didn't need rescuing.
It can be funny, and it can be tragic, when you are living in unreality. Funny for others, tragic for you.
How the heck did you end up spending your precious life dealing with unreality, you ask?
The answer is simple: you were brought up by people who also dealt in unreality. They taught you what they "knew" and they knew only unreality. They were miserable too.
When did it begin? I think it began at the beginning of time.
But the significant damage started to be engineered by the people of religion.
Even before religion, human beings wanted to make sense of what was going on: the facts of life that were hard to see from the limited and time-bound perspective of the human mind.
Understanding and controlling your life is the most seductive thing to do: it buys you an illusion that you are in control.
You are never really in control: control is an illusion. Power in interacting with life is possible. Control is impossible. Yet, the history of humanity, on the level of the individual and on the level of the masses, is the history of trying to gain control over the uncontrollable.
When every single person wants control, a sense of safety, certainty, security, the next logical step is the rise of people who see an opportunity to control you... by force or by ideology, by false morality.
The feudal lords, the governments, the tax people, the religion people that give you your moral code, commandments, rules and regulations; the legal establishment, the so-called healers or doctors... It all exists so that you stop thinking, so that you stop being intelligent and responsible for your actions, your results, your life, your existence.
Thinking was replaced by remembering and memorizing: your faculties of thinking were never fully developed, regardless of your profession, regardless of your intellectual potential. Your faculty (your brain) may have learned one type of thinking, but not the other. When you hear something new, you have no ability to think it through; you can only compare it with something else, or look at who is saying it. But you are too gullible, you have no tools to know, you leave it to others to know... and that something else you compare it with: was it true? You don't know.
How did this happen, why did it happen, and why is it still getting worse today?
One aspect of the issue is mechanical, and is resting with you. You have given over your power to people to tell you what is right and what is wrong, to give you a moral code in which you are a sacrificial animal with no rights. You have given your power over to "experts" that explain the truth to you in language you can understand. 'It makes sense,' you say, but no critical faculties of yours got activated before you said that, only the comparison with other things that you don't know first hand but learned from others who didn't know either.
You've probably heard of the five blind men that wanted to learn what the elephant was. They were allowed to go in with the elephant and touch it. An elephant is too big for the limited perspective of the touch-based perception of a blind man, so they decided that they would each touch a different part and then they would create an organic whole from their individual experiences.
One was holding the tail. One was touching the elephant's ear. One was hugging the elephant's trunk. Another a leg, and yet another the belly of the elephant. Five people, five different perspectives.
The man by the tail said: the elephant is like a rope... the other by the ear said: the elephant is like a big fan. The third by the trunk said: the elephant is like a water spout. The fourth by the leg said: the elephant is like a column or a pillar. And the fifth under the belly said: the elephant is like the ceiling of a room or a cave.
The scientists, the doctors, the healers, the mystics, the religions are all like a blind man: treating reality as if it were the whole of reality, not just a (maybe) knowable part of it.
My vibration is 993 right now. On a logarithmic scale that means I am interacting with truth, conscious of it, in harmony with it, and got it correctly about 50%. This means that what I say, teach, do, practice is 50% accurate.
Compare that with the people who have 200-300-400-500-600 vibration (600 vibration is equivalent to only about 20% accuracy!): they are the blind men... teaching, practicing life, treating you as if reality were a rope, a water spout, a pillar... etc... Do you get this? And these are the people whose teaching you accept without thinking, out of habit, out of not knowing how to think.
I got lucky because at age three and a half I made a decision from an insufficient amount of data, and I got hurt, badly.
I got lucky because at age 9 I took a math book to vacation without the solution booklet: so I had to do the thinking instead of learning what men before me had discovered.
I got lucky because I am dyslexic and can't read well, so I had to figure a lot of things out myself.
I got lucky because I got massive brain damage in 1998 and in the 10 years after that I had to learn, from the inside out, how the brain works, and what it is that I can do about it.
I got lucky because I was willing to experiment and fail, and fail, and fail, without loss of enthusiasm.
I got lucky because I discovered early that I had a tendency to pretend that I knew the answer, when I knew only one answer, a partial answer, or no answer at all... That is my Soul Correction: the same as Moses', who was arrogant enough to put his own beliefs on top of the message from the Universe. Commandments are a human invention designed to control.
Moses was also projecting his own human characteristics of anger, jealousy, and other petty human ego stuff on "all-of-it."
Life, intelligence, got arrested in that arrogant act... and I wasn't going to be like that! No, not me, not ever!
Now, what should you do, now that you have this information but probably no critical faculty to decide if it is true or not?
Schools are not in the business of teaching you to think! How and where are you going to learn to use your brain for thinking instead of having thoughts?
Good question. I don't have an answer. I have been racking MY brain, no answer yet.
But just knowing that you need it may change your life for the better.
Now, seriously, what can you do to be more aligned with truth? What can you do to take the time to actually think things through, so you know what is true and what is not, and you don't have to be at the mercy of "thought leaders" who do just that: lead your thought!
Remember Steve Jobs, who was a great CEO but a lousy overall thinker: the diet he was so proud of killed him. Just read my articles about fructose intolerance: he lived on an all-fruit diet unsuited for a human being! Monkeys, maybe, but not humans. In my opinion, even monkeys would run into problems if they had to live on human-grown fruit, because the fertilization doesn't provide trace minerals, trace minerals like copper, that are needed in higher quantities when you load up on sweet stuff, regardless of its source.
One of the things you can do is use the Harmonize audio... It puts you in the path of a huge beam of energy, connecting to all of it, Existence. It is impossible to avoid having your molecules and your coherence re-arranged to more harmonious... favoring sound thinking, favoring peace of mind, seeing the big picture, creating order from chaos.
It's the best first step, the Harmonize audio.
A good second step would be to do the Self-Healing Course: after all, you can't be smart if you are not well. If you are not well, you are dull, tired, reactive, surviving, opinionated, and impatient.
This is the new WordPress development. The editor allows you to work with articles, publications, and pages, making changes to them. Why was it given that name? It's simple: it was named in honor of Johannes Gutenberg (1400-1468), the first European printer. The master's main achievement was the development and implementation of a printing press unique for its time. The innovation was movable type, a real breakthrough in typography. The creation of the press made books far more widely available.
In creating the new editor, the WordPress developers sought to make it easier to understand and work with, so that it would allow making beautiful pages with minimal time spent.
This is the main difference between Gutenberg and the old WordPress version, which, in fact, assumed editing through HTML. The new editor has made editing easier. To place the text the way you want it, align it, place pictures in certain columns and align them too, you no longer need to go into HTML or CSS.
That is, you can even develop complex layouts, “playing” with blocks, trying their different place without any special problems and difficulties, until you find the optimal design, in your opinion.
Blocks replaced the previously used shortcodes and widgets. The advantage of blocks is that they can be located in absolutely any part of the page or site, and the system automatically performs the necessary adjustment.
There will be no problems or difficulties with installation: the editor is installed directly from the administrative panel, and activated there as well. This takes only a few clicks.
The design is minimalist, yet pleasant and attractive, although compared to the classic version it looks very unusual. Much more space is allocated directly to the content, and the whole structure of the layout is displayed much better.
All icons for editing blocks are located in the upper left of the screen. Depending on the specific block, the set of functions for editing is changed.
It is noteworthy that the settings are “hidden”: to display them, you need to click the appropriate settings button. At first this seems inconvenient and unusual, but after working with the new administrative panel for a while, you not only quickly get used to it, but also come to appreciate its convenience.
Inserts – allows you to embed content from various third-party resources and services, including Instagram, Youtube, Reddit, and others. Here you just need to paste the copied address, and the system will automatically configure the embedding, independently determining the necessary parameters.
To move the block up or down, use the arrows – they are provided on the left of each block. Simply hover the mouse over the block and they will become visible.
Gutenberg WordPress Editor provides both visual and text editing. Switching panels from Visual Mode to Text Mode and back is located at the top right of the panel.
The visual mode allows you to use the functions applied to a specific block. The text mode lets you see and edit the block's code. Each individual block formed in Gutenberg is a set of HTML comments, using a syntax created by the developers of the new editor. There is nothing complicated about it: even an inexperienced user will figure out which element belongs to which block.
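As a rough illustration of that comment-based syntax, each block's markup is wrapped in paired HTML comments such as `<!-- wp:paragraph --> ... <!-- /wp:paragraph -->`. The sketch below is not the official WordPress parser, just a toy demonstration of how the serialized post splits back into named blocks (the block contents here are invented examples):

```python
import re

# Toy illustration of Gutenberg's block serialization: each block's HTML
# sits between <!-- wp:NAME --> and <!-- /wp:NAME --> comment delimiters.
# This is NOT the official WordPress parser, just a sketch of the idea.

post = """
<!-- wp:heading -->
<h2>Hello Gutenberg</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Blocks are just HTML comments plus markup.</p>
<!-- /wp:paragraph -->
"""

# Capture the block name and the inner HTML; the backreference \1 requires
# the opening and closing names to match, so blocks can't be mispaired.
BLOCK = re.compile(r"<!-- wp:(\S+) -->\s*(.*?)\s*<!-- /wp:\1 -->", re.S)

blocks = BLOCK.findall(post)
for name, html in blocks:
    print(name, "->", html)
```

Real Gutenberg blocks can also carry a JSON attribute object in the opening comment, which this sketch ignores.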
WordPress 5.0 attracts with its new features for making changes to texts and posts. I will try to describe them as fully, and at the same time as concisely, as possible.
The system identifies all article headers by their tags and displays them in a list on the side, producing a kind of document outline. This greatly simplifies visual perception of the text.
Clicking on any of the headlines will move you to the appropriate part of the article. This speeds up the text editing process. This is especially convenient when making changes to large texts. Or when working on smartphones. You will be spared from tedious screen scrolling.
With anchors, you can make a link to a specific part of the post. Anchors are placed on subtitles, and the text for the link is placed in the provided field.
The Gutenberg WordPress editor is also good in that for each separate block you can add additional CSS styles. It is very convenient, since it is possible to create a “skeleton” of the HTML page, and then stylize it in a separate CSS file.
What is the benefit? For each block, a default style is already predefined. This works particularly well when creating buttons, where changing the font color is not provided directly in the editor itself.
A convenient feature that allows you to create picture galleries. In the settings of such a gallery, you can specify the number of columns, from one to eight, and choose whether or not images are cropped.
Gutenberg has a number of undeniable advantages. Based on all of the above, and on my own experience with the new editor, I want to highlight a few of them.
Easy to create the structure of the entire page.
All information presented on the page is located in blocks independent from each other.
Any user, regardless of the level of knowledge and experience, is able to create even the most complex page layouts.
It is possible to create large-scale blocks embedded in different parts of the site.
Embedding content from external resources is very quick and easy. The system itself configures such blocks.
In general, the new editor essentially surpasses the previous versions in all respects. Probably, in the near future, it will remain the only editor within the system. After all, it is great for forming pages of any level of complexity, and the developers are doing everything they can to increase its speed.
How long have you been writing science fiction and what sparked your interest in this genre?
I’ve been writing science fiction pretty much since I was a teenager. That and fantasy were my first loves as a reader and a writer. I read a lot of science fiction as a teen: lots of Bradbury, especially, but also Asimov and Arthur C. Clarke. There were movies that sparked my interest too, maybe especially Ridley Scott’s movies Alien and Blade Runner.
Do you do much research of engineering / science / math / space agencies such as NASA in order to more accurately portray the space, its climate, etc.?
I do some research for sure. Most of my stories in Odin’s Eye are set in our own solar system so I would look up maps of the Moon, for example, or research features of various planets and areas like the Oort cloud (that’s where my short story “Johnny B. Goode” is set). For most of the technology, I just invent it and sort of skim over the details of how it works in the story (unless it’s pertinent to the plot itself). Since most of my stories take place in a far future, I can play around rather freely with what society might look like, and what technology is available. The one thing I did the most research on for Odin’s Eye was probably the Voyager probe. I looked at when it was sent out, where it will end up in the future, how it was made… all that stuff. That kind of research is actually a lot of fun to do for a project like this.
There is an interesting juxtaposition of history and the future, using ancient times as the backdrop for the story “The Gates of Balawat.” What gave you that idea? What inspired you to choose this particular piece of history?
I saw the Gates of Balawat at The British Museum in London, and they absolutely fascinated me. They’re beautiful and the imagery on them is very evocative. They made me feel close to that part of history, I guess as though it really might be just on the other side of those doors. After seeing them, I read up on Babylonian and Assyrian ancient history, and thinking about that, and my own reaction to seeing those artifacts is what inspired that story. It was interesting to me to imagine how people in the future might view our museums and preservation of historical artifacts: Would they feel the same connection I did? Or would they just be mystified as to why anyone would keep such things stored in a special building? That’s kind of what I wanted to explore in that story.
Where did you get the idea to draw from Norse mythology as the archetype for Odin’s Eye?
I grew up in Sweden, so Norse mythology is something I’ve had with me since I was a kid. It’s always felt like a rich vein to get material and inspiration and imagery from as a writer. That particular story, of Odin sacrificing his eye in Mimir’s well in order to gain wisdom and the ability to look into the future, that story has always fascinated me. It just seemed like the perfect backdrop for a collection of science fiction stories: to try to peer into the future. Then, I found that image for the cover, of the distant galaxy known as the Eye of God, or Odin’s Eye, and it all just came together for me. I knew I had to use that for the title.
You have written numerous poetry and short story collections. Do you prefer one more than the other? Why?
That is a bit like picking your favorite child I suppose. I like to think I’m always getting better as a writer, so I guess I’d pick my latest collection of poetry (Cuts & Collected Poems), and my flash fiction anthology Dark Flash as my favorites. But I have a lot of love for Odin’s Eye too! Dark Flash feels very close to my heart right now, because I just published it this past December, and that mix of horror and dark fantasy is kind of where a lot of my writing is taking me these days.
You are a music journalist. What was your entree into that line of work? What, if any, similarities do you find when writing about music and when writing poetry, for example?
Brian Basher, who has a terrific online radio show called Hard Rock Nights featuring new and old rock bands, asked me to write reviews for his site back in 2012. That’s how I got started writing about music online. I loved doing that, and later started my own blog Rock And Roll. When I first started writing about music, it was during a period when I suffered from some serious writer’s block and felt unable to write fiction. Writing about music was therapeutic in a way: I love finding new bands to listen to, and it’s fun to share my opinions about it. All writing is creative, and while writing about music is pretty different than writing fiction, it’s still something I enjoy doing.
I was a voracious reader as a child. The Lord of the Rings totally blew my mind when I first read it. It was the first book that made me realize that you could create entire new worlds as a writer, and it opened my eyes to the kind of depth you could conjure up by using maps and languages and an invented world with its own history. It was an intoxicating experience to read it the first time. I don’t think any other book I’ve ever read has affected me as profoundly. I remember crying so hard at the end of the trilogy when I realized the story was over. Ursula K. Le Guin’s original three books about Earthsea also inspired me a lot. Tombs of Atuan is a book that influenced me greatly, more than I probably realized at the time! Bradbury’s short stories were also very much a part of my original “writer DNA.” All the science fiction I wrote as a kid was basically riffs on Bradbury… bad riffs, but that’s how we all start out!
What books or other reading sources form the basis of your own work?
I think that at a very profound level I’m still very much influenced by Ursula K. Le Guin. I re-read her books periodically because her prose is so powerful and exquisite. I’ve also been very inspired by newer authors like Angela Slatter and Kai Ashante Wilson who both write speculative fiction. Currently, I read a lot of new short fiction in the horror, fantasy, and science fiction genres. There are a ton of great new authors there, and I learn from and am inspired by them all the time. Online zines like Shimmer, Apex, Uncanny Magazine, Gamut, The Dark, Liminal Stories, Beneath Ceaseless Skies, and many others, publish fantastic short fiction all the time, and I try my best to keep up with as much of it as I can.
Do you have a favorite book list? What books are on this list?
The Lord of the Rings is my all-time favorite book. Other favorites (meaning books I’ve read multiple times) are Foucault’s Pendulum by Umberto Eco, The Count of Monte Cristo by Dumas, Le Guin’s Earthsea books, and Tinker Tailor Soldier Spy by John Le Carré. Arthur C. Clarke’s Rendezvous with Rama is another one.
Is there a book or two that has been transformational or inspirational in your life?
For me as a poet, I think that reading T.S. Eliot’s The Waste Land was transformational. I remember reading it in high school and it just changed how I wrote and read and thought about poetry on a fundamental level. The idea that a poem could be that mysterious and that strange and yet be so powerful and moving was a revelation. I’d always liked reading and writing poetry, even before that, but once I read it I fell in love with it, and my writing changed radically after that.
What advice would you give a person who wants to be a writer and/or a poet?
Read a lot, write a lot. That’s really the basics. Read old writers and read the new writers, too. Read every genre that appeals to you. And, once you start writing and submitting your work, realize that you will be rejected. But that’s OK: keep plugging away, and keep honing your skills. Just be persistent, and be willing to learn and work hard. No matter how talented you are, working hard and being persistent is what will help you in the long run.
Synonyms and antonyms for double replacement reaction. 1. reaction (n.) (chemistry) a process in which one or more substances are changed into others. Related words: failure, question, action, request, effect, behavior, loss, cause.
The Nobel Prize is a set of six annual international awards bestowed in several categories by Swedish and Norwegian institutions in recognition of academic, cultural, or scientific advances. The will of the Swedish scientist Alfred Nobel established the prizes in 1895. The prizes in Chemistry, Literature, Peace, Physics, and Physiology or Medicine were first awarded in 1901.
This year, The Nobel Prize in chemistry has been awarded to three scientists who have harnessed the power of evolution to develop biological molecules with useful applications and solve some of the world's worst problems. The 2018 Nobel Laureates in Chemistry have taken control of evolution and used it for purposes that bring the greatest benefit to humankind. Evolution has meant that the world is full of a huge variety of different forms of life because it has allowed organisms to respond to the chemical problems that surround them in their environment. The three scientists who won used those same processes to solve the problems facing humans.
On Tuesday, researchers from the United States, Canada, and France were awarded the physics prize for advances in laser technologies. The medicine prize was awarded Monday to American and Japanese researchers. The Nobel Memorial Prize in Economic Sciences, established in memory of the man who endowed the five Nobel Prizes, will be revealed on Oct. 8. No literature prize will be awarded this year. The winner of the Nobel Peace Prize is to be announced Friday.
The Nobel Prize in chemistry winners: Frances Arnold, George Smith, and Gregory Winter.
The committee chair Claes Gustafsson said: "The power of evolution is revealed through the diversity of life." Their work, he noted, is an extension of selective breeding, which has been practiced by humans for millennia.
The three winners all harnessed the principles that power evolution – genetic change and selection – to create new chemical processes that help cure disease, create new materials and save lives. All three have applied the principles of Darwin in test tubes. Enzymes are proteins made in cells which catalyze chemical reactions, making them work much faster. They have evolved over millions of years, but in 1993, Arnold worked out that you could direct their evolution and make the process happen much faster. She started by taking the gene that codes for an enzyme, then randomly introducing mutations, creating new variants of the enzyme. Then she screened the resulting variants and selected the ones that were most effective at catalyzing the reaction she wanted. The selected variants then went through another round of mutation and selection, and the process was repeated. After three generations, she had an enzyme that was 256 times more efficient than the starting enzyme.
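The mutate-screen-select loop described above can be sketched as a toy simulation. This is purely illustrative: real directed evolution operates on genes and proteins in the lab, and every name and number below is invented for the sake of the example.

```python
import random

# Toy model of directed evolution: an "enzyme" is a list of numeric
# "residues"; activity is higher the closer each residue is to a hidden
# optimum. Each generation we mutate the best variant, screen the
# resulting library, and select the top performer.

TARGET = [7, 3, 9, 1, 5]  # the (in reality unknown) optimal enzyme

def activity(variant):
    # Higher (closer to zero) means a better catalyst in this toy model.
    return -sum(abs(a - b) for a, b in zip(variant, TARGET))

def mutate(variant, rate=0.4):
    # Random mutagenesis: perturb each position with some probability.
    return [v + random.choice([-1, 1]) if random.random() < rate else v
            for v in variant]

def directed_evolution(start, generations=3, library_size=50):
    best = start
    for _ in range(generations):
        library = [mutate(best) for _ in range(library_size)]  # diversify
        best = max(library + [best], key=activity)             # screen + select
    return best

random.seed(7)
start = [0, 0, 0, 0, 0]
evolved = directed_evolution(start)
# The selected variant can never score worse than the starting one,
# because the previous best is always kept in the screened pool.
```

The key design point mirrors Arnold's process: variation is random, but selection is directed, so improvement compounds across generations.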
Frances Arnold, based at the California Institute of Technology in the US, developed a way to direct the evolution of enzymes to make them much more effective at catalyzing chemical reactions. Enzymes produced through directed evolution are used to manufacture everything from biofuels to pharmaceuticals. Antibodies evolved using a method called phage display can combat autoimmune diseases and in some cases cure metastatic cancer. She has been awarded half of the prize money and is the fifth woman to win a chemistry Nobel.
The other half of the prize will be shared by George Smith of the University of Missouri and Gregory Winter of the MRC Laboratory of Molecular Biology, in Cambridge, UK. They were honored for "phage display of peptides and antibodies." This pair developed ways to create therapeutic antibodies, which are now used to treat autoimmune diseases, anthrax, and cancer.
George Smith developed a way to use viruses that infect bacteria, called phages, to evolve new proteins. Winter used this technique to direct the evolution of antibodies – molecules produced by the immune system to recognize and attack pathogens. The first pharmaceutical based on Winter's work was approved for use in 2002 and is employed to treat rheumatoid arthritis, psoriasis, and inflammatory bowel diseases.
Black and white striped tower. Height 25 metres.
This lighthouse was designed by Emili Pou when the Penjats lighthouse was considered insufficient to guide ships in the Freu Gran, the hazardous straits between Ibiza and Formentera. Although it was originally to have been of the 3rd order, the Lighthouse Commission finally decided that it should be of the 4th order, like the Penjats lighthouse, with a fixed white light alternating with 3 red flashes every three minutes in order to distinguish it from the light at La Mola, which also had a fixed light pattern. It was opened on the 15th March 1864. The lens was purchased from the firm of Henri Lepaute of Paris for 11,079.45 francs.

Soon after its construction the lighthouse started to suffer from its close proximity to the sea. Waves would often sweep through the lighthouse keepers’ cottages, soon causing serious damage, exacerbated by the use of soft local sandstone known as “marés”. In 1897 Pedro Garau proposed that the building be demolished and that separate housing be built on higher ground further away from the shore, using hard-wearing Santanyí stone, famed for its resistance to erosion. At the same time an underground gallery was built connecting the tower to the new dwellings, given that during big storms waves would often wash right over the small islet. This meant that the lighthouse keepers could reach the tower without any risk.

However, the new buildings didn’t prove to be secure: in 1913 a freak storm tore off the roof and blew down the partitions, leaving only the supporting walls standing. The lighthouse keepers were forced to seek refuge on the neighbouring island of S’Espalmador and the whole structure had to be rebuilt. This was the second lighthouse to be automated, after Els Penjats, using an acetylene gas powered lighting system lit by a sun valve in 1935.
Parties, classes, concerts, and town hall meetings all have something in common. They need space. In the not-too-distant past, churches played probably the largest role in providing space for people with common interests to come together and work, plan, discuss, or play. Studies and surveys from Pew Research Center suggest church participation is way down from where it was 50 to 70 years ago [1, 2]. So all these shared beliefs and systemic understandings become more compartmentalized, even diametrically opposed, and in the age of the internet, the ability to escape to whatever cubby we feel comfortable in is maybe a bit too easy.
If you ask me, with almost no exception, the internet is why the country is divided in this contemporary age. Some blame the Obama administration while others blame the Trump administration, but look at individual Google search histories, and it’s not hard to figure out. “Why do Democrats hate Freedom?” or “Why do Republicans hate Science?” Yeah, no wonder people think the world is going to hell in a handbasket. The internet allows me to center my life on anything I want; it’s why I’m more prepared for the robot uprising than the average joe.
One last comment before I transition to the point of this article—the internet, without a doubt, carries with it the potential to create the most educated, well-rounded, and capable public the world has ever seen. Through necessitated optimism, I believe we will one day tap into that potential.
So, community spaces. They allow a diverse public to come together, but you’re gonna need space and something to really bring people in. Say…beer does a pretty good job at getting people’s attention. A lot of the stigma that once hovered around bars and breweries has shifted away from the checkered-stained-glass dive that is Moe’s in The Simpsons. In fact, some breweries have embraced the opportunity to be the place in town to go for good food, solid entertainment, and a family-friendly experience.
“I do feel that there’s a resurgence of an attempt to entertain people here [at a brewery] that’s not just strictly based on alcohol. It’s a gathering. It’s an experience rather than just a get-sloshed-and-go-home kind of thing,” says Sam Green, brewer at Octopi Brewing. People have embraced local brewery efforts in community outreach. The plumes of consequential cannonade ignited by beer surveyors, partakers, and purveyors have grown such that those uninterested in alcohol not only take note, but often participate in the goings-on and shindigs (rabble-rousing or no) to have a good old-fashioned time and meet neighbors old and new.
These events go beyond local music and trivia nights, though those staples still very much exist as part of the scene. “Paint night, the whole idea of it is ‘hey, go be arts and craftsy at this bar or brewery. Come a little early, get a drink, have dinner, stay and do this event, and then potentially stay afterwards,’” says Miranda Ladwig, event coordinator and taproom manager at Octopi Brewing.
From rock concert to knitting class, anything goes, but at the end of the day, the brewery is a brewery. Still, there’s this microcosmic effect. By taking a position of flexibility when planning out the week’s events, the space born from necessity when operating a brewery can be quite reflective of the community that supports it, whether creative, health focused, or simply relaxed.
One of the results of breweries taking themselves to task on the role they play in their communities has been organizations embracing the opportunity to collaborate, including some inspired startups, Om Brewers (ombrewers.com) being one that came up during my discussion with Sam and Miranda. It’s yoga meets beer.
Melissa was drinking alone at LynLake Brewery, wondering why she was the only one of her friends not getting engaged. She struck up a conversation with the brewery’s new marketing director, and Om Brewers was born. She sketched the logo on a napkin and built the website that night.
With all this collaboration, the really cool thing that happens from time to time is a beer is born. Breweries partner either with each other or with local organizations to do something truly unique, sometimes benefiting the community as a whole by working with nonprofits that tackle everyday problems we’re not always privy to, like homelessness, hunger, underserved veterans, and the continued maintenance and staffing of animal shelters. Being a Sun Prairie resident, one of my favorites of these collaborations is when Potosi Brewing Company created #SunPrairieStrong Pilsner. The profits went toward the Sun Prairie Disaster Relief Fund after last year’s explosion downtown.
The well-being of a community isn’t dependent on one facet of a city or town doing well—it’s measured by the successes of everyone and everything impacted by the actions and decisions of others, be they nonprofits, government, local businesses, or commercial enterprises. Our individual bubbles are part of this massive sphere-shaped Venn Diagram of influence. Each one of those influences is either connected to some sort of hub or plays the role of the hub itself. Some breweries have taken on the role of hub for their communities, and I think that’s not only a cause for celebration, but a great reason to get involved.
As breweries take on a new role in engaging an ever-more-diverse range of people, we as patrons can do a lot to create a welcoming place where physical health can be maintained, mental health can be nurtured, and ideas can percolate and proliferate without leaving anyone feeling defeated.
Milliken or Milliken Mills is a neighbourhood in the city of Toronto. It is located in the northeast section of Scarborough and the southeast section of Markham. The neighbourhood is centred around Old Kennedy Road (see also Kennedy Road in York Region) and Steeles Avenue, bounded by 14th Avenue East to the north, Markham Road to the east, Finch Avenue East to the south, and Warden Avenue to the west. The area is heavily residential.
The community is named after Norman Milliken, a United Empire Loyalist from New Brunswick who settled in Markham, Ontario in 1807. Milliken operated a lumber business in the town when it was first called Milliken Corners.
The community's post office was established in 1859 on the Markham side. The Ebenezer United Church (1878) is one of the few structures remaining in the area. The church once stood on the south side of Steeles Avenue, with another church on Brimley Road. Plots at the church hold many of the early families of Milliken: Thomson, Rennie, Harding, Hood, Hagerman and L'Amoreaux. The church is now located on Brimley Rd, north of Steeles Ave in Markham.
To the Milliken community, municipal boundaries were just lines on a map and the community's history can be found in the Archives of both Scarborough and Markham. School Section # 2 was established here in 1847, and a log school was built during the same year.
The neighbourhood only gave up its final farming activities in the early 1980s and is modernizing by the year. There are green spaces such as Milliken Park, Goldhawk Park and many others, and the Milliken Trail offers a walking tour of the neighbourhood.
The area was once agricultural land, much of which disappeared with residential development beginning in the 1970s and ending near the late 1980s. The Town of Markham has recently launched an initiative to develop "Main Street Milliken" around Old Kennedy Rd. The area has suffered from years of neglect, and revitalization plans have been welcomed by community members. New developments include a condominium development at Kennedy and Denison, a Dairy Queen, the Major Milliken Pub and a new housing project on Old Kennedy Rd.
A growing hub of community activity is Milliken Park Community Recreation Centre, which is located on the Northwest corner of Milliken Park and at the Southeast corner of the intersection of Steeles Avenue and McCowan Road. The centre is home to a variety of camps, after-school programs and cultural activities designed to cater to local demand. To provide optimal service for the area, City of Toronto staff are in regular consultation with an advisory board, which includes representatives from local community associations, such as the Goldhawk Community Association, Brimley Forest Community Association, Richmond Park Association, and the Milliken Park Community Association. Annually, the City of Toronto staff, the advisory board, and the community associations organize special events for the communities at Milliken Park Community Recreation Centre. Some examples of these major events include the annual Community Christmas Party, Spring Fling, and Fall Fair.
The community is made up mainly of immigrants, with a strong Chinese Canadian presence, and is home to one of several Chinatowns in Toronto. For this reason, a new retail commercial condominium project, The Landmark, is underway in the area. The Chinese-themed mall will become one of the largest Asian malls in the Greater Toronto Area. In recent years, the South Asian population has also boomed within the region, particularly the Sri Lankan Tamil and Indian Gujarati communities. For example, a South Asian movie theatre is now located at the Woodside Square Mall at McCowan and Finch, and plans are also in place for a major South Asian mall to be developed at Finch and Middlefield.
Bit Of Byrd: Labor Day | What did you Celebrate?
Today is actually the day after Labor Day. But how did you choose to spend the holiday? I spent the day with my family and friends.
The thought went through my mind more than once... what is Labor Day all about? I know I am off work, and I am glad for it... but why?
There are other holidays that also get lost in the mix from time to time, but people know what they mean. Veterans Day, Memorial Day, and Presidents' Day are the first that come to mind.
So what is up with Labor Day?
"It is appropriate, therefore, that the nation pay tribute on Labor Day to the creator of so much of the nation's strength, freedom, and leadership — the American worker."
"On September 5, 1882, 10,000 workers took unpaid time off to march from City Hall to Union Square in New York City, holding the first Labor Day parade in U.S. history. The idea of a “workingmen’s holiday,” celebrated on the first Monday in September, caught on in other industrial centers across the country."
Even though it caught on, Labor Day didn't become a national federal holiday until 1894.
Today we often think of Labor Day as the end of summer, the start of school or just a day to barbecue and enjoy life.
Oddly enough... that is part of what the day was originally designed for. It was designed for those who already work so much and put in those long hours and do those tough jobs... the jobs that keep this country running. It is a day for rest for those people.
I enjoyed my day off, but my trash man still had to go to work... so I hope they get to enjoy another day as their day of rest.
How did you spend your day? I would love to hear about it!
Whether you are a beginner or an experienced origami sensei, this easy butterfly paper craft is for you! It is designed to be made in a matter of a few minutes and bring joy to whomever you gift it. Take a look what you can do as gift ideas with this butterfly craft.
Easy, doable, graceful paper twist crafts to the rescue! Without a lot of preparation, you can make one of the best gifts of your life. No kidding. Flower-twisting craft skills will be needed! But don’t you fret – I will give you step-by-step instructions with pictures that are self-explanatory and very helpful.
This craft is for beginners as well as for anybody who would like to make this accordion paper butterfly a part of a bigger project: a card, a festive collage, picture frame decor or even a home decoration item. Being a DIY doesn’t mean cheap looking. You have the power to use colored paper and embellishments like sequins, thread and fabric to take your butterfly paper craft to a new level.
If you are short on time to get a gift – go after this gorgeous paper butterfly craft to make someone’s day!
HOW TO Make This Easy Butterfly Paper Craft?
Any square piece of paper will do! Even newspaper! Depending on how big you want this butterfly, choose the square size. Just make sure it is square, as it is much easier to balance the proportions of the butterfly’s wings.
Here are Your Upper Butterfly Wings!
STEP 12 – Bring Upper & Low Butterfly Wings Together!
Use a color twine to keep both wing parts together.
After you use a thread to bring the wings together, leave both ends of your thread long enough to serve as butterfly’s antennae.
This would be such a great spring project for my first grade classroom.
What a cute idea. My kids would love to make these.
This looks like it will be easy enough for my granddaughters to do.
These are super cute and easy to make.
My grandmother will enjoy this. She will make one for every child that goes to her Sunday school class.
I love butterflies and these make great decorations.
These are really cute. Love butterflies!
these are so cute! Great for spring.
I am going to have my students create these in May… thanks for the easy instructions.
My girls would have fun making these. This would be a perfect rainy day activity.
This is one of my favorite crafts for my own self. I always do some paper folding when I have a few minutes of downtime. It really helps me to take my mind off and relax.
My niece loves butterflies!! This would be a great craft to make with her!
I love these little butterflies. I can put them on cards, gifts and so many things. I can’t wait to share with my Granddaughter, she’s going to love this craft. Thank you so much for sharing.
My mother wants to make these out of some left over wrapping paper! These could be a great way to downsize and reuse some kids artwork or writing!
This is such an easy and cute craft. These would be adorable to use on gifts!
I showed this to my mom today. She loves it as well. My grandma makes stuff like this a lot with my daughter.
What a sweet craft! Butterflies are one of my favorite things to share. This looks so easy and my girls would enjoy it!
Wow! These do look very nice to learn how to make for decorating gifts for my upcoming three girlfriends birthdays. They are really pretty.
I am going to make a ton of colorful butterflies and put them on a mobile to hang and float!
I absolutely love butterflies they are one of my favorite things I screenshot this whole tutorial! I want to make lots of these for homemade cards and gifts and for my son’s lunch! Thank you so much for the lesson!
These butterflies would be cute to make a paper bouquet from as well! What a cute, easy craft for all!
I love butterflies and this is crazy simple! I love it! It would be a beautiful embellishment on cards.
i love this! will use this for room decorations!
These paper butterflies look so easy to make! I love that you can re-use papers from home and decorate as well!
Those are lovely. Very good instructions.
I love this Butterfly Paper Craft Tutorial. My daughter would have so much fun making these as she loves doing crafts.
That is such a cool craft. I will definitely be trying it for my next holiday.
What a cute craft! I love how you could easily personalize it with different types and sizes of paper-think how pretty a floral print would be!
What a great idea this is. I can think of so many things to decorate these with and I know everyone would love these.
This looks so easy to make. The butterflies are cute and not as complicated as Origami.
This is a really cute craft idea!
This craft is really cool for the brain as well! As we age, we partially lose the connection between our nerve endings at the tips of our fingers. To keep this connection alive and thriving, do things like sewing and crafting that involves the tips of the fingers. The more you do it, the better off the brain and nerve functions stay coordinated.
This craft looks so useful! That’s a big problem with some crafts. What to do with them when they are complete.
Thanks for sharing this wonderful idea. My niece would enjoy doing this.
These are so cute! I can do them with my niece!
These are too cute! I would love to make these with my nieces, it would be a lot of fun.
These are cute! I would like to make these.
These butterflies are really cute and would be fun to make.
There is a large market demand for new drugs. Chronic or common ailments without cures, the emergence of new diseases with unknown causes, and the widespread existence of antibiotic-resistant pathogens have driven this field of research to examine all potential sources of natural products. To date, microbes have made a significant contribution to the health and well-being of people globally. The discovery of useful metabolites produced by microbes has resulted in a significant proportion of the pharmaceutical products in today’s market. Therefore, the investigation and identification of microbes that produce bioactive compounds is always of great interest to researchers.
Actinobacteria are one of the most important and efficient groups of natural metabolite producers. Among the numerous genera, Streptomyces have been recognized as prolific producers of useful natural compounds, as they provide more than half of the naturally-occurring antibiotics isolated to date and continue to emerge as the primary source of new bioactive compounds. Certainly, these potentials have attracted ample research interest, and a wide range of biological activities have subsequently been screened by researchers using a variety of in vitro and in vivo experimental models. Literature evidence has shown that a significant number of interesting compounds produced by Actinobacteria exhibit either strong anticancer or neuroprotective activity. Further in-depth studies have established that modulation of the apoptotic pathway is involved in these observed bioactivities. These findings indirectly prove the biopharmaceutical potential possessed by Actinobacteria and at the same time substantiate the importance of diverse pharmaceutical evaluations of Actinobacteria. In fact, many novel compounds discovered from Actinobacteria with strong potential in clinical applications have been developed into new drugs by pharmaceutical companies. Together with the advancement of science and technology, it is predicted that there will be an acceleration in the discovery of new bioactive-compound-producing Actinobacteria from various sources, including soil and marine sources. In light of these current needs, and great interest in the scope of this research, this Research Topic seeks contributions on the investigation of actinobacteria producing biologically active compounds that exhibit antimicrobial, antioxidant, neuroprotective, anticancer and related activities.
Hatred in America is an inch deep and a mile wide. This is evident in that the Confederate Battle Flag on the SC Capitol grounds is coming down, to the glee of some and to the ire of others. Division is eroding our country, and one need look no further than the fact that the ire and glee are based largely on when people came to America, and how they got here. To this day, the world has not eradicated slavery, but in America we have; at least legal slavery. No one today has ever been or owned, or even met, a legally held American slave. Still, a segment of our society believes there is profit to be made by stoking anger about a condition that ended 150 years ago.
Oprah Winfrey famously told BBC’s Will Gompertz, “There are still generations of people, older people, who were born and bred and marinated in it – in that prejudice and racism – and they just have to die.” If you were born the day of the 1964 Civil Rights Act, you would be almost 51 today (7-2-1964). Jim Crow segregation became illegal, and was eliminated nation-wide as the 1960’s grew to a close. If you look at the people present at the signing ceremony, there is scarcely a face under 40 in the room. Those 40+ year olds would be 91, and it is highly unlikely that they are vandalizing neighborhoods still. KKK gets painted in graffiti, billboards featuring black people get painted white-faced with graffiti, and opportunities are denied to people, even today, because of skin color. Something is keeping racism against African Americans alive today.
There is an undercurrent in the African American culture that expresses a desire to retaliate against white people, today. The “knockout game”, the Baltimore and Ferguson Riots, and innumerable other events illustrate the pent-up hatred. The Hip-Hop music industry makes tremendous money preying on young people by reinforcing anger that comes from life with insufficient resources. There are entire departments in universities dedicated to indoctrinating young people to scour the human landscape of America looking for a reason to cry “racism”. Indeed, there is an industry that is famous for stoking the fires of division along racial lines, led by Al Sharpton and others. There is an obvious profit motive in inciting rage between races that comes out of the black community.
Despite criticism of the description, America is well described as a melting pot. Like a pot, we have a group of people mixing together, and as we get to America we tend to stay in one place. Much of our nation has grown stagnant with inactivity and content to get by with whatever comes down the path of least resistance. Perhaps our focus on hate and division is intensified because we have lost some of our vision. Biblically, we’re called to work for our sustenance, and love one another. As we depart from a nation on this mission, governed by a common ethical code, we have turned inward and against each other.
America has a history of struggling with being a melting pot. With the exception of the Native Americans, everyone else immigrated somehow, at some time. Certainly no one alive today can claim they got here first. No one race can claim superiority over another, because in 2015’s version of America, people from every race can be found at every socioeconomic level. People of all races are doing all types of work. 51-years after the Civil Rights Act was signed, human resources professionals work diligently to make sure racial and ethnic diversity is represented in the workplace.
For a struggling melting pot, America still leads the world by most metrics. For all of the progress America has made, we’re still America divided. We have a news industry that is split right down the middle of our political landscape. Bill O’Reilly dominates cable news by taking on every opponent that has a political axe to grind against conservative white people. Breitbart, the Daily Caller, the Liberty News and other conservative websites daily lock horns with the Huffington Post, the Daily Beast, and the Daily Kos. Melissa Harris Perry, Don Lemon, Sean Hannity and Glenn Beck have all made tremendous names for themselves “punditizing” the news to divided audiences. We have a profitable media landscape that is set up to cater to the political appetites for each segment of our divided America.
Putting aside our differences as a nation, America is still the greatest team that humanity has ever fielded to establish and carry out the business of being a nation. None even come close. Out of what was raw wilderness, we’ve built the greatest cities, improved the world’s education and blazed most of the trails in science and medicine. Our diversity has given us our first African American president, with another African American, already a world-renowned brain surgeon, contending for the job as of this writing. Despite struggling with a divided past, our past represents the best humanity has ever done. Our diversity has everything to do with our success as a nation. The best, brightest and most driven of us have achieved amazing heights. The reason we have such a tremendous immigration problem is precisely because of the greatness of our nation. Our problem is too many people want to come, not too many want to leave. We are blessed as a people, because when adversity has come, our character has been revealed. As a nation of 319 million people on a globe of 6.8 billion people, our blessings are as numerous as the stars, and perhaps we are coming into a phase where our blessings are threatened.
As a nation, the further away we get from the bedrock principles of our faith, the further divided we will become. Many who come here will not be Christian, but will still share values that harmonize with Christianity. If we are to reduce the hate and grow the great, the thing that unites us will have to allow the freedom to live and strive for our blessings. The thing that overcomes our division is love, and love comes from God.