How to sign "quarter" in British Sign Language (BSL): one of four equal parts, as in "a quarter of a pound". Similar / Same: one-fourth, quarter, fourth part, twenty-five percent, fourth. Categories: common fraction, simple fraction.
With developments in the type and frequency of terrorist attacks against the aviation industry, airport security measures have had to evolve in order to protect against threats posed both by passengers with malicious intent and by the insider threat. However, aircraft design also has a significant role to play in foiling inflight attacks and, in several incidents, the aircraft assembly has proven itself robust enough to withstand an inflight explosion, allowing the pilot to land safely. Shalini Levens discusses what aircraft manufacturers are doing to make aircraft more resistant both to attempted bombings and to the emerging threat of cyberattack. When a Somali military court convicted ten suspects of organising this February's bomb attack on Daallo Airlines, the reality of the potential for further significant attacks on the industry became apparent. The explosion on the Daallo Airlines Airbus A321 flight occurred around 15 minutes after take-off from Mogadishu, when the plane was at approximately 11,000ft (3,350m). Only the bomb carrier was killed, and the pilot was able to make an emergency landing back at Mogadishu airport, aborting the flight to Djibouti. Somalia's militant Islamist group al-Shabab took responsibility for the attack, later admitting that it had failed to bring down the aircraft because the airframe had withstood the blast. This incident is one of the few examples of aircraft resilience to terrorist attacks after bombs have made it on board and detonated. Another example of an aircraft withstanding an explosion mid-journey was Trans World Airlines (TWA) flight 840 from Rome to Athens in 1986, which suffered the detonation of an improvised explosive device (IED), concealed under a passenger's seat, 20 minutes before landing. The blast created a hole in the aircraft's starboard side, similar in nature to that of the Daallo Airlines incident, killing four passengers who were ejected through the hole in the fuselage.
Similarly, in 1994, Philippine Airlines flight 434, operating from Manila to Tokyo via Cebu, fell victim to an IED placed in a lifejacket under a seat. The explosion killed one passenger and injured several others. The aircraft itself remained intact and the pilot was able to land the damaged plane safely in Okinawa. The ongoing investigation (at the time of writing) into the loss of EgyptAir flight MS804, en route from Paris to Cairo, has not yet ruled out the possibility that a bomb on board was responsible. Indeed, quite the opposite. With much speculation that the aircraft was the subject of a terrorist bombing, and coming so soon after the Metrojet bombing in Egypt in October 2015 and the Daallo Airlines incident this year, the questions being asked focus not only on what the authorities can do to prevent a bomb being loaded on board, but on what manufacturers can do to ensure that, should a device make it through the security system, the damage caused will not be catastrophic. Aircraft manufacturers are enhancing security in aircraft design through a number of methods, including: aircraft hardening against inflight explosion; assessing and enhancing Least Risk Bomb Locations (LRBL); systems to prevent hacking and cyberattacks; flight deck door construction; secondary barriers; the development of systems which limit control of the aircraft to authorised persons; and systems which might indicate the presence of a stowaway on board. Aircraft hardening against inflight explosion has become a topic of particular interest since the loss of Metrojet flight 9268, twenty-three minutes after its departure from Sharm el-Sheikh bound for St. Petersburg.
Even though early speculation concluded that Islamic State (IS) was responsible for the attack – and the group even claimed, in its own publication Dabiq, that a rather crude device utilising a soft-drinks can had contained the deadly charge – some sources maintained that the aircraft had been in poor mechanical condition, and another reported that its engines had suffered start failures. The airline denied claims that the aircraft was not in perfect working condition. Nonetheless, all the indicators suggest that the inflight explosion was caused by an improvised explosive device smuggled on board, probably by an insider working at the airport.
A version of this story originally appeared on NoCamels – Israeli Environment News. By Elana Widmann. Despite having just 4.59 percent of the world's population, the U.S. consumes 25 percent of all the energy produced on earth. That's a lot of energy and it's very expensive. But replacing electric and utility systems in buildings to make them more energy- and cost-efficient can be a hassle. Israeli technology company BEEMTech markets an easy-to-install lighting control and energy efficiency management system that enables a reduction in customers' energy expenditure and carbon footprint. BEEMTech was founded in 2009 when CEO Nati Frieberg and other private investors purchased the intellectual property of an existing smaller technology company. "We recruited the whole team, but decided to take BEEMTech down a new road, make a new operation," Frieberg tells NoCamels. Today, BEEMTech is a "clean technology" company that develops, manufactures and markets its energy management solutions. To cut costs and energy use, BEEMTech customers install the company's LightBEEM system, a lighting control and energy management system that allows for facility-wide control of electricity. The company claims its LightBEEM system cuts lighting energy costs by up to 75 percent (NoCamels could not independently verify this claim). The LightBEEM smart sensors are installed every 15-20 square meters within a building and can tell whether there is a human presence in the room, what the temperature is, whether there is sufficient natural light, and what the infrared levels are. Soon, the sensors will also be able to determine humidity and CO2 levels, says Frieberg. A large building with the LightBEEM system could have about 600 sensors in total, he explains. "The idea is to try to do as little as possible. Installing our system causes no noise, and no opening of the floors or ceilings. We can install our system in a working building," Frieberg explains.
On average, the cost of installing the BEEMTech LightBEEM system is about $10-15 per square meter. "We do not change the existing HVAC (heating, ventilation, and air conditioning) system. The idea is to use the existing equipment and to use it in a smarter way," Frieberg tells NoCamels.

Controlling the system over existing infrastructure

The LightBEEM system is controlled both by BEEMTech's central command server and by the people in the commercial building. The BEEMTech system is able to control each thermostat within a building without adding new wires. "This is one of our innovations and strengths, as we use our own proprietary technology to communicate over the existing power lines," says Frieberg. The LightBEEM system comes with a remote-control smartphone application, so that people in the building can adjust the light and temperature of their specific area of the building. "Each individual device of the HVAC can be controlled. We centrally control the whole building, and correlate with user demand. For example, if a space is unoccupied we will switch off or dim the lights, and change temperature levels," Frieberg notes. Besides real-time information about the electricity in a building, BEEMTech also provides its customers with the more in-depth information they need to understand their energy and cost savings. Frieberg explains: "Our vision at BEEMTech is energy efficiency, light saving, and HVAC savings. Then we thought along with this, we could also provide useful reports on these savings for our customers." The company provides its customers with reports daily, weekly, or monthly. "We summarize the information that enters our database, and then send it to our customers via email," he says. "It is our mission to turn every dumb building into a smart building," Frieberg pronounces.

"Pays for itself within a year and a half"

Frieberg claims that customers could recover the money spent on their investment to install the BEEMTech system within 18 months.
"Per year, we can save 25-40 percent on the entire building's electricity cost and consumption," he says. The company commercialized its product only three months ago; the system is currently installed in two locations in Israel and one in the U.S. Frieberg says another five to seven installations will occur in the next fiscal quarter. The company says it is now approaching large facility management companies in Israel in order to get the LightBEEM system into more buildings. BEEMTech is also looking to grow its market in South Africa and Europe. CEO Nati Frieberg has over 20 years of experience in various global markets. He served as COO of CellGuide Ltd., a navigation and design solutions company; Director of Sales and Marketing at Radcom Ltd., a service assurance company; and Product Marketing Manager at ECI Telecom Ltd., a network management system company. Frieberg holds an MBA in Finance and Financial Engineering from Hebrew University and an Electronic Engineering degree from the Air Force Academy. BEEMTech has raised $13 million from private investors thus far, and is continuing to look for capital. The company has offices in New York City and in Israel. Photo by SplaTT
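The 18-month payback claim can be sanity-checked with a rough Python calculation. Only the $10-15 per square meter install cost and the 25-40 percent savings rate come from the article; the building size and the baseline electricity bill below are hypothetical assumptions, and the result is only as good as those inputs.

```python
# Back-of-envelope payback estimate for a lighting/HVAC efficiency retrofit.
# Only the install cost per m2 ($10-15) and the savings rate (25-40%) come
# from the article; the building size and baseline bill are assumed values.

area_m2 = 10_000                      # hypothetical office building size
install_cost = 12.5 * area_m2         # midpoint of the quoted $10-15 per m2
monthly_bill = 2.00 * area_m2         # assumed baseline electricity spend, $/month
savings_rate = 0.325                  # midpoint of the claimed 25-40% saving

monthly_savings = monthly_bill * savings_rate
payback_months = install_cost / monthly_savings
print(round(payback_months, 1))       # 19.2 months under these assumptions
```

Under these assumed inputs the simple payback lands near the 18-month figure quoted above; a building with a lower baseline electricity bill would take proportionally longer to recoup the installation cost.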
Eating right doesn't have to be complicated. Before you eat, think about what goes on your plate or in your bowl. Choose foods that provide the nutrients you need without too many calories. Build your healthy plate with foods like vegetables, fruits, whole grains, low-fat dairy and lean protein foods. Try these eating right tips for older adults, with food, nutrition and health tips from the Academy of Nutrition and Dietetics.

Make half your plate fruits and vegetables. Eat a variety of vegetables, especially dark green, red and orange vegetables plus beans and peas. Add fruits to meals and snacks.

Make at least half your grains whole. Choose 100% whole-grain breads, cereals, crackers, pasta and brown rice. Also, look for fiber-rich cereals to help stay regular.

Switch to fat-free or low-fat milk, yogurt and cheese. Older adults need more calcium and vitamin D to help keep bones healthy. Include three servings of fat-free or low-fat milk, yogurt or cheese each day.

Vary your protein choices. Eat a variety of foods from the protein food group each week, such as seafood, nuts, and beans and peas, as well as lean meat, poultry and eggs.

Cut back on sodium and empty calories from solid fats and added sugars. Look out for salt (sodium) in foods you buy. Add spices or herbs to season food without adding salt. Drink water instead of sugary drinks. Select fruit for dessert. Eat sugary desserts less often.

Enjoy your food but eat less. Most older adults need fewer calories than in younger years. Avoid oversized portions. Try using a smaller plate, bowl and glass.

Be physically active your way. Pick activities that you like and start by doing what you can. Every bit adds up and health benefits increase as you spend more time being active.

Authored by Academy of Nutrition and Dietetics staff registered dietitian nutritionists. Sources: U.S. Department of Health and Human Services, ADA Complete Food & Nutrition Guide.
POST-OP INSTRUCTIONS: Tonsillectomy and adenoidectomy. Most of these instructions relate to patients who have had a tonsillectomy. Patients undergoing an adenoidectomy generally have fewer problems with post-operative pain and swallowing due to the location of the operative site high in the throat. 1. LIMIT ACTIVITY. Your child should not return to normal vigorous play or to school for approximately one week following the operative procedure. Keep in mind that your child will likely not feel completely recovered for at least ten days to two weeks after a tonsillectomy. Your child can pursue quiet activity at home and need not stay in bed. 2. YOUR CHILD'S DIET. Your child may eat ANYTHING as long as it is not dry and crusty like pizza, hard pretzels or nachos. Just be certain that he or she is eating and drinking. Stock up on favorite foods. Dairy products such as ice cream, milk shakes, and yogurt are fine. By the end of a ten-day to two-week period, your child should be back to a completely normal diet. 3. PUSH FLUIDS. I cannot stress strongly enough that fluid intake must be maintained. This will prevent your child from becoming dehydrated. Dryness will slow the healing of the throat and make swallowing even more uncomfortable. Inadequate fluid intake is also the most common cause of a fever occurring once your child goes home. 4. PAIN CONTROL. Plan to give your child Tylenol™ four times a day for the first four to seven days to minimize throat pain. I will recommend a dosage based on your child's weight. I will also prescribe a narcotic pain killer. Use this medication as seldom as possible, as narcotics tend to create abdominal cramps and constipation. Be aware that throat pain often refers to the ears. For this reason, it is common for patients to develop pain in one or both ears which intensifies on swallowing. Such pain during involuntary swallowing at night may awaken your child from sleep.
Be assured that in most cases patients experiencing such sensations do not have ear infections. If, however, the pain is intense and unrelenting, please notify us. 5. NO MOTRIN™, ADVIL™, OR IBUPROFEN CONTAINING PRODUCTS. Ibuprofen containing products such as Motrin™ and Advil™, like aspirin, will interfere with your child's blood clotting capacity. This could lead to uncontrollable bleeding during the post-operative period. Read the labels of any medications that your child is taking to be certain that they do not contain any of these medications. 6. IN THE EVENT OF BLEEDING. If you notice any evidence of bleeding, PLEASE NOTIFY ME BY CALLING MY OFFICE. In most cases I will ask you to bring your child immediately to the hospital emergency room. I or one of my associates will meet you and your child for an emergency examination. In children, slight oozing will occur from the tonsil beds as the scab falls off at about the sixth through the ninth days. This type of bleeding is very slight and is usually self-limited. If it occurs, give your child a small glass of ice water to drink and wash out the throat. If the bleeding persists, contact me. 7. POST-OPERATIVE EVALUATION. I will routinely schedule several post-operative visits during the first two to four weeks following the operation. At these visits, I will assess your child's general progress as well as the degree of healing within the throat. An additional visit may be necessary at about 8 to 10 weeks following the surgery to be certain that the throat is completely healed. 8. GOOD COMMUNICATIONS. Good communications are important and parents are the best judges of how their child is progressing after surgery. If you feel that your child's post-surgical course is worrisome, please contact me as soon as possible. Write down your questions or concerns in order to be certain to ask me about them.
Loch Leven Castle and Mary Queen of Scots. Mary Queen of Scots is one of the most fascinating individuals in Scottish history. Youthful and intelligent, she was the only legitimate daughter of King James V of Scotland and therefore inherited the Scottish throne from him, at the tender age of just six days old, in 1542. Scotland has had a bit of a turbulent history and this period was no exception. Religious divide, strained relations with England, perpetual conflict at every opportunity, and the poor girl is in charge of things before even her first birthday! Stressful times indeed. Needless to say, off Mary went to France to escape the madness in Scotland, only returning in 1561 to take up the role of Queen. It was to be a predictably difficult task, largely due to several damaging marriages that inflamed the sensitive Catholic-Protestant divide. Mary was to serve only six years in power. This is my personal favourite part of the story. In 1567 Mary was imprisoned in Loch Leven Castle, classically situated on an island in the loch. The castle is one of the better-maintained ones remaining in Scotland and its placement on the island is hugely impressive. Although not regarded as a prisoner in the classic sense (chains, bread and water, torture and the like), she was 'comfortably trapped' here until May 1568. She was also forced during this time to abdicate in favour of her infant son, whom she had not seen since his birth the previous year in Edinburgh Castle. Like any great story, she was to escape from her island hideaway and rallied an army to take on those who would dare to imprison the Queen. Defeat followed and she fled to England seeking help from her cousin Elizabeth, Queen of England. It was to prove an unwise gamble, as Elizabeth had her imprisoned again south of the border and eventually ordered her execution. She was beheaded in February 1587.
There are many significant historical treasures still standing in Scotland today for those wishing to follow in the footsteps of the controversial Mary, but Loch Leven Castle is amongst the best. Still not discovered by the tourist masses, and requiring a delightful boat trip to access it, it is a perfect spot to quietly reflect on such a fascinating part of Scotland's history. You can just imagine the thoughts of Mary as she grieved for her failed marriages and her miscarried twin children (the miscarriage occurred during her year-long stay in the remote castle). Of course she also had the small matter of plotting her revenge on those who had turned on her.
24.07.2014, Research news

The immune system has evolved to recognize and respond to threats to health, and to provide life-long memory that prevents recurrent disease. A detailed understanding of the mechanism underlying immunologic memory, however, has remained elusive. Since 2001, various lines of research have converged to support the hypothesis that the persistence of immune memory arises from a reservoir of immune cells with stem-cell-like potential. Until now, there was no conclusive evidence, largely because experiments could only be carried out on populations of cells. This first strict test of the stem cell hypothesis of immune memory was based on mapping the fates of individual T cells and their descendants over several generations. That experimental capability was developed through a long-term collaboration, focused on clinical cell processing and purification, between researchers based in Munich and Seattle. Since 2009, the groups of Prof. Dirk Busch at the Technische Universität München (TUM) and Prof. Stanley Riddell at the Fred Hutchinson Cancer Research Center have combined their technological and clinical expertise under the auspices of the TUM Institute for Advanced Study. The University of Heidelberg, the University of Düsseldorf, the Helmholtz Center Munich, the German Cancer Research Center (DKFZ), and the National Center for Infection Research (DZIF) also contributed to the present study.

Homing in on the "stemness" of T cells

After generating an immune response in laboratory animals, TUM researchers Patricia Graef and Veit Buchholz separated complex "killer" T cell populations enlisted to fight the immediate or recurring infection. Within these cell populations, they then identified subgroups and proceeded with a series of single-cell adoptive transfer experiments, in which the aftermath of immune responses could be analyzed in detail.
Here the ability to identify and characterize the descendants of individual T cells through several generations was crucial. The researchers first established that a high potential for expansion and differentiation in a defined subpopulation, called "central memory T cells," does not depend exclusively on any special source such as bone marrow, lymph nodes, or spleen. This supported but did not yet prove the idea that certain central memory T cells are, effectively, adult stem cells. Further experiments, using and comparing both memory T cells and so-called naive T cells – that is, mature immune cells that have not yet encountered their antigen – enabled the scientists to home in on stem-cell-like characteristics and eliminate other possible explanations. Step by step, the results strengthened the case that the persistence of immune memory depends on the "stemness" of the subpopulation of T cells termed central memory T cells: Individual central memory T cells proved to be "multipotent," meaning that they can generate diverse types of offspring to fight an infection and to remember the antagonist. Further, these individual T cells self-renew into secondary memory T cells that are, again, multipotent at the single-cell level. And finally, individual descendants of secondary memory T cells are capable of fully restoring the capacity for a normal immune response.

Insights with clinical potential

One implication is that future immune-based therapies for cancers and other diseases might get effective results from adoptive transfer of small numbers of individual T cells. "In principle, one individual T cell can be enough to transfer effective and long-lasting protective immunity for a defined pathogen or tumor antigen to a patient," says Prof. Dirk Busch, director of the Institute for Medical Microbiology, Immunology and Hygiene at TUM. "Isn't that astonishing?"
"These results are extremely exciting and come at a time when immunotherapy is moving into the mainstream as a treatment for cancer and other diseases," says Prof. Stanley Riddell of the Fred Hutchinson Cancer Research Center and the University of Washington. "The results provide strong experimental support for the concept that the efficacy and durability of T cell immunotherapy for infections and cancer may be improved by utilizing specific T cell subsets." This research was supported by the German Research Foundation (DFG) through SFB TR36 (TP-B10/13) and SFB 1054 (TP-B09); by the Initiative and Networking Fund of the Helmholtz Association within the Helmholtz Alliance on Immunotherapy of Cancer; by the Federal Ministry of Education and Research (BMBF) through the e:Bio program (T-Sys); and by the U.S. National Science Foundation under Grant No. NSF PHY11-25915. "Serial transfer of single cell-derived immunocompetence reveals stemness of CD8+ central memory T cells," Patricia Graef, Veit R. Buchholz, Christian Stemberger, Michael Flossdorf, Lynette Henkel, Matthias Schiemann, Ingo Drexler, Thomas Höfer, Stanley R. Riddell, and Dirk H. Busch. Immunity, Vol. 41, Issue 1, July 17, 2014. Prof. Dr. Dirk H. Busch Institute for Medical Microbiology, Immunology and Hygiene Technische Universität München Tel: +49 89 4140 4120
Oilfield workers who are exposed to airborne silica dust created during a drilling process known as "fracking" may be at risk of respiratory diseases such as silicosis or lung cancer. Workers who have been diagnosed with illnesses caused by exposure to dust from silica sand may be eligible to file a lawsuit against the companies that caused their exposure. Hydraulic fracturing, or "fracking", is a drilling technique that allows oil and natural gas to be extracted from shale and other rock formations. After a well has been drilled, large amounts of water, sand, and chemicals are pumped under the ground, causing cracks to form in the rock deep below the earth's surface. The sand that is pumped underground with the fracking fluids holds these fissures open, allowing oil or natural gas to flow freely to the well's surface. According to warnings by the Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH), silica dust created by the machines at fracking sites can cause lung tissue damage among workers who breathe in the dust. Workers who are exposed to airborne silica face an increased risk of developing lung cancer and silicosis, a lung disease that causes scarring in the lungs due to the presence of silica particles. Air samples collected by NIOSH at numerous fracking sites found that levels of airborne silica dust exceeded federal safety standards nearly 80% of the time. Despite the health hazards posed by these high levels of silica dust, regulators have found that many drilling companies fail to take adequate precautions to minimize worker exposure to silica dust, or to provide workers with respirators or other safety equipment that would protect them from inhaling airborne silica. It usually takes 20 years or more for individuals who were exposed to silica dust to develop the first symptoms of silicosis.
However, many oilfield workers who were exposed to silica dust at fracking sites have been diagnosed with silicosis, lung cancer, or other respiratory diseases in as little as five years after exposure due to the high volume of airborne silica found at these sites. Oilfield workers who have been diagnosed with silicosis or lung cancer have filed lawsuits against the drilling companies and oil corporations that operated the fracking sites where they worked. These lung cancer and silicosis lawsuits have alleged that some oil and drilling companies failed to use safe drilling practices and safety equipment that would have protected their workers from dangerous levels of exposure to silica dust. If you or a loved one worked at a fracking site—or if you lived in an area where fracking took place—and you have been diagnosed with a serious lung condition such as cancer or silicosis, you may be eligible to file a fracking lawsuit and receive compensation for your injuries. For a free legal consultation from an attorney, contact the lawyers at Heygood, Orr & Pearson by calling our toll-free hotline at 1-877-446-9001, or by filling out the free case evaluation form located on this page.
Divorce (or the dissolution of marriage) is the final termination of a marital union, canceling the legal duties and responsibilities of marriage and dissolving the bonds of matrimony between the parties. Divorce laws vary considerably around the world, but in most countries divorce requires the sanction of a court or other authority in a legal process. The legal process of divorce may also involve issues of alimony (spousal support), child custody, child support, distribution of property, and division of debt. Where monogamy is law, divorce allows each former partner to marry another; where polygyny is legal but polyandry is not, divorce allows the woman to marry another. Between 1971 and 2011, several countries legalized divorce, the last one being Malta in 2011. The majority-Catholic Philippines is the last officially secular country that does not have civil divorce for the whole population; Muslims, however, are granted...
A small animal vet treats pets that are considered companion animals, such as dogs and cats. This also includes rabbits, birds and other small animals that people keep as pets. Small animal veterinarians administer primary care to companion animals, treating many different illnesses and injuries. This type of work includes performing surgery; prescribing medication and euthanizing animals also fall under the job responsibilities of a small animal vet. Many of these veterinarians become very familiar with pets and their owners through routine visits. That can lend a more personal aspect to the job, as small animal vets are counted upon by pet owners. Small animal vets often take the time to educate pet owners on preventative care, which typically strengthens their level of trust. Small animal vets perform surgery when needed and also attend to wounds and broken bones. They typically work out of private veterinary practices and animal hospitals. That setting allows them to administer vaccinations and provide many different facets of clinical care. X-ray machines and ultrasound are just a couple of the tools that small animal vets use to provide care. Small animal vets can also find work at clinics, animal health companies or laboratories. There are even small animal veterinarian research positions that provide full-time work. Small animal vets are at the very core of the veterinary field, particularly since pets are very near and dear to their owners. Because of that, small animal vets have become an important part of a pet owner's life. This requires small animal vets to communicate effectively with pet owners while providing care. It also demands a broad knowledge of the veterinary field, as vets could wind up treating a host of different small animals. Meanwhile, successful small animal vets are able to develop a good rapport with pet owners, as this line of work also has a personal element to it.
Small Animal Vet Education Requirements. An education toward becoming a small animal vet starts with earning a Bachelor's Degree. A Bachelor's Degree in animal science is recommended, as there needs to be a solid foundation of knowledge in the sciences. Next comes the process of applying to and being accepted by an accredited veterinary college. This generally requires a certain amount of work hours in some kind of veterinary capacity. Getting accepted into a veterinary college becomes increasingly difficult without any prior work experience. Therefore, veterinary college applicants should account for some type of work experience related to the veterinary field. Once accepted into a veterinary college, students must complete a rigorous four-year program. Small animal veterinarians are the most common, which means veterinary colleges gear many of their classes toward this discipline. Students learn a combination of disease diagnosis, treatment and prevention, and some curriculums even teach business concepts, which benefits veterinarians looking to establish their own practice. The final year of veterinary college involves a clinical rotation in which students become immersed in a more hands-on type of education at a veterinary hospital or animal healthcare facility. This can be a very fast-paced environment which offers a real-world look into what it is like to be an actual small animal veterinarian. The completion of all the coursework in that four-year curriculum results in the awarding of a Doctor of Veterinary Medicine, also known as a D.V.M. or V.M.D. While that completes the educational process, many graduating students take on a yearlong internship before acquiring a full-time position as a small animal vet. Those internships traditionally take place at a small animal veterinary practice or within an animal hospital. The number of accredited veterinary colleges in the U.S. now totals 30.
In-state tuition fees are typically less than half of the tuition fees for out-of-state students; in-state tuition runs roughly $22,448 per year. Small Animal Vet Salary and Job Outlook. Small animal veterinarians represent the highest proportion of practicing veterinarians in the United States. As of 2016, small animal vets represented 76% of the total number of veterinarians within the United States, according to the American Veterinary Medical Association. The average starting salary for small animal vets stands at $66,469. However, the average salary of working small animal vets exceeds $88,000 per year. The sum is a bit lower for entry-level small animal vets, mainly because some of them take on paid internships that have low salaries. Small animal veterinarians with experience can expect to earn an average of $100,560 per year, which is the mean wage reported by the U.S. Bureau of Labor Statistics. Meanwhile, veterinarians in the 90th percentile earn an average salary of $161,070. There are numerous factors that contribute to that annual salary, including location, type of establishment and years of experience. Metropolitan areas typically have the highest pay rates for small animal vets. Hawaii has the highest average salary for veterinarians, checking in at $201,250. Salaries are not as high on the mainland, although next in line are the states of New Jersey, New York, Nevada and Massachusetts, which feature average salaries ranging from $120,140 to $128,190 annually. With the growing number of pet owners in the United States, there is an ongoing demand for small animal veterinarians. There are more than 160 million dogs and cats within the U.S., and that requires an extensive amount of primary care. As a result, the job outlook for small animal veterinarians remains strong. To become a practicing small animal veterinarian, a license is required. The licensing process is controlled by each state's Veterinary Board.
While requirements differ from state to state, the format is relatively consistent. States require a passing score on a comprehensive exam issued by the Veterinary Board. Most states use the North American Veterinary Licensing Examination, which comprises 360 questions and generally lasts eight hours. Some states also issue a state jurisprudence examination, which covers the regulations and laws of that respective state. State licenses must be renewed after a certain period of time, and renewal comes with a fee. No specialty certification is needed to practice as a small animal vet; a state license is sufficient. However, small animal vets looking to add a specialization must become board certified, which requires additional years of residency training. The American Veterinary Medical Association plays a prominent role in the veterinary field and is a source of valuable information and resources, bringing together a variety of veterinary associations from all over the globe. The ICVA offers standardized exams, which various states use when issuing licenses to small animal vets. Another industry association accredits animal hospitals across the United States, ensuring they meet industry standards.
When you delete a file from your computer and then empty the recycle bin, it seems as if there’s no way to recover that file. It is possible, however, to undelete almost any file, because the data remain on the computer's hard drive even after you have intentionally deleted them and they appear to be lost. On the Windows® 98 operating system, the File Allocation Table (FAT) records which clusters on the drive a file occupies; deleting the file merely marks those entries, it does not erase the data. Operating systems such as Windows® Vista use a Master File Table (MFT) instead of the FAT, but they track files and their data storage clusters on the disk drive in a similar way. It is because of this design that deleted files can still be recovered. There are several file recovery and computer disaster recovery software programs that can find data you thought was gone forever. Such programs search your hard drive for FAT or MFT entries. Once the deleted file is located, the software scans the rest of the table to see whether the clusters on the disk previously used by the deleted file are now being used by other files. If so, the file you’re looking for was probably overwritten and you’ll never see it again. There are programs that can undelete files on Apple® Macintosh operating systems in a similar way. It is even possible to recover files after a computer crash, because the system retains them after the recycle bin is emptied. The deleted files are specially marked in the MFT, indicating that the clusters they occupied are free. In Windows®, the contents of these clusters are left untouched, so the data stored there still exist, even if the file has been deleted or the system has crashed. Using software to undelete files is the most popular method of file retrieval. The best way to use such a program is to run it directly from a compact disc (CD) or flash drive without installing it on the hard disk, so the installation itself does not overwrite the data you are trying to recover.
There are also professional data recovery services that can undelete files on your computer. Before attempting recovery, you should make a copy of the disk sectors housing the deleted files; many software programs automate this step for you. If the data are still in these parts of the hard drive, even though they are not visible to the operating system, there is a good chance you can undelete files you didn’t think could be recovered.
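The cluster bookkeeping described above can be made concrete with a toy Python model. This is purely an illustrative sketch: `ToyDisk` and its methods are invented names for this example and bear no relation to real FAT/MFT on-disk structures. It captures the key idea that deleting a file only marks its clusters as free, so undeletion succeeds exactly when none of those clusters has since been reallocated to another file.

```python
class ToyDisk:
    """Toy FAT/MFT-style allocation table (illustration only)."""

    def __init__(self, n_clusters):
        self.clusters = [b""] * n_clusters        # raw data per cluster
        self.table = {}                           # filename -> cluster list
        self.deleted = {}                         # filename -> old cluster list
        self.free = set(range(n_clusters))        # clusters marked available

    def write(self, name, chunks):
        """Allocate one free cluster per chunk and record them in the table."""
        used = []
        for chunk in chunks:
            c = min(self.free)                    # lowest-numbered free cluster
            self.free.remove(c)
            self.clusters[c] = chunk
            used.append(c)
        self.table[name] = used

    def delete(self, name):
        """Mark the file's clusters free -- but do NOT erase their contents."""
        self.deleted[name] = self.table.pop(name)
        self.free.update(self.deleted[name])

    def undelete(self, name):
        """Recover the file if no cluster has been reallocated; else None."""
        clusters = self.deleted.get(name, [])
        if not clusters or any(c not in self.free for c in clusters):
            return None                           # overwritten -> unrecoverable
        for c in clusters:
            self.free.remove(c)
        self.table[name] = self.deleted.pop(name)
        return b"".join(self.clusters[c] for c in clusters)
```

Deleting and immediately undeleting a file returns its full contents, but if another file is written in between and claims one of the freed clusters, `undelete` returns `None` — the same reason recovery tools warn you to stop writing to a drive the moment you notice a file is missing.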
Drugs that interfere with bile acid recycling can prevent several aspects of NASH (nonalcoholic steatohepatitis) in mice fed a high-fat diet, scientists from Emory University School of Medicine and Children’s Healthcare of Atlanta have shown. The findings suggest that these drugs, known as ASBT inhibitors, could be a viable clinical strategy to address NASH, an increasingly common liver disease. The results were published in Science Translational Medicine on September 21, 2016. “By targeting a process that takes place in the intestine, we can improve liver function and reduce insulin resistance in a mouse model of NASH,” says senior author Saul Karpen, MD, PhD. “We can even get fat levels in the liver down to what we see in mice fed a regular diet. These are promising results that need additional confirmation in human clinical trials.” Karpen is Raymond F. Schinazi distinguished professor of pediatrics at Emory University School of Medicine and chief of the Division of Pediatric Gastroenterology, Hepatology and Nutrition at Children’s Healthcare of Atlanta. He and Paul Dawson, PhD, Emory professor of pediatrics, jointly run a lab that investigates the role of bile acids in liver disease. Many people in developed countries have non-alcoholic fatty liver disease, an accumulation of fat in the liver that is linked to diet and obesity. Fatty liver disease confers an elevated risk of type II diabetes and heart disease. NASH is a more severe inflammation of the liver that can progress to cirrhosis, and is a rising indication for liver transplant. Besides diet and exercise, there are no medical treatments for NASH, which affects an estimated 2 to 5 percent of Americans.
For Immediate Release August 27, 2018 SOS Press Office SACRAMENTO - This August marks 98 years since the 19th Amendment to the US Constitution was ratified and formally adopted, giving women the right to vote. To celebrate this anniversary, the California State Archives has launched a new digital compilation of records relating to the women’s suffrage movement in California. This is the first time that these records have been compiled into a publicly available digital collection. “The adoption of the 19th Amendment is the moment that the right to vote finally included women, but the struggle for suffrage took years,” Secretary of State Alex Padilla said. “This movement is an integral part of our democracy’s history and must not be forgotten. The State Archives has digitized records that tell the story of the suffrage movement in California, from 19th Century efforts to give women the right to vote in state elections to California’s role in passing the 19th Amendment.” “When California women won the right to vote, they forever changed the state with an expanded and more inclusive democracy. The suffrage victory also gave rise to the League of Women Voters, created to finish the fight and aid in the reconstruction of our nation," said Melissa Breach, Executive Director of the League of Women Voters of California. "Today, we reflect on how far we have come and acknowledge that the fight remains yet unfinished. The struggle continues to ensure our electorate reflects California's rich diversity and to make certain all women are free to fulfill their highest potential." The records featured in this collection highlight suffrage efforts in California. California’s women’s suffrage campaign inspired other states to join the movement, and, nearly a decade after women won the right to vote in California, women were granted the right to vote nationally with the official adoption of the 19th Amendment on August 26, 1920.
August 26 is now recognized as Women’s Equality Day. Secretary Padilla has made digitizing the treasures of the State Archives a priority. The California Digital Archives includes over a dozen digital exhibits on Google Arts and Culture, as well as several completely digitized collections of records on the Omeka platform. The Digital Archives were recognized as the 2018 IDEAS Award recipient at the National Association of Secretaries of State conference.
BMI Calculator : Body mass index (BMI) is a number based on the mass (weight) and height of an individual. BMI is defined as the body mass divided by the square of the body height, and is generally expressed in units of kg/m2, resulting from mass in kilograms and height in meters. BMI may also be read off a table or chart that displays BMI as a function of mass and height, using contour lines or colors for the different BMI categories; such charts may use other units of measurement (converted to metric units for the calculation). BMI is an easy rule of thumb used to broadly categorize a person as underweight, normal weight, overweight, or obese based on tissue mass (muscle, fat, and bone) and height. That categorization is the subject of some debate about exactly where on the BMI scale the dividing lines between categories should be placed. Commonly accepted BMI ranges are underweight: under 18.5 kg/m2; normal weight: 18.5 to 25; overweight: 25 to 30; obese: over 30. BMIs under 20.0 and over 25.0 have been associated with greater all-cause mortality, with risk growing with distance from the 20.0–25.0 range. The prevalence of overweight and obesity is highest in the Americas and lowest in Southeast Asia, and in high-income and upper-middle-income countries it is more than twice that of low-income and lower-middle-income countries. Background of the BMI Calculator The Belgian astronomer, mathematician, statistician, and sociologist Adolphe Quetelet developed the basis of the BMI between 1830 and 1850 as he created what he called “social physics”. The modern term “body mass index” (BMI) for the ratio of human body weight to squared height was coined in a paper published in the July 1972 edition of the Journal of Chronic Diseases by Ancel Keys and others.
In this paper, Keys argued that what he called the BMI was “…if not fully satisfactory, at the very least as good as any other relative weight index as an indicator of relative obesity”. Interest in an index that measures body fat came with the observed rise of obesity in prosperous Western societies. Keys explicitly judged BMI appropriate for population studies and inappropriate for individual evaluation. However, due to its simplicity, it has come to be widely used for preliminary diagnoses. Additional metrics, such as waist circumference, can be more helpful. The BMI is universally expressed in kg/m2, resulting from mass in kilograms and height in meters. If pounds and inches are used, a conversion factor of 703 (kg/m2)/(lb/in2) must be applied. When the term BMI is used informally, the units are usually omitted. BMI provides a simple numeric measure of a person’s fatness or thinness, allowing health professionals to discuss weight issues more objectively with their patients. BMI was designed to be a simple means of classifying average sedentary (physically inactive) populations with a typical body composition. For such individuals, the value recommendations as of 2014 are as follows: a BMI from 18.5 up to 25 kg/m2 may indicate optimal weight; a BMI lower than 18.5 indicates the individual is underweight; a value from 25 up to 30 may indicate the person is overweight; and a value from 30 upwards indicates the person is obese. Lean male athletes often have a high muscle-to-fat ratio and therefore a BMI that is misleadingly high relative to their body-fat percentage. BMI is proportional to the mass and inversely proportional to the square of the height. So, if all body dimensions double, and mass scales naturally with the cube of the height, then BMI doubles instead of remaining the same. This results in taller people having a reported BMI that is uncharacteristically high compared to their actual body-fat levels.
By comparison, the Ponderal index is based on the natural scaling of mass with the third power of the height. Then again, many taller individuals are not simply “scaled-up” short individuals but tend to have narrower frames in proportion to their height. Carl Lavie has written that “The B.M.I. tables are excellent for identifying obesity and body fat in large populations, but they are far less reliable for determining fatness in individuals.” BMI Calculator Categories A common use of the BMI is to assess how far an individual’s body weight departs from what is normal or desired for a person’s height. The weight excess or deficiency may, in part, be accounted for by body fat (adipose tissue), although other factors such as muscularity also affect BMI significantly (see discussion below). BMI is used to broadly define weight groups in adults 20 years old or older, and the same groups apply to both men and women. The WHO regards a BMI of less than 18.5 as underweight, which may indicate malnutrition, an eating disorder, or other health problems, while a BMI equal to or greater than 25 is considered overweight and above 30 is considered obese. These ranges of BMI values are valid only as statistical categories.
- Underweight: BMI is less than 18.5
- Normal weight: BMI is 18.5 to 24.9
- Overweight: BMI is 25 to 29.9
- Obese: BMI is 30 or more
What exactly is my BMI? There are several ways to find your BMI.
- Charts and tables, such as the one above, are one easy way to figure out your BMI. To use the table above, locate your height on the left side of the chart, then go across to the weight that is nearest to yours.
- There are also many online BMI calculators, such as the one on our website.
At the top of the chart you can see your BMI, and at the bottom of the chart you can see which group you fit into – healthy weight, overweight, or obese. For example, a woman who is 5 ft. 4 in. tall is regarded as overweight (BMI of 25 to 29) if she weighs between 145 and 169 pounds, and as obese (BMI of 30 or more) if she weighs 174 pounds or more. A man who is 5 ft. 10 in. tall is considered overweight (BMI of 25 to 29) if he weighs between 174 and 202 pounds, and obese (BMI of 30 or more) if he weighs 209 pounds or more. My BMI Calculator You can also calculate your own BMI. The actual formula uses metric measurements: weight in kilograms (kg) divided by height in meters, squared (m2). When using pounds and inches, the formula needs to be adjusted slightly: multiply your weight in pounds by 703, then divide by your height in inches, squared: BMI = (your weight in pounds x 703) ÷ (your height in inches x your height in inches) For instance, if you weigh 120 pounds and are 5 ft. 3 in. (63 in.) tall: BMI = (120 x 703) ÷ (63 x 63) = 84,360 ÷ 3,969 = 21.3 This is well within the healthy weight range. Are there any issues with using the BMI Calculator? Doctors and nurses frequently use BMI to help find out if a person might have a weight problem. BMI provides a good estimate of total body fat for most people, but it does not work well for everyone. For example, bodybuilders or other very muscular people can have a high BMI because of their muscle mass, even though they are not necessarily overweight. The BMI can also underestimate body fat in people who have lost muscle mass, such as some older men and women. For most adults, the BMI is a good way to get an idea of healthy weight ranges. But it’s not always the final word in determining if a person is overweight or obese.
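The arithmetic above translates directly into code. The following is a minimal sketch of the two forms of the formula and the standard adult cut-offs (the function names are our own, chosen for illustration):

```python
def bmi_metric(weight_kg, height_m):
    """BMI = weight in kilograms divided by height in meters, squared."""
    return weight_kg / height_m ** 2

def bmi_imperial(weight_lb, height_in):
    """Pounds-and-inches form: apply the 703 conversion factor."""
    return weight_lb * 703 / height_in ** 2

def bmi_category(bmi):
    """Map a BMI value to the commonly accepted adult weight category."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal weight"
    if bmi < 30:
        return "overweight"
    return "obese"

# The worked example from the text: 120 pounds at 63 inches.
# bmi_imperial(120, 63) comes out to about 21.3 -- "normal weight".
```

Running the worked example reproduces the article's figure of 21.3, which `bmi_category` places in the normal-weight range.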
There are other factors to consider when judging how much somebody should weigh. A person with a high BMI should be evaluated by a health care provider, who may use other measures such as skinfold thickness (a measure of body fat), waist size, evaluations of diet and family health problems, and other factors to find out whether a person’s weight might pose a health risk. BMI Calculator for Children and Teens BMI can be calculated the same way for children and teens as it is for adults, but the numbers don’t have the same meaning. This is because the normal amount of body fat changes with age in children and teens, and differs between boys and girls. So for children, the BMI levels that define normal weight or overweight are based on the child’s age and gender. To account for this, the US Centers for Disease Control and Prevention (CDC) has created age- and gender-specific growth charts. These charts are used to convert a BMI number into a percentile based on a child’s sex and age. The percentiles are then used to determine the different weight groups:
- Underweight: less than the 5th percentile
- Normal weight: 5th percentile to less than the 85th percentile
- Overweight: 85th percentile to less than the 95th percentile
- Obese: 95th percentile or higher
An easy way to determine your child’s BMI percentile is to use the CDC’s online BMI percentile calculator at this web site. Even in a young person, being overweight or obese can cause health problems. It may directly increase the risk for certain health problems later in life, including some kinds of cancer, and it raises the chances of being overweight or obese as an adult, along with the health issues that can come with that.
Wildlife Habitat Preservation A constant source of delight to tourists and locals alike, the fauna of the Nandamojo valley also plays a vital role in the forest ecosystem—and in its restoration. Since ROW’s inception 10 years ago, thousands of trees have been planted by hundreds of volunteers in riparian buffer zones, along denuded coastal strands and in vital recharge zones. Within collaborating residential developments, ROW has restored a seasonal pond for waterfowl, developed and implemented an array of techniques to ensure habitat connectivity both above and below access roads and perfected erosion and runoff control techniques with positive impacts on water quality and aquatic habitat. Other initiatives being undertaken by ROW include: - Reestablishing the ridge to reef biological corridor from Estero Congo and mangrove to the intact dry tropical forest leading up to the La Florida watershed ridge. Efforts will benefit wildlife preservation via restoration, corridor and habitat expansion and species diversification in reforestation efforts. - Restoring additional wildlife ponds. - Completing more thorough surveys of wildlife populations involving local students as para-taxonomists. - Continuing outreach at the community level, including wildlife identification and training of eco-tourism guides. Though ROW is concerned with protecting all of the native wildlife in our watershed, we maintain a particular focus on the region’s birds, turtles and monkeys. To maximize our effectiveness, we partner with local organizations that are knowledgeable about these species and already have the staff and funding to support specific programs. Working with our partner, the Sea Turtle Conservation Project (Asociación de Conservación Vida Verdiazul), ROW is helping to reestablish vegetative cover at the mouth of the Nandomojo River near Playa Junquillal. For three endangered species of sea turtles, Playa Junquillal is one of the most important nesting beaches in Costa Rica.
Research, funded by the World Wildlife Fund and the National Wildlife Federation, is ongoing to understand why the egg hatch rates are so high at Junquillal. Eco-tourism efforts are underway to protect the eggs from collectors who subscribe to the local folklore that eating them enhances male virility. ROW has partnered with Save the Monkeys (Asociación Salvemonos) to assist local troops of Howler monkeys in navigating overhead powerlines, roadways and commercial and residential developments. Save the Monkeys has worked successfully to build awareness and generate support for implementing integrated solutions. Their efforts currently include population and migration studies, habitat restoration, reforestation of critical corridors, and building "monkey bridges" where the Howlers' aerial corridors have been interrupted. ROW coordinated an important initiative to study and enhance migratory bird habitat in the Nandomojo watershed. The Nandavi Neotropical Bird Project was funded by a grant from the US Fish & Wildlife Service, secured in partnership with Applied Ecological Services. Activities conducted during the 2007-2008 project year included surveying bird populations in mono-crop plantations and mixed-species forests; establishing a transition strategy for restoring plantations into mixed-diversity forests that support more native and migratory bird diversity; and initiating local education efforts to support the watershed restoration vision and plan.
Specific achievements were as follows: - Conducted vegetation survey of the different ecosystems in the watershed with emphasis on habitat plants - Completed Junquillal mangrove ecological situation analysis as part of ongoing cooperation with the conservation of the mangrove ecosystem in Junquillal - Assessed sociological role of local networks and key individuals for restoration work - Developed bird identification course for school children—a step towards training local eco-tourism guides - Held leadership workshop for locals—in collaboration with Universidad Nacional and FUNGAP
LILONGWE, MALAWI - Malawi this month opened the first African Drone and Data Academy, with support from the United Nations Children's Fund, UNICEF. The academy aims to improve drone technology skills across Africa, beginning with Malawi and neighboring countries. Karen Asaba developed an interest in drones at Uganda Flying Labs, a Kampala-based drone mapping and data hub. As a student at Malawi’s just-opened African Drone and Data Academy, she gets to learn how to build one. “Right now, we are learning how to assemble a drone from the start, considering its weight, considering the central gravity, considering the GPS and all the electronics that are involved in making the drone,” she said. Asaba is one of 26 students from across Africa in the first three-month course at the academy, learning to construct and pilot drones. The United Nations Children's Fund (UNICEF) is backing the program, which this year is expected to train 150 students. UNICEF says the academy, and the launch of Africa’s first drone corridor in Malawi in 2016, will promote drones for development and humanitarian use. Rudolf Schwenk, the country representative for UNICEF in Malawi, says the drones will have broad practical applications. “For example, transporting medical supplies to remote areas or transporting samples very fast, where it will take a lot of time to transport them. We have also worked on emergency preparedness and response because with data and drone imagery, you can see where flooding will happen,” Schwenk said. The drone course was developed with Virginia Polytechnic Institute and State University, better known as Virginia Tech. Kevin Kochersberger, an associate professor at Virginia Tech, explained the course's components. “We go through three modules in this program. They have gone [through] drone logistics, drone technologies so they become very functional in drone[s] - not only being pilots, but they operate and maintain the drones as well,” Kochersberger said.
The drone academy has inspired some students, like Thumbiko Nkwawa Zingwe, to reach for the stars. “I have a vision that I can start a first Malawian space agency, which can be utilizing geo-information data for different applications. For example, here in Malawi we are so susceptible to floods as a geo-hazardous anomaly,” Zingwe said. The African Drone and Data Academy’s first graduates are expected in March. The academy plans to partner with Malawi University of Science and Technology for a free master's degree program in drone technology by 2022.
In January 2017, the CIA released a large number of newly declassified documents about information collected on the Soviet Union. One of those documents included two pages of Russian jokes about the Soviet Union. Headed “Soviet Jokes for the DDCI” (Deputy Director of Central Intelligence), the jokes make reference to Mikhail Gorbachev, so they date from the 1980s or later. The jokes are, surprisingly, directed at all Soviet leaders, from Lenin to Brezhnev. It’s good to know there were chances for levity behind the Iron Curtain. One thing’s for sure: people didn’t love Communism as much as the Russians led us to believe. A worker standing in a liquor line says, “I have had enough, save my place, I am going to shoot Gorbachev.” Two hours later he returns to claim his place in line. His friends ask, “Did you get him?” “No,” he replied. “The line there was even longer than the line here.” Q: What’s the difference between Gorbachev and Dubcek*? A: Nothing, but Gorbachev doesn’t know it yet. *(Alexander Dubcek led the Czech resistance to the Warsaw Pact during the Prague Spring of 1968, but was forced to resign) Sentence from a schoolboy’s weekly composition class essay: “My cat just had seven kittens. They are all communists.” Sentence from the same boy’s composition the following week: “My cat’s seven kittens are all capitalists.” Teacher reminds the boy that the previous week he had said the kittens were communists. “But now they’ve opened their eyes,” replies the child. A Chukchi (a tribe of Eskimo-like people on Russia’s northwest coast) is asked what he would do if the Soviet borders were opened. “I’d climb the highest tree,” he replies. Asked why, he responds: “So I wouldn’t get trampled in the stampede out!” Then he is asked what he would do if the U.S. border is opened.
“I’d climb the highest tree,” he says, “so I can see the first person crazy enough to come here.” A joke heard in Arkhangelsk has it that someone happened to call the KGB headquarters just after a major fire. “We cannot do anything. The KGB has just burned down!” he was told. Five minutes later, he called back and was told again the KGB had burned. When he called a third time, the telephone operator recognized his voice and asked “why do you keep calling back? I just told you the KGB has burned down.” “I know,” the man said. “I just like to hear it.” A train bearing Stalin, Lenin, Khrushchev, Brezhnev, and Gorbachev stops suddenly when the tracks run out. Each leader applies his own, unique solution. Lenin gathers workers and peasants from miles around and exhorts them to build more track. Stalin shoots the train crew when the train still doesn’t move. Khrushchev rehabilitates the dead crew and orders the tracks behind the train ripped up and relaid in front. Brezhnev pulls down the curtains and rocks back and forth, pretending the train is moving. And Gorbachev calls a rally in front of the locomotive, where he leads a chant: “No tracks! No tracks! No tracks!” Ivanov: Give me an example of perestroika*. Sidorov: (Thinks) How about menopause? * The literal meaning of perestroika is “restructuring” – usually referring to economic liberalization by Gorbachev. An old lady goes to the Gorispolkom* with a question, but by the time she gets to the official’s office she has forgotten the purpose of her visit. “Was it about your pension?” the official asks. “No, I get 20 Rubles a month, that’s fine,” she replies. “About your apartment?” “No, I live with three people in one room of a communal apartment, I’m fine,” she replies. She suddenly remembers: “Who invented Communism? –– the Communists or scientists?” The official responds proudly, “Why the Communists of course!” “That’s what I thought,” the babushka** says. 
“If the scientists had invented it, they would have tested it first on dogs!” * Gorispolkom is the local political authority of a Soviet city. ** A babushka is another term for older woman or grandmother. An American tells a Russian that the United States is so free he can stand in front of the White House and yell “To hell with Ronald Reagan.” The Russian replies: “That’s nothing. I can stand in front of the Kremlin and yell, ‘to hell with Ronald Reagan’ too.” A man goes into a shop and asks “You don’t have any meat?” “No,” replies the sales lady. “We don’t have any fish. It’s the store across the street that doesn’t have any meat.” A man is driving with his wife and small child. A militiaman pulls them over and makes the man take a breathalyzer test. “See,” the militiaman says, “you’re drunk.” The man protests that the breathalyzer must be broken and invites the cop to test his wife. She also registers as drunk. Exasperated, the man invites the cop to test his child. When the child registers drunk as well, the cop shrugs and says “Yes, perhaps it is broken,” and sends them on their way. Out of earshot the man tells his wife, “See, I told you it wouldn’t hurt to give the kid five grams of vodka.” We can’t put the whole Milky Way on a scale, but astronomers have been able to come up with one of the most accurate measurements yet of our galaxy’s mass, using NASA’s Hubble Space Telescope and the European Space Agency’s Gaia satellite. The Milky Way weighs in at about 1.5 trillion solar masses (one solar mass is the mass of our Sun), according to the latest measurements. Only a tiny percentage of this is attributed to the approximately 200 billion stars in the Milky Way and includes a 4-million-solar-mass supermassive black hole at the center. Most of the rest of the mass is locked up in dark matter, an invisible and mysterious substance that acts like scaffolding throughout the universe and keeps the stars in their galaxies.
Earlier research dating back several decades used a variety of observational techniques that provided estimates for our galaxy’s mass ranging between 500 billion and 3 trillion solar masses. The improved measurement is near the middle of this range. “We want to know the mass of the Milky Way more accurately so that we can put it into a cosmological context and compare it to simulations of galaxies in the evolving universe,” said Roeland van der Marel of the Space Telescope Science Institute (STScI) in Baltimore, Maryland. “Not knowing the precise mass of the Milky Way presents a problem for a lot of cosmological questions.” On the left is a Hubble Space Telescope image of a portion of the globular star cluster NGC 5466. On the right, Hubble images taken ten years apart were compared to clock the cluster’s velocity. A grid in the background helps to illustrate the stellar motion in the foreground cluster (located 52,000 light-years away). Notice that background galaxies (top right of center, bottom left of center) do not appear to move because they are so much farther away, many millions of light-years. (NASA, ESA and S.T. Sohn and J. DePasquale) The new mass estimate puts our galaxy on the beefier side, compared to other galaxies in the universe. The lightest galaxies are around a billion solar masses, while the heaviest are 30 trillion, or 30,000 times more massive. The Milky Way’s mass of 1.5 trillion solar masses is fairly normal for a galaxy of its brightness. Astronomers used Hubble and Gaia to measure the three-dimensional movement of globular star clusters — isolated spherical islands, each containing hundreds of thousands of stars that orbit the center of our galaxy. Although we cannot see it, dark matter is the dominant form of matter in the universe, and it can be weighed through its influence on visible objects like the globular clusters. The more massive a galaxy, the faster its globular clusters move under the pull of gravity.
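The study's actual modeling is far more sophisticated, but the underlying physics can be sketched with the standard enclosed-mass relation for a tracer on a roughly circular orbit, M ≈ v²r/G: the faster a cluster moves at a given radius, the more mass must lie inside its orbit. The speed and radius in the example below are assumed illustrative values, not measurements from the paper.

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # one solar mass, kg
LIGHT_YEAR = 9.461e15  # meters per light-year

def enclosed_mass_solar(v_km_s, r_light_years):
    """Rough mass (in solar masses) enclosed within radius r,
    inferred from the circular-orbit speed v of a tracer: M = v^2 r / G."""
    v = v_km_s * 1e3                 # km/s -> m/s
    r = r_light_years * LIGHT_YEAR   # light-years -> meters
    return v * v * r / (G * M_SUN)
```

Plugging in a typical halo-cluster speed of ~200 km/s at 65,000 light-years (the outer edge of the Gaia sample mentioned below) gives roughly 2 × 10¹¹ solar masses enclosed — consistent with only a fraction of the full 1.5-trillion-solar-mass total lying inside that radius, with the rest in the extended dark matter halo.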
Most previous measurements have been along the line of sight to globular clusters, so astronomers know the speed at which a globular cluster is approaching or receding from Earth. However, Hubble and Gaia record the sideways motion of the globular clusters, from which a more reliable speed (and therefore gravitational acceleration) can be calculated. The Hubble and Gaia observations are complementary. Gaia was exclusively designed to create a precise three-dimensional map of astronomical objects throughout the Milky Way and track their motions. It made exacting all-sky measurements that include many globular clusters. Hubble has a smaller field of view, but it can measure fainter stars and therefore reach more distant clusters. The new study augmented Gaia measurements for 34 globular clusters out to 65,000 light-years, with Hubble measurements of 12 clusters out to 130,000 light-years that were obtained from images taken over a 10-year period. When the Gaia and Hubble measurements are combined as anchor points, like pins on a map, astronomers can estimate the distribution of the Milky Way’s mass out to nearly 1 million light-years from Earth. “We know from cosmological simulations what the distribution of mass in the galaxies should look like, so we can calculate how accurate this extrapolation is for the Milky Way,” said Laura Watkins of the European Southern Observatory in Garching, Germany, lead author of the combined Hubble and Gaia study, to be published in The Astrophysical Journal. These calculations based on the precise measurements of globular cluster motion from Gaia and Hubble enabled the researchers to pin down the mass of the entire Milky Way. The earliest homesteaders of the Milky Way, globular clusters contain the oldest known stars, dating back to a few hundred million years after the big bang, the event that created the universe.
They formed prior to the construction of the Milky Way’s spiral disk, where our Sun and solar system reside. “Because of their great distances, globular star clusters are some of the best tracers astronomers have to measure the mass of the vast envelope of dark matter surrounding our galaxy far beyond the spiral disk of stars,” said Tony Sohn of STScI, who led the Hubble measurements. The international team of astronomers in this study are Laura Watkins (European Southern Observatory, Garching, Germany), Roeland van der Marel (Space Telescope Science Institute, and Johns Hopkins University Center for Astrophysical Sciences, Baltimore, Maryland), Sangmo Tony Sohn (Space Telescope Science Institute, Baltimore, Maryland), and N. Wyn Evans (University of Cambridge, Cambridge, United Kingdom). The Hubble Space Telescope is a project of international cooperation between NASA and ESA (European Space Agency). NASA’s Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope. The Space Telescope Science Institute (STScI) in Baltimore, Maryland, conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy in Washington, D.C. This article originally appeared on NASA. Follow @NASA on Twitter. It’s never too soon to start planning an epic spring or summer vacation. For disabled veterans living stateside, 2020 could be the best year yet for outdoor recreation. This is because the National Park Service offers disabled veterans an amazing deal on their next visit. From Hawai’i Volcanoes National Park to Dry Tortugas National Park, with Zion and the Great Smoky Mountains in between, they’re all at our fingertips – and it’s now totally free. More than 330 million people visit America’s most beautiful parks every year, and the parks are about to see a huge influx from American veterans due to this partnership between the U.S.
Department of the Interior, National Park Service, Forest Service, Fish and Wildlife Service, Bureau of Land Management, the Army Corps of Engineers and the Bureau of Reclamation. Disabled veterans can get an Access Pass for their vehicles, granting free entry to anyone in that vehicle. On top of entry, the Access Pass gives holders a discount on expanded amenity fees at many National Park sites, which can include campsite fees, swimming, boat launches, and group tours. To enter the parks for free, all a veteran has to do is submit proper documentation of his or her service-connected disability, along with proof of identification and a processing fee. A Veterans Administration letter of service connection is enough to satisfy this requirement, and the passes can even be ordered online. This could be you. (Emily Ogden/National Park Service) On top of the disability award letter from the VA, qualified veterans can also use a VA summary of benefits, or proof of SSDI income, to prove their disability status. Once proof of residency is also established, and the processing fee is paid, all the veteran has to do is wait. The new lifetime Access Pass will arrive 3-5 weeks after sending the application. If online payments aren’t available to the veteran, the passes can also be acquired by paper mail or by stopping into an Access Pass-issuing facility. The documentation is still required, but getting the pass is a breeze. The national parks really are full of amazing natural wonders, which make this lifetime pass one of the biggest benefits of having served. The NPS is full of places you’ve always heard about, but likely have never seen: Big Bend, Arches, Denali, Sequoia, Crater Lake, Petrified Forest, Glacier Bay, Hot Springs, and so much more. Summer vacations will never be the same.
If you were worried that a Marvel Studios version of Deadpool would somehow make the anti-hero less vulgar and more kid-friendly, Ryan Reynolds wants you not to worry. Speaking on Christmas Eve on Live With Kelly and Ryan, the Deadpool star said that even though the threequel is being developed at a new, more family-friendly studio, fans should still expect it to be a little bit raunchy. “Yeah, we’re working on it right now with the whole team,” Reynolds said. “We’re over at Marvel [Studios] now, which is the big leagues all a sudden. It’s kind of crazy. So yeah, we’re working on it.” Previously, Reynolds doubled down on the idea that Deadpool 3 would be R-rated, which is something a lot of folks have wondered about since the rights to Deadpool transferred over to Disney during the big Fox-Disney merger in early 2019. For those who are maybe confused, prior to 2018, Deadpool movies existed in the 20th Century Fox superhero universe, which is why references to the existing X-Men movies cropped up in Deadpool 2. But now, Deadpool and the X-Men are all under the same roof, which is how it’s always been in the comic books. And while there’s been talk that the X-Men will be rebooted entirely in the sprawling Marvel Cinematic Universe, it seems like Deadpool will remain Deadpool. At least for now. Reynolds didn’t mention a release date, so until that happens, we can’t really know for sure. Last Christmas, in 2018, Fox did release a PG version of Deadpool 2 called Once Upon a Deadpool, which suggests there is a way to keep the jerky version of Wade Wilson kid-friendly. In fairness, a Deadpool who doesn’t swear is fine. As long as he has Fred Savage to troll him, we’re good. This article originally appeared on Fatherly. Follow @FatherlyHQ on Twitter. At first glance, it might seem obvious why Japan would choose to take on a country like the United States. While Americans were still struggling with the Great Depression, Japan’s economy was growing and hot.
Japan had hundreds of thousands of men in uniform and a string of military victories under its belt. The U.S. was a third-rate military power whose day had come and gone in World War I – and Americans weren’t thrilled about another war. But the Japanese seriously underestimated one important factor: The American Worker. Up yours, Japanese Empire. Misjudging the United States’ capacity for war during the 1930s was Japan’s fatal mistake. Sure, we’d had a little too much fun at the speakeasy during the 1920s, but we were poised for the most incredible puke and rally the world had ever known, and anyone looking for it would have been able to see it. Unfortunately, the Japanese were a little high on their own supply at the time. Convinced of Japanese superiority, they thought themselves nigh-invincible and believed the U.S. would crumble when faced with a choice to unify or die. In reality, things were much different. The U.S. had twice the population of Japan and 17 times more tax revenues. Americans produced five times more steel, seven times more coal, and could outproduce the Japanese automobile industry by a factor of 80:1. The American worker had the highest per capita output of any worker in the world. What’s more, we were one of very few countries willing to let women work in our very modern factories. So don’t f*ck with the Arsenal of Democracy. Even before the war, U.S. industrial capacity was greater than that of all the Axis countries combined. As a matter of fact, the United States’ output was almost greater than that of all the other major powers involved in the war. And that was before the U.S. declaration of war allowed the President to take control of American industry. By the time the U.S. entered the war, the Lend-Lease Act had already pulled America out of its depression and was basically supplying the Allied powers with American-built equipment and vehicles as it had for years. All we had to do was start using them ourselves. As time went on, the U.S.
economy was growing by 15 percent annually, while every other belligerent saw a plateau in growth or the destruction of their economies altogether. By the end of the war, American industrial output wasn’t even close to overheating – we were just getting started. A socially conscious hacker known as “The Jester” put one over on the Democratic People’s Republic of Korea recently. To add to his long list of hilarious practical jokes with a socially conscious message, the hacktivist hijacked a propaganda-laden North Korean shortwave radio station. “Comrades! We interrupt regular scheduled Russian Foreign Affairs Website programming to bring you the following important message,” he wrote. “Knock it off. You may be able to push around nations around you, but this is America. Nobody is impressed.” While no one knows who he is, The Jester is a self-proclaimed patriot hacker, who thinks Anonymous is a bunch of “blowhards” whose work amounts to a “hill of beans.” Evidence in The Jester’s work makes people believe he is either a military veteran or a former military contractor — he even leaves a calling card for his work: “Tango Down.” Either way, he’s on our side. A god among us has hijacked 6400kHz (North Korean station) and is playing the Final Countdown https://t.co/rPJ1aEccUs The North Korean radio station hit by The Jester is used to broadcast coded messages and often used as a warning post for outside media before the regime does something provocative. It also re-broadcasts programming from the appropriately-named Pyongyang Broadcasting Station… aka “Pyongyang BS.” The Humvee (High-Mobility Multi-purpose Wheeled Vehicle) is a classic icon of today’s military, often seen wherever there is a war or a disaster. However, just as the Jeep proved to be not quite what would be needed for World War II, the Humvee proved to have some shortfalls during the War on Terror. The Joint Light Tactical Vehicle from Oshkosh is intended to at least partially replace the Humvee.
The Humvee will be sticking around – possibly until 2050 – in many of the support units, as opposed to fighting in front-line combat situations. The big difference will be in the level of protection. Humvees, even when up-armored, couldn’t completely protect troops from the effects of roadside bombs and other improvised explosive devices. The JLTV addresses that by providing MRAP-level protection in a lightweight package that can be hoisted by a helicopter like the CH-47F Chinook or the CH-53K King Stallion. The first of the JLTVs will be delivered to the 10th Mountain Division at Fort Drum, followed by the 173rd Airborne Brigade in Italy. Both units are expected to receive their vehicles in 2019. The JLTV has four variants in service: the M1278 Heavy Gun Carrier, the M1279 Utility, the M1280 General Purpose, and the M1281 Close Combat Weapons Carrier. Check the video below to see how the JLTV and the Humvee stack up against each other. Marines are about to face far less predictable training that will challenge young leaders to outsmart sophisticated enemies with high-tech weapons and tools. More force-on-force freestyle training will replace scripted scenarios in the years ahead, Lt. Gen. David Berger, head of Marine Corps Combat Development Command, told Military.com. “We need to teach Marine leaders how to think on their feet,” he said. “We’re going to see a lot more of that graduate- or varsity-level thinking leader, and I need them figuring out how they can outthink me.” The move follows a new national defense strategy that warns of long-term threats from strategic competitors like Russia and China. To be ready, the Marine Corps “must move beyond ‘scripted’ live-fire maneuvers and incorporate more force-on-force training in a free-play environment,” Commandant Gen. Robert Neller wrote in a Sept. 26, 2018 white letter to senior leaders.
“To meet the challenges of a peer-to-peer fight, we must incorporate independent actions and opposing will in our training at all levels,” Neller wrote. “Just as iron sharpens iron, an aggressive [force-on-force] training regime will test the limits of our capabilities, refine our actions, and prepare us for the fight to come.” Marines with 1st Battalion, 3rd Marine Regiment, dart across a danger area to clear remaining compounds in their area of operation at Marine Corps Training Area Bellows, Hawaii, Sept. 30, 2013. (U.S. Marine Corps photo by Cpl. Matthew Callahan) Much of that will take shape at Marine Corps Air-Ground Combat Center Twentynine Palms in California, Berger said, where units complete the Integrated-Training Exercises that prepare them for combat. The live-fire maneuver training Marines have practiced for decades and the simulations that ramped up during the wars in Iraq and Afghanistan won’t go away. That training will just be balanced with peer-to-peer fights during which one group of Marines is tasked with playing the good guys and the others, the foe. And there are benefits to being on either side of those mock fights, Berger said. “We’ll get better, but the training will also be more dynamic,” he said. “We need to fight as the foe would fight, so think about how they would be organized, trained and equipped. We also must better understand how they would use rockets, drones, planes and more.” Marine leaders are still working on guidance that will better shape the plans for force-on-force training. In the meantime, Neller said the entire service must develop the mindset and skills necessary to prevail in the coming fight. “We must ruthlessly test ourselves, conduct honest after-action reviews, make refinements and test ourselves again,” he wrote. This article originally appeared on Military.com. Follow @militarydotcom on Twitter. 
Nazi troops invaded Poland on September 1, 1939, despite the best efforts of Captain Witold Pilecki and his fellow Polish soldiers. On November 9th of that same year, Witold and Major Wlodarkiewicz founded the Tajna Armia Polska (TAP, or Polish Secret Army), an underground organization that was eventually consolidated with other resistance forces into the Home Army. Not long after the formation of organized, widespread Polish resistance, its members began hearing reports of the conditions within the newly constructed Auschwitz concentration camp, put into operation in the spring of 1940. Those first reports originated with prisoners released from the camp and from civilians such as railroad employees and local residents. In order to cut through the very troubling rumors and figure out exactly what was going on there, Pilecki came up with a bold plan: become a prisoner at Auschwitz. With a little convincing, his superiors eventually agreed to allow him to go. In order to help protect his wife and children after he was captured, he took on the alias Tomasz Serafinski, much to the chagrin of the real Tomasz Serafinski, who was thought to be dead at the time (hence his papers and identity being chosen), but was not. Later, the real Tomasz had some trouble because of Pilecki using his papers and name (more on this in the Bonus Facts below). According to Eleonora Ostrowska, owner of the apartment Pilecki was at when he was taken, as a Nazi roundup began (a lapanka, in which a city block would suddenly be closed off and most of the civilians inside rounded up and sent to slave labor camps, or sometimes simply mass-executed on the spot), a member of the resistance came to help Pilecki hide.
Instead, Ostrowska said “Witold rejected those opportunities and didn’t even try to hide in my flat.” She reported that soon, a German soldier knocked at the door and Pilecki whispered to her “Report that I have fulfilled the order,” and then opened the door and was taken by the soldier along with about 2,000 other Poles in Warsaw on September 19, 1940. It is important to note here that he didn’t really know if he’d be sent to Auschwitz at this point. As Dr. Daniel Paliwoda noted of Pilecki’s capture, “Since the AB Aktion and roundups were still going on, the Nazis could have tortured and executed him in occupied Warsaw’s Pawiak, Mokotów, or any other Gestapo-run prison. They could have taken him to Palmiry to murder him in the forest. At the very least, they could have sent him to a forced labor colony somewhere in Germany.” While he was willingly surrendering with the hope of being sent to Auschwitz, Pilecki lamented the behavior of his fellow countrymen during the roundup. “What really annoyed me the most was the passivity of this group of Poles. All those picked up were already showing signs of crowd psychology, the result being that our whole crowd behaved like a herd of passive sheep. A simple thought kept nagging me: stir up everyone and get this mass of people moving.” As he had hoped (perhaps the only person to ever hope such a thing), he was sent to Auschwitz. He later described his experience upon arrival: We gave everything away into bags, to which respective numbers were tied. Here our hair of head and body were cut off, and we were slightly sprinkled by cold water. I got a blow in my jaw with a heavy rod. I spat out my two teeth. Bleeding began. From that moment we became mere numbers – I wore the number 4859… We were struck over the head not only by SS rifle butts, but by something far greater. Our concepts of law and order and of what was normal, all those ideas to which we had become accustomed on this Earth, were given a brutal kicking. 
Pilecki also noted that one of the first indications he observed that Auschwitz was not just a normal prison camp was the lack of food given to prisoners; by his estimate, the rations given to prisoners were “calculated in such a way that people would live for six weeks.” He also noted that a guard at the camp told him, “Whoever will live longer — it means he steals.” Assessing the conditions inside Auschwitz was only part of Pilecki’s mission. He also took on responsibility for organizing a resistance force within the camp, the Zwiazek Organizacji Wojskowej (ZOW). The goals of ZOW included improving inmate morale, distributing any extra food and clothing, setting up an intelligence network within the camp, training prisoners to eventually rise up against their guards and liberate Auschwitz, and getting news in and out of Auschwitz. Ensuring the secrecy of ZOW led Pilecki to create cells within the organization. He trusted the leaders of each cell to withstand interrogation by the guards, but even so, each leader only knew the names of the handful of people under his command. This limited the risk to the entire organization should an informant tip off a guard or a member be caught. Pilecki’s first reports to the Polish government and Allied forces left the camp with released prisoners. But when releases became less common, passing reports on to the outside world depended largely on the success of prisoner escapes, such as one that occurred on June 20, 1942, when four Poles managed to dress up as members of the SS, weapons and all, and steal an SS car, which they boldly drove out of the main gate of the camp. A cobbled-together radio, built over the course of seven months as parts could be acquired, was used for a while in 1942 to transmit reports until “one of our fellow’s big mouth” resulted in the Nazis learning of the radio, forcing the group to dismantle it before they were caught red-handed and executed.
Pilecki’s reports were the first to mention the use of Zyklon B, a poisonous hydrogen cyanide gas, and the gas chambers used at the camp. He saw the first use of Zyklon B in early September 1941, when the Nazis used it to kill 850 Soviet POWs and Poles in Block 11 of Auschwitz I. He also learned of the gas chambers at Auschwitz II, or Auschwitz-Birkenau, from other resistance members after construction of the camp began in October 1941. ZOW also managed to keep a fairly accurate running log of the number of inmates being brought into the camp and the estimated number of deaths, noting at one point, “Over a thousand a day from the new transports were gassed. The corpses were burnt in the new crematoria.” All of the reports were sent to the Polish Government in Exile in London, which in turn forwarded the information to other Allied forces. However, on the whole, the Allies thought the reports of mass killings, starvation, brutal and systemic torture, gas chambers, medical experimentation, etc. were wildly exaggerated and questioned the reliability of Pilecki’s reports. (Note: During Pilecki’s nearly three years there, several hundred thousand people were killed at Auschwitz and, beyond the death and horrific tortures, countless others were experimented on in a variety of ways by such individuals as the “Angel of Death,” Dr. Josef Mengele. In total, it is estimated that between 1 and 1.5 million people were killed at the camp.) Significant doubt surrounding the accuracy of his reports meant Pilecki’s plan to bring about an uprising inside Auschwitz never came to fruition. Pilecki had managed to convince his network of resistance fighters inside the camp that they could successfully take control for a short while and escape if the Allies and Polish Underground provided support. He had envisioned airdrops of weapons and possibly even Allied soldiers invading the camp.
However, the Allies never had any intention of such an operation and the local Polish resistance in Warsaw refused to attack due to the large number of German troops stationed nearby. The Nazi guards began systematically eliminating members of the ZOW resistance in 1943 and so, with his reports being ignored, Pilecki decided he needed to plead his case in person for intervention in Auschwitz. In April of 1943, he got his chance. After handing over leadership of ZOW to his top deputies, he and two others were assigned the night shift at a bakery which was located outside the camp’s perimeter fence. At an opportune moment on the night of the 26th, they managed to overpower a guard and cut the phone lines. The three men then made a run for it out of the back of the bakery. As they ran, Pilecki stated, “Shots were fired behind us. How fast we were running, it is hard to describe. We were tearing the air into rags by quick movements of our hands.” It should be noted that anyone caught helping an Auschwitz escapee would be killed along with the escaped prisoner, something the local populace knew well. Further, the 40 square kilometers around Auschwitz were extremely heavily patrolled and the escapees’ shaved heads, tattered clothes, and gaunt appearance would give them away in a second to anyone who saw them. Despite this, all three not only survived the initial escape, but managed to get to safety without being recaptured. Unfortunately, Pilecki’s plan to garner support for liberating Auschwitz never materialized. After arriving at the headquarters of the Home Army on August 25, 1943 and desperately pleading his case for the Home Army to put all efforts into liberating Auschwitz, he left feeling “bitter and disappointed” when the idea was discarded as being too risky. 
In his final report on Auschwitz, he further vented his frustration at his superiors’ “cowardliness.” After this, Pilecki continued to fight for the Home Army, as well as trying to aid ZOW in any way he could from the outside. He also played a role in the Warsaw Uprising that began in August of 1944, during which he was captured by German troops in October of that year, and he spent the rest of World War II as a POW. Pilecki wrote the final version of his report on Auschwitz (later published in a book titled The Auschwitz Volunteer: Beyond Bravery) after the war while spending time in Italy under the 2nd Polish Corps, before being ordered back to Poland by General Wladyslaw Anders to gather intelligence on communist activities there. You see, the invading Germans had been replaced by another occupying power: the Soviet-backed Polish Committee of National Liberation. This was a puppet provisional government set up on July 22, 1944 in opposition to the Polish Government in Exile, the latter of which was supported by the majority of the Polish people and the West. During his two years at this post, he managed to, among many other things, gather documented proof that the voting results of the People’s Referendum of 1946 were heavily falsified by the communists. Unfortunately, there was little the Polish Government in Exile could do. Even when his cover was blown in July of 1946, Pilecki soldiered on and refused to leave the country, continuing his work collecting documented evidence of the many atrocities against the Polish people being committed by the Soviets and their puppet government in Poland. For this, he was ultimately arrested on May 7, 1947 by the Ministry of Public Security. He was extensively tortured for many months afterward, including having his fingernails ripped off and his ribs and nose broken. He later told his wife of his life in this particular prison, “Oświęcim [Auschwitz] compared with them was just a trifle.” Finally, he was given a show trial.
When fellow survivors of Auschwitz pled with the then-Prime Minister of Poland, Józef Cyrankiewicz (himself a survivor of Auschwitz and a member of the resistance within the camp), for the release of Pilecki, Cyrankiewicz instead wrote to the judge, telling him to throw out the record of Pilecki’s time as a prisoner in Auschwitz. This was a key piece of evidence in Pilecki’s favor, given that one of the things he was being accused of was being a German collaborator during the war. And so it was that, as part of a crackdown by the new Polish government against former members of the Home Army resistance, Pilecki was convicted of being a German collaborator and a spy for the West, among many other charges, and ultimately sentenced to death via a gunshot to the head. The sentence was carried out on May 25, 1948 by Sergeant Piotr Smietanski, “The Butcher of Mokotow Prison.” From then on, mention of Pilecki’s name and his numerous heroic acts was censored in Poland, something that didn’t change until 1989, when the communist Polish government was overthrown. Witold Pilecki’s last known words were reportedly, “Long live free Poland.” You might think it strange that Pilecki frequently, quite willingly, threw himself into incredibly dangerous situations despite the fact that he had a wife and kids back home. Polish actor Marek Probosz, who studied Pilecki extensively before portraying him in The Death of Captain Pilecki, stated of this, “Human beings were the most precious thing for Pilecki, and especially those who were oppressed. He would do anything to liberate them, to help them.” Mirroring this sentiment, Pilecki’s son, Andrzej, later said his father “would write that we should live worthwhile lives, to respect others and nature. He wrote to my sister to watch out for every little ladybug, to not step on it but place it instead on a leaf because everything has been created for a reason.
‘Love nature.’ He instructed us like this in his letters.” It wasn’t just his children he taught to respect life at all levels. Two years after Pilecki was executed, and at a time when his family was struggling because of it, a man approached Pilecki’s teenage son and stated, “I was in prison [as a guard] with your father. I want to help you because your father was a saint. Under his influence, I changed my life. I do not harm anyone anymore.” As mentioned, the real Tomasz Serafinski was not dead, as Pilecki had thought when he took his papers and assumed Tomasz’s identity in order to be captured. After Pilecki’s escape from Auschwitz, the real Tomasz was arrested on December 25, 1943, accused of having escaped from Auschwitz. He was then investigated for a few weeks, including a fair amount of pretty brutal strong-arming, but was finally released on January 14, 1944, when it was determined he was not, in fact, the same individual who had escaped from Auschwitz. Afterwards, Pilecki and Tomasz actually became friends, and though Pilecki was killed, according to Jacek Pawlowicz, “That friendship is alive to this day, because Andrzej Pilecki visits their family and is very welcome there.” In the early 2000s, certain surviving officials who were involved in Pilecki’s trial, including the prosecutor, Czeslaw Lapinski, were put up on charges of being accomplices in the murder of Witold Pilecki. Pilecki also fought in WWI in the then newly formed Polish army. After that, he fought in the Polish-Soviet War (1919-1921). At one point while within Auschwitz, Pilecki and his fellow ZOW members managed to cultivate typhus and infect various SS personnel. When Egypt bought the two Mistral-class amphibious assault ships that France declined to sell to Russia, one thing that didn’t come with those vessels was the armament.
According to the “16th Edition of Combat Fleets of the World,” Russia had planned to install a mix of SA-N-8 missiles and AK-630 Gatling guns on the vessels if France had sold them to the Kremlin. But no such luck for Egypt, which had two valuable vessels that were unarmed – or, in the vernacular, sitting ducks. And then, all of a sudden, they weren’t unarmed anymore. A video released by the Egyptian Ministry of Defense celebrating the Cleopatra 2017 exercise with the French navy shows that the Egyptians have channeled MacGyver — the famed improviser played by Richard Dean Anderson — to fix the problem. Scenes from the video show at least two AN/TWQ-1 Avenger air-defense vehicles — better known as the M1097 — tied down securely on the deck of one of the vessels, which have been named after Egyptian leaders Gamal Abdel Nasser and Anwar Sadat. The Humvee-based vehicles carry up to eight FIM-92 Stinger anti-air missiles and also have an M3P .50-caliber machine gun capable of firing up to 1,200 rounds a minute. The Mistral-class ships in service with the French navy are typically equipped with the Simbad point-defense system. Ironically, the missile used in the Simbad is a man-portable SAM also called Mistral. The vessels displace 16,800 tons, have a top speed of 18.8 knots and can hold up to 16 helicopters and 900 troops. You can see the Egyptian Ministry of Defense video below, showing the tied-down Avengers serving as air-defense assets for the Egyptian navy’s Mistrals. Every professional athlete will tell you there’s a science behind elite performance. Every coach will tell you there’s one for team dynamics as well. And every military leader will say their best-performing units are men and women who understand the importance of not just bettering themselves, but constantly working toward improving the group as a whole. One Green Beret has cracked the code on understanding the battlefield and translating it to the professional playing field.
Jason Van Camp is the founder of Mission Six Zero, a leadership development company focused on taking teams and corporate clients to the next level. “We have some of the best military leaders you’ve ever seen,” said Van Camp. From Medal of Honor recipients Flo Groberg and Leroy Petry, to Green Beret turned Seattle Seahawk Nate Boyer, to plenty of Marines, Delta Force operators, Rangers and Navy SEALs, their team is stacked with experience. But that’s not where it ends. Van Camp has put research behind performance mechanisms with an equally impressive team of scientists to qualify their data and translate it into something teams can implement. One of the key factors to their success? “Deliberate discomfort,” said Van Camp. “Once you deliberately and voluntarily choose the harder path, good things will happen for you and for your team. You have to get comfortable with being uncomfortable.” The reviews of the program speak for themselves. “I thought I knew where I stood in the football world,” said Marcel Reese, a former NFL player. “But after my experience with Mission Six Zero, along with my team, I learned more than I could have ever imagined… mostly about myself as a teammate, leader and a man in general. I would strongly encourage all teams to work with these guys.” Van Camp shared a story about one of the teams he worked with. A player asked him if the workshop was really going to make him a better player. He responded, “It’s not about making you a better player, it’s about making the guy to your left and to your right a better player.” Van Camp took his lessons and parlayed them into a book with a title reflecting their greatest theory: “Deliberate Discomfort.” In it, Van Camp and 11 other decorated veterans take you through their experiences – intense, traumatic battles they fought and won – sharing the lessons learned from those incredible challenges.
Jason and his cadre of scientists further break down those experiences, translating them into digestible and relatable action items, showing the average person how they can apply them to their own lives and businesses. The book is “gripping. Authentic. Engaging… prodigiously researched, carefully argued and gracefully written,” said Frank Abagnale, Jr., world-renowned authority on forgery (and also the author of Catch Me If You Can). It’s a heart-pounding read that will keep you turning the pages and wanting to immediately apply the lessons to your own life. In addition to writing books, running a company and being just a badass in general, Van Camp also has a soft spot in his heart for the veteran community. He founded Warrior Rising, a nonprofit that empowers U.S. military veterans and their immediate family members by providing them opportunities to create sustainable businesses, perpetuate the hiring of fellow American veterans, and earn their future. From the battlefield to the football field to the boardroom, with such an elite mission, it’s easy to see why Mission Six Zero is such an effective organization.

One of the most effective hand-to-hand combat techniques taught today — and one that has become closely identified with the Jewish state that embraced it — Krav Maga was a product of the Nazi-era streets of pre-World War II Czechoslovakia. The martial art’s inventor, Imi Lichtenfeld, was quite the athlete. Born in Budapest in 1910, he spent his early years training as a boxer, wrestler and gymnast with his father. The elder Lichtenfeld was also a policeman who taught self-defense. Under his father’s tutelage, Imi won championships in all his athletic disciplines. But fighting in a ring required both people to follow certain rules. Street fights don’t have rules, Imi Lichtenfeld thought, and he wanted to be prepared for that. At the end of the 1930s, anti-Semitic riots struck Bratislava, Czechoslovakia, where Imi and his family were then living.
As in many large cities in the region, the rise of National Socialism, or Nazism, created an anti-Jewish fervor that took young men to the streets to assault innocent and often unsuspecting Jews. When the streets of his neighborhood became increasingly violent, Lichtenfeld decided to teach a group of his Jewish neighbors some self-defense moves. It came in the form of a technique that would help them protect themselves while attacking their opponent – a method that showed no mercy for those trying to kill the Chosen People. Young Imi taught his friends what would later be called “Krav Maga.” Translated as “contact-combat” in Hebrew, Krav Maga is designed to prepare the user for real-world situations. The martial art efficiently attacks an opponent’s most vulnerable areas to neutralize him as quickly as possible, uses everything in arm’s reach as a weapon, and teaches the user to be aware of every potential threat in the area. It developed into one of the most effective hand-to-hand techniques ever devised. Krav Maga’s widespread use began in the Israel Defence Forces, which still train in the martial art. These days, Krav Maga is a go-to fighting style widely used by various military and law enforcement agencies. In 1930s Europe, it was a godsend. Lichtenfeld’s technique taught Bratislava’s Jews how to simultaneously attack and defend themselves while delivering maximum pain and punishment on their attackers. Imi Lichtenfeld escaped Europe in 1940 after the Nazis marched into Czechoslovakia. He arrived in the British Mandate of Palestine in 1942 (after considerable struggles along the way) and was quickly inducted into the Free Czech Legion of the British Army in North Africa. He served admirably and the Haganah and Palmach – Jewish paramilitary organizations that were forerunners of what we call today the Israel Defence Forces – noticed his combat skill right away.
After Israel won its independence, Lichtenfeld gave his now-perfected martial art of Krav Maga to the IDF and became the Israeli Army’s chief hand-to-hand combat instructor. He even modified it for law enforcement and civilians. Lichtenfeld taught Krav Maga until 1987 when he retired from the IDF. He died in 1998, after essentially teaching the world’s Jewish population how to defend themselves when no one would do it for them.
New research shows program effective in educating parents about prevention of shaken baby syndrome
March 2, 2009, in Medicine & Health / Other

New studies in the United States and Canada show that educational materials aimed at preventing shaken baby syndrome increased knowledge of new mothers about infant crying, the most common trigger for people abusing babies by shaking them. The study of mothers in Seattle is featured in the March issue of Pediatrics, and a partner study in Vancouver, British Columbia, appears this month in The Canadian Medical Association Journal. Each year in the United States, an estimated 1,300 infants are hospitalized or die from shaken baby syndrome. One in four of these babies will die as a result of their injuries, and among those who survive, approximately 80 percent will suffer brain injury, blindness and deafness, fractures, paralysis, cognitive and learning disabilities, or cerebral palsy. "Typically, crying begins within two weeks of birth, so it's imperative that new parents receive information and learn coping strategies early," says Fred Rivara, MD, MPH, co-author of the Seattle study. Dr. Rivara is an investigator at the Harborview Injury Prevention and Research Center and vice-chair of pediatrics at the University of Washington. Both studies were randomized controlled trials testing "The Period of PURPLE Crying," an educational program that includes a 12-minute DVD and information booklet. In Seattle, Dr. Rivara was joined by Dr. Ronald Barr, lead author of both studies and director of community child health at the Child & Family Research Institute and professor of pediatrics in the Faculty of Medicine at the University of British Columbia. The Seattle study involved 2,738 mothers of new infants. Half the women enrolled in the study received the PURPLE materials while half received information on infant safety.
Mothers who received the PURPLE materials scored 6 percent higher in knowledge about crying and 1 percent higher in knowledge about shaking. They were 6 percent more likely to share information with caregivers about strategies for coping with the frustration of infant crying, and 7 percent more likely to warn caregivers of the dangers of shaking. Like their American counterparts, Vancouver mothers who received the PURPLE materials scored 6 percent higher in knowledge about crying; they were 13 percent more likely to share information with caregivers about coping with inconsolable crying, 12.9 percent more likely to share information about the dangers of shaking, and 7.6 percent more likely to share information about crying. "Changing knowledge is a critical first step in changing behavior, and this is important public health work because the results show it's possible to change people's ideas about crying," said Dr. Barr. PURPLE materials are designed to teach parents that crying is normal and frustrating for caregivers, and they list the following features as typical:
• Peak pattern – crying increases, peaks in the second month, then declines
• Unexpected timing of prolonged crying
• Resistance to soothing
• Pain-like look on the face
• Long crying bouts
• Evening and late afternoon clustering

Source: University of Washington – Harborview Medical Center, "New research shows program effective in educating parents about prevention of shaken baby syndrome," March 2, 2009, http://phys.org/news/2009-03-effective-parents-shaken-baby-syndrome.html
Gordon Mumbo grew up in the small village of Kamuga, in Kenya’s Kisumu County. Year after year, he watched as frequent floods from one of Kenya’s major rivers, the Nyando, disturbed the peaceful flow of village life. “In school we were reading about how the Dutch were able to control floods and reclaim land,” Mumbo says. “So I grew up wanting to be a water engineer and solve the flooding.” After more than thirty years and two degrees in water, Mumbo has lived out that dream, enjoying a long career in water engineering. His work has taken him from Kenya’s capital of Nairobi to Honolulu, Hawaii, to the banks of the Nile River. But now, for the first time since his childhood, Mumbo has returned to his birthplace: Kisumu County. Mumbo’s current work as a regional Team Leader for the Sustainable Water Partnership (SWP), USAID’s flagship water security program, deals not with the Nyando River, but with the Mara. Like its sister the Nyando, the Mara River empties into Lake Victoria – but the path it takes there is a bit more complicated. Beginning at the Mau Escarpment – an impressive cliff along the western edge of Kenya’s Great Rift Valley – it meanders through Kenya and its neighbor Tanzania, contributing to food production, economic security and even tourism for both countries. But unlike the Nyando of Mumbo’s childhood, the Mara isn’t overflowing – it’s at risk of running dry. The reasons are manifold. Destruction of the Mau Forest, which stores much of the river basin’s water, depletes the region’s supply. Population growth and economic development increase demand for food and water, which leads to land use change. “People around the Mara on the Kenya side were typically pastoralists, but this is changing,” Mumbo explains. “They are getting more into agriculture.” But as climate change and variability make rainfall inconsistent and unpredictable, many are turning to irrigated agriculture – which, again, puts a strain on the Mara’s water availability.
“This activity is really coming at the right time to address these issues, before they get out of control,” Mumbo says. The activity he refers to, of course, is the Sustainable Water for the Mara effort, SWP’s three-year project in the basin. The activity comes on the heels of a historic Memorandum of Understanding between Kenya and Tanzania, agreeing to cooperate on the basin’s management. This step is not only groundbreaking but rare, as Mumbo knows well. Before coming to SWP, he worked with the United Nations in the Nile River Basin, attempting to bring nine countries together to collaborate. “It’s sometimes very difficult to get different countries to think together,” he says. “We struggled for six years to have a cooperative agreement among the countries. But it’s still dragging.” But in Kenya and Tanzania, the spirit of collaboration is palpable. They have agreed to develop a transboundary Water Allocation Plan, which will simplify the process of managing the Mara. “When you come to the Mara, it’s so critical, and the two countries can clearly see that they need to sustain this river,” Mumbo says. “So Kenya and Tanzania coming together to set out the framework on how to manage the basin, it’s really a good move.” It’s not just the atmosphere of transboundary cooperation that sets SWP’s work apart – it’s also the sense of ownership Mumbo and his team are cultivating in the people of the Mara, from community members to government officials to private sector representatives. “I think the work we are going to do in the Mara could be a model that could be copied in many basins,” Mumbo says.
“And really the model will be the community – the stakeholders – being in the driving seats in managing the resource.” As Team Leader, Mumbo is already shoring up efforts to build this sense of ownership. The team is reaching out to grassroots populations, working with community members to learn what they see as the Mara’s greatest barriers to water security. According to Mumbo, this knowledge-sharing exercise is a two-way street: SWP educates communities on water risk and conservation, while the communities provide invaluable local perspective. “The river belongs to the people who live along it,” Mumbo concludes. “They understand the river better than anybody else. They will be able to own it and work with you at sustaining it. If you want to manage the river, you must involve the people.” In Mumbo’s childhood home of Kamuga, dikes now stand along the banks of the Nyando River, shielding the community from the floods that once threatened it. He didn’t build them himself, but Mumbo’s work in the Mara is building something else: a legacy of cooperation for a more water-secure world. To learn more about Mumbo and his work in the Mara, watch a recording of SWP’s Sofa Session at World Water Week here. Photos by Bobby Neptune.
Black History Month, or African American History Month, celebrated each February, is a time to commemorate the achievements and cultural contributions of African Americans. It is also a time to recognize the health disparities between African Americans and whites. Communities of color are more likely than white communities to suffer from socioeconomic inequities, including exclusion from health resources, educational opportunities, and social and economic resources. These exclusions can lead to poor physical health outcomes for black Americans. The COVID-19 pandemic has further shined a light on the disparities among racial groups. For instance, according to the Centers for Disease Control and Prevention (CDC), African American and Hispanic or Latino people are 4.7 times more likely than non-Hispanic whites to be hospitalized for COVID-19. COVID-19 may be a more serious problem for the African American population due to the following factors:
- Underlying health conditions
- Dense living conditions
- Employment in the service industry
- Employment as essential workers
- Limited access to health care

Although the death rate for African Americans overall has declined by 25 percent since the early 2000s, the CDC reports that African Americans are still more likely to have a shorter lifespan than other races in the United States. In addition, diseases and conditions found in later life among whites tend to appear at younger ages in the African American population.
Among them are the health problems affecting a greater percentage of black Americans than whites, including:
- heart disease
- kidney disease

Social Factors and Health

Social factors that have a negative effect on health also are more common among African Americans than whites, such as:
- Not owning a home
- Inaccessibility to quality health care
- Lack of health-care coverage or experiencing gaps in coverage
- Inactive lifestyle

Detailed information can be found on the following topics:
- Health Insurance: The Affordable Care Act (ACA), signed into law by President Barack Obama in 2010, has helped decrease the number of uninsured African Americans by 2.8 million people, but the high cost of coverage still has kept many people from becoming insured. For information on obtaining health insurance in California at reduced rates or for free, visit the Covered California website or call 866-761-4165. For those 65 and older, visit the Medicare website or call 800-MEDICARE (800-633-4227) to apply for coverage or to learn more about Medicare benefits and enrollment.
- Health Equity: For information on issues of health equity, visit the CDC website https://www.cdc.gov/healthequity/features/african-american-history/index.html, which contains a list of common health issues affecting African Americans and ways to live a healthier life. For an extensive list of resources by health topic, visit the Food and Drug Administration’s (FDA) Minority Health Resources website.
- Advocacy: The California Black Health Network (CBHN) is a black-led, statewide organization dedicated to advocating for health equity for black immigrants and African Americans. It offers health information, webinars, health insurance information, and legislation of interest to the black community. Visit the CBHN website to learn more.
Herrick Library Resources Honoring African Americans

The following books are in the Herrick Library collection and can be reserved and checked out via curbside pickup or through the OverDrive app, where indicated*:
- Believing in Magic: My Story of Love, Overcoming Adversity, and Keeping the Faith, by Cookie Johnson
- Black Man in a White Coat: A Doctor’s Reflections on Race and Medicine, by Damon Tweedy
- Cross-Cultural Medicine, edited by JudyAnn Bigby (medical DVD)
- Every Day I Fight, by Stuart Scott
- *Game Changer: John McLendon and the Secret Game, by John Coy
- I Beat the Odds: From Homelessness to the Blind Side and Beyond, by Michael Oher
- The Immortal Life of Henrietta Lacks, by Rebecca Skloot
- *Muhammad Ali: The Greatest, by Matt Doeden
- *My Family Celebrates Kwanzaa, by Lisa Bullard
- *The Unapologetic Guide to Black Mental Health: Navigate an Unequal System, Learn Tools for Emotional Wellness, and Get the Help You Deserve, by Rheeda Walker
- The Doctor with an Eye for Eyes: The Story of Dr. Patricia Bath, by Julia Finley Mosca (picture book)
- Hair Love: A Celebration of Daddies and Daughters Everywhere, by Matthew A. Cherry (picture book)
- *The Vast Wonder of the World: Biologist Ernest Everett Just, by Melina Mangal (picture book)
- Skin Like Mine, by LaTashia Perry (picture book)
- Whoosh!: Lonnie Johnson’s Super-Soaking Stream of Inventions, by Chris Barton (picture book)

Sources: American Medical Association, 7 Ways to Improve Black Health—In Mind and Body, https://www.ama-assn.org/delivering-care/population-care/7-ways-improve-black-health-mind-and-body; CDC, Health Equity: Celebrate African American History Month!, https://www.cdc.gov/healthequity/features/african-american-history/index.html; CDC, Vital Signs: African American Health, https://www.cdc.gov/vitalsigns/aahealth/index.html; Mayo Clinic, Coronavirus Infection by Race: What’s Behind the Health Disparities?, https://www.mayoclinic.org/diseases-conditions/coronavirus/expert-answers/coronavirus-infection-by-race/faq-20488802; Pfizer, Health Disparities Among African Americans, https://www.pfizer.com/news/hot-topics/health_disparities_among_african_americans

Graphics: Can Stock Photo
The Bill of Rights refers to the first ten Amendments to the United States Constitution. The Second Amendment codifies a citizen’s right to keep and bear arms. This amendment has shaped the actions of the police because it has made them hesitant to take away people’s guns even when there is justifiable reason to do so. For example, in the summer of 2019, a mom called the police in El Paso, Texas. She was worried about her son and the firearms that he owned. The police, citing the law, did not take action. Weeks later, the son opened fire at a local Walmart. He killed 22 people and injured more than 20 others. As for the Supreme Court, it has made it more difficult for police to look through the cellphone of a person under arrest. In 2014’s Riley v. California, the Supreme Court ruled that police must first secure a warrant before they go through an apprehended person’s cellphone. When it comes to seizures and fines, the Supreme Court has issued rulings that make it more difficult to levy harsh financial penalties against people and seize their assets. In 2019’s Timbs v. Indiana, the Supreme Court issued a ruling that prevents the nation’s police departments from excessively fining people or rapaciously taking their property. For some commentators, Timbs v. Indiana represents a change in philosophy at the Supreme Court. It signals that the Court is ready to tackle police overreach and confront policies affiliated with America’s history of racism.
The United Nations Children’s Fund (UNICEF) organized a hackathon dubbed “Partnership with Youth to Achieve the Sustainable Development Goals.” The competition sought to explore and develop solution-oriented approaches to tackling pertinent issues around the SDGs, focusing on SDGs 3 (Health), 4 (Education), 5 (Gender Equality), 6 (Water and Sanitation) and 8 (Employment). The program also aimed to show how young people are using the tools at their disposal to solve social problems and achieve the Sustainable Development Goals. The initiative, held in Tamale, Ghana, provided a platform for young people with relevant technical skills in computer programming, software development, website creation, graphic design, engineering and related fields to create solutions and social interventions. The UN agency believes that with education, skills and empowerment, these young people can help transform economies and nations; but a fast-changing global economy demands increasingly specialized and innovative skills at a time when many education systems are struggling – a challenge the hackathon sought to address by inspiring and recognizing innovations developed by young people that provide solutions to social problems and help achieve the Sustainable Development Goals. The two-day program at HOPin Academy in Tamale groomed young innovators, guiding them through strategic design-thinking approaches and enabling them to better understand the SDGs they had chosen. Groups of participants pitched their innovations to a team of judges, and selected teams (Health Maniachs, SaniTech and Techducation) went on to compete in the National Hackathon Challenge held in Accra, Ghana.
SaniTech emerged second in the National Hackathon Challenge with the following innovations:
Bricks made of plastic: bricks made by mixing sand with 1,500 pieces of melted plastic (mostly water sachets) to form quality solid bricks for building.
Pillows made out of empty water sachets: empty water sachets shredded to soften them, then stuffed into beautifully wrapped African fabric to make pillowcases.
Smart Bin: an automated bin that opens on approach, saving individuals the inconvenience of opening a bin by hand when their hands are full.
SaniTech’s innovations sought to make the environment cleaner while also providing quality services that save individuals time and effort.
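The Smart Bin's open-on-approach behavior can be sketched as a small state machine driven by a proximity sensor. This is a hypothetical illustration only, not SaniTech's actual firmware; the class name, threshold and hold values are all assumptions made for the sketch:

```python
# Hypothetical sketch of a Smart Bin's control logic (not SaniTech's
# actual implementation): a lid state machine driven by proximity readings.

OPEN_THRESHOLD_CM = 50   # open when someone is closer than this (assumed value)
HOLD_TICKS = 3           # keep the lid open this many clear readings before closing

class SmartBinLid:
    def __init__(self):
        self.is_open = False
        self._clear_ticks = 0

    def update(self, distance_cm: float) -> bool:
        """Feed one sensor reading; returns whether the lid is open."""
        if distance_cm < OPEN_THRESHOLD_CM:
            self.is_open = True
            self._clear_ticks = 0      # someone is nearby: reset the hold timer
        elif self.is_open:
            self._clear_ticks += 1     # count consecutive clear readings
            if self._clear_ticks >= HOLD_TICKS:
                self.is_open = False   # no one nearby for a while: close
        return self.is_open
```

On real hardware the readings would come from something like an ultrasonic sensor and the lid would be driven by a servo; the short hold period keeps the lid from snapping shut the moment a single reading flickers.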
Some cities are sinking as rising seas slowly encroach on their coasts; others are sinking because excessive groundwater pumping changes underground pressure and volume, causing the land to subside. Here are 11 sinking cities that are in danger of disappearing.

1. Jakarta, Indonesia
Jakarta is sinking up to 6.7 inches per year due to excessive groundwater pumping. Much of the city could be underwater by 2050. The Indonesian government recently approved a plan to move the capital 100 miles away from its current location on the island of Java in order to protect its 10 million residents from more flooding. The move would take about 10 years and cost $33 billion.

2. Lagos, Nigeria
Lagos' low coastline continues to erode, and rising seas caused by global warming put Africa's largest city in danger of flooding. A 2012 study from the University of Plymouth found that a sea level rise of three to nine feet would "have a catastrophic effect on the human activities in these regions." Global sea levels are expected to rise 6.6 feet by the end of this century.

3. Houston, Texas
Parts of Houston are sinking at a rate of 2 inches per year due to excessive groundwater pumping.

4. Dhaka, Bangladesh
Bangladesh produces 0.3% of the emissions that contribute to climate change, but the country is facing some of the biggest consequences of rising sea levels, according to The New York Times. Oceans could flood 17% of Bangladesh's land and displace about 18 million of its citizens by 2050.

5. Venice, Italy
Venice is sinking at a rate of 0.08 inches every year. In 2003, Italy began building a flood barrier, known as Mose, consisting of 78 gates across the city's three inlets. The barrier was supposed to be completed in 2011 but will likely not be ready until 2022. When a series of storms hit Venice in 2018, the $6.5 billion project was still incomplete. The flooding was the worst the city had seen in a decade.

6. Virginia Beach, Virginia
Virginia Beach has one of the fastest rates of sea-level rise on the East Coast, factoring in both rising water levels and sinking land. The National Oceanic and Atmospheric Administration estimates that Virginia Beach could experience nearly 12 feet of sea level rise by 2100.

7. Bangkok, Thailand
Bangkok is sinking at a rate of more than 1 centimeter a year and could be below sea level by 2030, according to The Guardian. To help prevent flooding, especially during Thailand's summer rainy season, an architecture firm built Chulalongkorn University Centenary Park, an 11-acre park that can hold up to 1 million gallons of rainwater.

8. New Orleans, Louisiana
Parts of New Orleans are sinking at a rate of 2 inches per year and could be underwater by 2100, according to a 2016 NASA study. Some parts of New Orleans are already 15 feet below sea level, and the city's location on a river delta increases its exposure to sea-level rise and flooding.

9. Rotterdam, The Netherlands
According to The New York Times, 90% of the city of Rotterdam is below sea level. As ocean levels rise, the risk of flooding increases. Like Bangkok with Chulalongkorn University Centenary Park, the Dutch have built "water parks" that double as reservoirs for swelling water levels in a project called Room for the River, along with enormous storm surge barriers.

10. Alexandria, Egypt
Alexandria's beaches have been disappearing as sea levels continue to rise. The Mediterranean Sea could rise as much as 2 feet by 2100, according to NPR.

11. Miami, Florida
Environmental author Jeff Goodell previously told Business Insider that "there's virtually no scenario under which you can imagine [Miami] existing at the end of the century" and referred to it as "the poster child for a major city in big trouble." Miami's sea levels are rising at faster rates than in other areas of the world, resulting in floods, contaminated drinking water, and major damage to homes and roads. The city may soon have to raise its structures to stay above water.
Note from Jan: More good news about enzymes. This is why we carry Plant Enzymes. We have seen the difference in our personal health. You will also notice that Plant Enzymes carries many of the suggested enzymes. Article continues ...

The body is a cauldron of chemical reactions, and each such reaction requires the services of an enzyme. In the natural products marketplace, enzymes have become known as little facilitators of digestion, breaking down nutrients into their usable parts. Very specific in nature, each enzyme acts on a specific type of chemical, called a substrate, in what is commonly referred to as a lock-and-key relationship—the enzyme’s shape fits only its intended substrate. This is why people with insufficient production of the enzyme lactase struggle to break down lactose after ingesting dairy products. By nature, many of the foods we eat contain the enzymes necessary to digest that food. However, many variables, such as heating, can deplete these food-borne enzymes. Also, while the body makes its own menu of enzymes, this production can be negatively affected by aging and other health factors. Supplementation has become one favored method of ensuring the proper enzyme mix in the body; these enzymes can be sourced from animals, plants, fungi or microbes. Digestive health supplements have dominated the enzyme marketplace, but mounting evidence on therapeutic uses of enzymes, as well as growing demand for functional products, is expanding this market to new frontiers. It all starts with digestion. For the most part, digestive enzymes hydrolyze (use water to break down) nutrient substrates. In simple categories, amylase enzymes break down carbohydrate substrates, lipases handle fats and proteases take care of proteins; other types include cellulases, pectinases and xylanases.
The proteases, also known as proteolytic enzymes, include the pancreatic enzymes chymotrypsin and trypsin, as well as the plant-derived bromelain from pineapple and papain from papaya. Saliva in the oral cavity initiates digestion, secreting amylase to break starch into simple sugars and lipase for fat breakdown. In the stomach, the protease pepsin is secreted and works on food proteins, breaking them into peptides and amino acids. Then amylase and lipase, in addition to the endogenous proteases, are secreted by the pancreas, which supplies enzymes to the small intestine for further digestion. There, lipase breaks triglycerides into fatty acids and amylases reduce dietary starches into various saccharides that are then turned into glucose by other enzymes. Chymotrypsin hydrolyzes peptide bonds at amino acids such as tryptophan and tyrosine, resulting in smaller peptides, while trypsin tackles bonds at aminos including arginine and lysine. Other pancreatic proteases further break down these dietary nutrients. People may focus on one area of digestion that troubles them, such as indigestion due to lactose intolerance. For this, lactase supplementation has become a popular remedy. In populations where meat is a staple of the diet, help with digestion of meat proteins is desired. Papain and bromelain are both meat tenderizers, thanks to their ability to handle tough meat fibers, and they are also popular after-meal enzyme supplements. There are also small amounts of other enzymes in papaya, including an amylase (for starch digestion) and a lipase (for fat digestion). The Plant Enzymes formula includes amylase and protease to enhance the digestive function of the product. A preference for a range of enzymes for improved digestion is taking shape. There are three major advantages to enzyme blends in digestive health.
The first advantage is that the molecules targeted for reduction by enzymes are often complex, so it may be more efficient and rapid to target multiple bonds for degradation with different enzymes rather than a single bond with a single enzyme. The second is that the molecule targeted for reduction by an enzyme is rarely consumed by itself; usually it is embedded in a matrix of other foods, so adding additional enzymes to break down those other interfering foods should, in theory, allow the main enzyme to work more efficiently on its target molecule(s). The third is that the degradation products of the target molecule are sometimes as much of a problem in digestive health as the targeted molecule itself, or more, so to produce the desired outcome it is necessary to add enzymes that break down both the targeted molecule and these degradation products. The Plant Enzymes formula contains nine plant-based enzymes that help digest proteins, fats, carbohydrates, starches, sugars and virtually all foods. As a core product, Plant Enzymes is what enables all of our other products to be so effective. It is important to understand that the best way to achieve optimal health is by having high enzyme levels in the body and conserving the body's enzyme levels. When the body has an abundance of enzymes, it can protect itself and repair the damage from virtually all degenerative diseases. Enzymes are a key piece of the puzzle of life, because they make all the other pieces work. Enzymes are the very life force that activates vitamins, minerals, proteins and other physical components within the body; those components can't do their work without enzymes! Most health products focus on giving the body the materials it needs. We need water, vitamins and ionic minerals. We also need to give the workforce of the body back its power and ability to function more effectively.
Our Plant Enzymes take the drain off this workforce and give it the chance to regain its strength and focus. Taking Plant Enzymes will help the body develop a stronger disease-fighting capability and increase its ability to mend and heal itself. Indeed, digestive enzymes can do as much for the human body as any other health product available. The original digestive enzyme product was probably basic animal pancreatin, a blend of several proteolytic enzymes plus lipase and amylase. The introduction of microbial enzymes enabled more variety in enzymatic activity, and a variety of these enzymes are now commonly combined. There are still some targeted products that contain only one or two enzymes, such as lactase, which is marketed to manage lactose intolerance, or alpha-galactosidase, sold as the familiar BEANO® product [from GlaxoSmithKline] and used to control gas and bloating. Many people are also adopting diets that emphasize one macronutrient over the others; enzyme blends that focus on protein digestion, for example, are becoming more commonplace to facilitate optimal digestion of a high-protein diet. Where lactose intolerance has been well addressed by digestive enzymes, gluten intolerance is only beginning to pick up steam. In late 2010, scientists from the University of Salerno and the European Laboratory for the Investigation of Food-Induced Diseases at the University of Naples Federico II, Italy, published a report in Enzyme Research on the various methods being explored to use enzymes in the detoxification of gluten and its protein constituent gliadin. Enzymes can break down the gluten peptides that, in people with gluten intolerance, resist the body's endogenous proteases.
However, these enzymes can be deactivated in the acidic stomach environment, and encapsulating them does little good, as they then cannot efficiently break down gluten peptides before the peptides reach the small intestine, where they cause the most trouble. A prolyl endoprotease derived from food-grade Aspergillus niger has been found to degrade gluten peptides and intact gluten proteins in the stomach; this endoprotease has an optimal pH suited to the stomach environment and is resistant to degradation by pepsin. In addition, a lyophilized powder combining a glutamine-specific endoprotease and a prolyl-specific endoprotease from Sphingomonas capsulata also survives the stomach and detoxifies gluten peptides before they reach the intestines. Enzymatic approaches to gluten intolerance are certainly attracting innovative researchers, as are theories on systemic mechanisms and benefits from enzyme supplementation. In the immune system, enzymes are prized for their ability to break down proteins; many would-be pathogens, such as bacteria and the outer shells of viruses, are protein-based. Proteases can break down such protein invaders, and they can also help activate immune cells. In 2001, scientists from the London School of Hygiene and Tropical Medicine detailed their in vitro and in vivo research on bromelain and immune response. The mixture of cysteine proteases enhanced T cell and B cell immune responses, including antibody stimulation. They noted bromelain was able to both enhance and inhibit T cells, indicating a possible role in immune management. Their other work showed bromelain can stimulate macrophages and natural killer (NK) cells.
Further, German researchers reported in 2005 that an in vitro study of bromelain and trypsin on human mononuclear cells revealed the pineapple protease mix, but not trypsin, activated monocytes and macrophages (two early-response immune cells) as well as interferon (which "interferes" with pathogens) and TNF-alpha (which destroys cells). They noted bromelain achieves these effects independently of the underlying disease and may therefore stimulate both the innate and adaptive immune responses. Proteases can also inhibit immune system malfunctioning. For example, when an antibody attaches to an antigen (the substance triggering the antibody response), it forms an immune complex. These complexes can lead to the destruction of the antigen, but they can also deposit in the organs and lead to autoimmune diseases, including systemic lupus, rheumatoid arthritis and Sjögren's syndrome. However, proteases can cleave immune complexes, neutralizing this autoimmune threat. In 2002, Danish researchers tested several proteases (trypsin, papain, chymotrypsin and bromelain) on immune reactivity in both pre-diabetics and patients with recent-onset type 1 diabetes. The proteases showed immunomodulatory actions, indicating a potential role in the management of chronic inflammatory diseases. University of Connecticut researchers published results in 2008 showing oral treatment with bromelain had a therapeutic effect in an animal model of asthma and may similarly affect human asthma; the enzyme appeared to reduce inflammation in the airway. Bromelain has also demonstrated anti-tumoral and anti-leukemic activity, in addition to decreasing the number of lung metastases in one animal trial. University of Oxford researchers reviewed the literature on bromelain and cancer, suggesting several possible mechanisms, including a direct impact on cancer cells and their microenvironment, as well as modulation of the immune, inflammatory and haemostatic (anti-hemorrhagic) systems.
The effect of proteases on the flow and health of blood factors matters not only in cancer but also in cardiovascular health. Excess fibrin can promote inflammation and plaque formation in blood and lymph vessels, blood clots, and hardened tissue around varicose veins. Many proteases are considered fibrinolytic, including the primary blood-borne enzymes plasmin and thrombin. The fibrinolytic system is involved in thrombosis, arteriosclerosis, endometriosis and cancer. Early studies showed bromelain therapy promotes the conversion of plasminogen to plasmin, thereby increasing fibrinolysis, the process of breaking down fibrin. Bromelain can help eliminate thrombosis in heart patients through fibrinolytic and anti-thrombotic actions, and it can reduce human platelet aggregation and platelet adhesion to endothelial (blood vessel) cells. Nattokinase, a protease derived from natto (fermented soy), has also demonstrated fibrinolytic and thrombolytic properties. Japanese researchers have led the way on this research, finding nattokinase directly cleaves cross-linked fibrin in vitro and is substantially more effective at restoring blood flow via thrombolytic activity than plasmin. Research on the effects of enzymes on fibrin, thrombosis and vascular health is ongoing, as other fibrinolytic enzymes from natto have recently been identified. A 2007 Japanese study indicated a novel protease from fermented soy was superior to nattokinase in dissolving fibrin when absorbed into the blood. Fibrinolytic enzymes have also recently been discovered and extracted from a variety of sources, including mushrooms, chives and viper venom. Serratiopeptidase, which helps silkworms digest their cocoons, is another fibrinolytic enzyme eyed for systemic benefits. While the research is scant and relatively inconclusive, early results showed serratiopeptidase is anti-inflammatory and anti-edemic, and recent evidence suggests this enzyme can reduce swelling and pain after surgery.
Superoxide dismutase (SOD), derived from cantaloupe, can be another powerful enzymatic tool against systemic disease, but it struggles to survive the upper GI tract. However, recent technology has combined SOD with a wheat gliadin biopolymer to protect it through the GI tract. With this protection, SOD has demonstrated various antioxidant benefits. A 2005 report in Phytotherapy Research noted SOD supplementation promotes cellular antioxidant status and protects against oxidative stress-induced cell death. Other research showed GliSODin protects against DNA damage, ultraviolet radiation damage to skin, and stress-induced cognitive impairment. Further, researchers studying serum SOD activity alongside risk factors of atherosclerosis and intima-media thickness (arterial wall thickening due to oxidized low-density lipoprotein [LDL] cholesterol trapped in the wall) found that low serum SOD levels correlated with metabolic syndrome and intima thickness. Enriching Gifts recommends that enzymes taken for systemic health benefits be taken on an empty stomach, so they are not used up digesting food and can instead reach the other parts of the body where they can be systemically beneficial. As stated, enzymes can come from the diet or from production in the body, but many factors can affect enzyme survivability. Heat and premature activation are two big factors that can degrade or denature enzymes, which makes any processed dietary product a challenge. Humidity can lead to early activation, which makes functional beverages, or any liquid-based product, difficult to produce. There is also a risk, in both formulated products and the body, that an enzyme will encounter its proper substrate and engage it sooner than desired; some nutritional compounds, including drugs, can mimic the shape of a given substrate and use up the enzyme before it accomplishes its designed mission.
With increasing advances in coating capabilities, the use of orally delivered enzymes for more novel applications will certainly grow. Such technology is still in its infancy and is inspired by the desire to incorporate the benefits of supplemental enzymes into conventional foods and beverages. Even now, some products are time-released to deliver specific enzymes to select points in the digestive tract to maximize their digestive capability. Products sold for systemic purposes, where maximum enzymatic activity needs to be available in the small intestine for absorption, can profit substantially from this type of coating system. These products go beyond standard enteric coating, which has drawbacks as far as predictable release of the active enzyme. Education remains central to the enzyme market. While many loyal, longtime customers may be in tune with the actions and benefits of enzymes, new customers might have more limited knowledge and want to learn how enzymes work. Enriching Gifts is determined to improve consumer knowledge of the use and benefits of digestive enzymes; this is one of the reasons we are so focused on our Wednesday evening tele-conferences. A nice advantage of digestive enzymes, compared to other dietary supplements, is that they can deliver noticeable benefits in a short period of time. Many people feel results with the first meal or first couple of meals, as the enzymes begin working immediately on contact with food. On the other hand, much like other therapeutic supplements, systemic enzymes usually require more time to produce the responses people are looking for. But once results are achieved, typically in days to a couple of weeks, real positive changes begin for that person, and in many cases those benefits persist for a time even after the product is discontinued. Excerpted from an article by Steve Myers
We live in a world of uncertainties, so we have to prepare ourselves for several "what-if" scenarios. The same is true when securing our IT solutions: we have to think through different possibilities and ensure effective mechanisms are implemented. Blockchain is a technology with inherent security features that help ensure the integrity of transactions and related data. Each blockchain protocol uses a different consensus mechanism to ensure the sanctity of the shared ledger, and blockchain relies heavily on public-private key infrastructure and cryptography to authenticate and securely handle transactions submitted by different parties. This raises some interesting "what-if" questions:
- What if your private keys get stolen and misused for signing blockchain transactions?
- What if a hacker impersonates your identity and posts transactions to the blockchain on your behalf?
- What if you want to participate in a transaction but don't want to reveal or share confidential information that represents your identity?
Security is a vast field and there can be several solutions, each with its own benefits and trade-offs. We will describe one such technique, called "Secured Multi-Party Computation", which is aligned to the decentralized and distributed model of blockchain.
What is Secured Multi-Party Computation?
In a democratic world, we rely on mechanisms in which all concerned parties are consulted and heard before important decisions are taken. Multi-Party Computation (MPC) embodies this philosophy: two or more parties jointly compute an output by combining their individual inputs, and the combined output can then be used to take important actions such as executing transactions on blockchain. MPC also ensures that the private inputs of each party are kept confidential, thus adding another dimension of Zero Knowledge Proof (ZKP) as described in one of my earlier blogs, Establishing Blockchain Privacy through Zero Knowledge Proof.
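The core MPC idea of jointly computing an output while keeping each party's input private can be illustrated with additive secret sharing, one of the simplest MPC building blocks. This is a minimal sketch for intuition only, not the protocol of any particular product; all names are illustrative. Each party splits its private value into random shares that sum to the value, hands one share to every party, and only per-party share totals are ever published.

```python
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def make_shares(value, n_parties):
    """Split `value` into n additive shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def mpc_sum(private_inputs):
    """Jointly compute a sum: each party shards its input and distributes
    the shares; each party then publishes only the total of the shares it
    holds, and those partial totals reveal nothing but the final sum."""
    n = len(private_inputs)
    # share_matrix[i][j] = share of party i's input held by party j
    share_matrix = [make_shares(v, n) for v in private_inputs]
    partial_totals = [sum(row[j] for row in share_matrix) % PRIME
                      for j in range(n)]
    return sum(partial_totals) % PRIME

# Three parties with confidential inputs; only the combined sum is learned.
print(mpc_sum([120, 45, 300]))  # 465
```

Any single share, or any single party's partial total, is uniformly random on its own, which is the "input privacy" property described below; correctness holds because all shares of an input sum back to that input modulo the prime.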
MPC solutions must adhere to two main principles:
- Input privacy – the private data held by the parties collaborating to build a combined output cannot be inferred or deduced.
- Correctness – the output obtained is always correct, and parties should not be able to force an incorrect output.
MPC works on the assumption that all concerned parties can communicate over a secure and reliable channel. Each party exchanges an encrypted version of its private input, which undergoes computational operations to build the desired output. MPC systems also need to account for the possibility that certain parties are dishonest (adversaries); implementation complexity is directly proportional to the type of adversaries (partially or fully dishonest) expected in a particular use case.
How can MPC help secure blockchain solutions?
Some of the key use cases where MPC can help enhance the security and privacy of blockchain-based solutions are:
- Protecting identity wallets – Blockchain transactions are signed by end users with their private keys, which represent the identity of the person or entity submitting the transaction. The loss or theft of a private key can have a huge impact, and that's where MPC can help: by sharding the key and reconstructing it dynamically by combining the input of all parties. Even if one party is compromised, the blockchain transaction can't be executed using that shard alone. This approach is more secure than using an HSM (Hardware Security Module), which is used to store and protect private keys.
- High-value transactions – There are several scenarios involving high-value transactions in which multiple parties must provide their consent/approval before the transaction is executed. MPC can be used in tandem with this approval workflow to ensure that the output constructed from each participant's private input is required to process the transaction on blockchain.
This can be further augmented by taking an "M of N" approach, where at least M participants out of a total of N need to provide their private input. An alternative to this approach is multi-sig (multiple signature addresses), which is available in a few blockchain protocols; MPC, however, is an entirely software-based solution and is platform agnostic.
- Transaction privacy & confidentiality – Typically, blockchain protocols rely on broadcasting transactions to all participating nodes to obtain consensus and distribute copies of the ledger. In scenarios that involve confidential data and/or computations, this model can pose challenges. Such transactions can be offloaded from blockchain and processed via MPC, with the transaction receipt captured on blockchain as proof that can be verified at any point.
When blockchain and MPC come together
MPC provides a model to enable privacy and distributed trust in securing blockchain solutions. Implementing MPC alongside blockchain can ensure that all MPC transactions are recorded as a timestamped source of truth on blockchain. Blockchain also introduces fairness, as the output computed by MPC can be published on the shared ledger to ensure all participants receive it simultaneously. Let's consider a real-life use case of reserved or sealed-bid auctions, in which each bidding party can submit multiple bids until the auction ends. Each bid carries confidential information, such as the bid amount, which can't be revealed to other participants during or even after the auction. Over the last few years, MPC has been leveraged to solve this type of use case, but blockchain can be introduced to bring in fairness and transparency. Here is how a system with blockchain and MPC would work:
- All participants and the MPC module will be members of a blockchain network, either as individual nodes or interacting with blockchain via their dedicated identity wallet applications.
- The MPC module will generate a random string for each participant, encrypt it using that participant's public key and publish it on blockchain.
- Each participant receives the encrypted string via smart contract events, decrypts it using their private key and uses it to mask their bid amount.
- The masked bid amount will be encrypted using the MPC module's public key and published on blockchain, so that the act of bid submission is timestamped and recorded immutably.
- The MPC module receives the masked bids from all participants via smart contract events.
- The MPC module performs computations to determine the highest bid by the cut-off time.
- The MPC module creates an encryption key by combining each participant's encryption string. This key is used to encode the auction result.
- The encoded auction result will be received by participants in real time via smart contracts.
- All participants whose encrypted strings were published earlier on blockchain will be able to decode the result of the auction.
The above sequence of actions ensures that all auction-related activities are recorded on blockchain for complete transparency. The MPC module ensures that confidential bid amounts are not revealed, that only authorized participants of the blockchain network are able to transact, and that malicious usage is prevented.
Toward more secure and transparent transactions
Secured Multi-Party Computation and blockchain are technologies with inherent capabilities for supporting a distributed, multi-party ecosystem. MPC provides certain security and privacy features that are missing from some blockchain protocols, whereas blockchain provides a level playing field in which the MPC transactions themselves have an immutable representation. In recent years, MPC has evolved to support efficient computations and has been cited by Gartner, but awareness of its true potential is still limited and large-scale adoption is yet to happen.
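The bid-masking step in the auction flow above can be sketched in a few lines. This is a simplified, illustrative model only: a real deployment would publish the masks encrypted under each participant's public key, deliver them via smart contract events, and use proper threshold cryptography; every function and name below is hypothetical.

```python
import hashlib
import secrets

def issue_mask():
    """MPC module generates a random masking string per participant
    (on-chain it would be published encrypted with their public key)."""
    return secrets.token_hex(16)

def derive_mask(mask_string):
    """Both sides derive the same numeric mask from the shared string."""
    return int.from_bytes(hashlib.sha256(mask_string.encode()).digest()[:8], "big")

def mask_bid(bid_amount, mask_string):
    """Participant masks the bid amount before publishing it on chain."""
    return bid_amount ^ derive_mask(mask_string)

def unmask_bid(masked_bid, mask_string):
    """MPC module recovers the bid; XOR with the same mask undoes it."""
    return masked_bid ^ derive_mask(mask_string)

# MPC module issues masks; participants submit masked bids "on chain".
masks = {p: issue_mask() for p in ["alice", "bob", "carol"]}
bids = {"alice": 500, "bob": 750, "carol": 620}
masked = {p: mask_bid(amount, masks[p]) for p, amount in bids.items()}

# Only the holder of the masks can recover the bids and pick the winner.
recovered = {p: unmask_bid(m, masks[p]) for p, m in masked.items()}
winner = max(recovered, key=recovered.get)
print(winner)  # bob
```

The published masked values are meaningless to other bidders, while the on-chain record still timestamps every submission immutably, which is exactly the transparency-plus-confidentiality split the flow above describes.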
There are plenty of ways to enhance the security and privacy of blockchain solutions. Looking for more information? Reach out to us @ [email protected]
Category Archives: 8th Grade Daily Tasks

It has been a wonderful school year. Enjoy your summer break. Be safe and have fun! NO HOMEWORK!

Happy Tuesday! Today's schedule was modified due to awards assemblies and field day. Therefore, I only saw two classes today and we finished watching The Outsiders. NO HOMEWORK!

HAPPY MONDAY! Today each class continued to watch The Outsiders. NO HOMEWORK!

Hello! Today we created graphic organizers to help us prepare for our literature benchmark tomorrow. Please study all of your notes, homework, and classwork on the following topics: Science fiction vs. Fantasy, Ender's Game, The Diary of Anne … Continue reading

Happy Monday! Today we checked and discussed Friday's homework (ch. 11-12). We thoroughly reviewed The Outsiders in preparation for our Benchmark test on Wednesday, May 16, 2012. Students were reminded to bring in their literature books tomorrow. If literature books … Continue reading

Happy Friday! Today we completed a reading quiz on ch. 9-10 and checked our ch. 9-10 homework. Class time was also given for students to work on their "Nothing Gold Can Stay" illustrations: Nothing Gold Can Stay Activity. These will … Continue reading

Hello! Today we reviewed chapters 7-8 with a reading quiz and checked our homework. The remainder of the class time was spent reading chapters 9-10. HOMEWORK: Finish reading ch. 9-10 and answer the study guide questions. PLEASE HAVE YOUR BOOK IN … Continue reading

Hello! Today we completed a reading quiz on ch. 5-6 and I checked the study guide questions for a homework grade. If you were absent today, be sure to let me see your homework tomorrow. We engaged in discussion about … Continue reading

Hello! Today we completed a reading quiz on chapters 3-4 and checked and discussed the study guide questions. HOMEWORK: Complete the ch. 5-6 reading and answer the study guide questions: Outsiders_Chapter Questions. DUE: Wednesday, May 9th!
An intraocular lens (IOL) is a lens implanted in the eye, usually replacing the existing crystalline lens because it has been clouded over by a cataract, or as a form of refractive surgery to change the eye's optical power. It usually consists of a small plastic lens with plastic side struts, called haptics, that hold the lens in place within the capsular bag inside the eye. IOLs were traditionally made of an inflexible material (PMMA), although this has largely been superseded by the use of flexible materials. Most IOLs fitted today are fixed monofocal lenses matched to distance vision. However, other types are available, such as multifocal IOLs, which provide the patient with multiple-focused vision at far and reading distances, and adaptive IOLs, which provide the patient with limited visual accommodation. Insertion of an intraocular lens for the treatment of cataracts is the most commonly performed eye surgical procedure. The procedure can be done under local anesthesia with the patient awake throughout the operation. The use of a flexible IOL enables the lens to be rolled for insertion into the capsule through a very small incision, thus avoiding the need for stitches, and the procedure usually takes less than 30 minutes in the hands of an experienced ophthalmologist. The recovery period is about 2–3 weeks. After surgery, patients should avoid strenuous exercise or anything else that significantly increases blood pressure. They should also visit their ophthalmologists regularly for several months to monitor the implants.
IOL implantation carries several risks associated with eye surgeries, such as infection, loosening of the lens, lens rotation, inflammation and night-time halos. Though IOLs enable many patients to have reduced dependence on glasses, most patients still rely on glasses for certain activities, such as reading. Sir Harold Ridley was the first to successfully implant an intraocular lens, on November 29, 1949, at St Thomas' Hospital in London. That first intraocular lens was manufactured by the Rayner company of Brighton, East Sussex, England, from Perspex CQ made by ICI. It is said that the idea of implanting an intraocular lens came to him after an intern asked why he was not replacing the lens he had removed during cataract surgery. The first lenses used were made of glass; they were heavy and prone to shatter during Nd:YAG laser capsulotomy. Plastic materials were used later, after Harold Ridley noticed that they were inert, having seen World War II pilots with pieces of shattered windshields in their eyes. The intraocular lens did not find widespread acceptance in cataract surgery until the 1970s, when further developments in lens design and surgical techniques had come about. Currently, more than a million IOLs are implanted annually in the United States.
Materials used for intraocular lenses
Advances in technology have brought about the use of silicone and acrylic, both of which are soft, foldable, inert materials. This allows the lens to be folded and inserted into the eye through a smaller incision.
PMMA and acrylic lenses can also be used with small incisions and are a better choice in people who have a history of uveitis, have diabetic retinopathy requiring vitrectomy with replacement by silicone oil, or are at high risk of retinal detachment. Acrylic is not always an ideal choice due to its added expense. New FDA-approved multifocal intraocular lens implants allow most post-operative cataract patients the advantage of glasses-free vision. These new multifocal lenses are not a covered expense under most insurance plans (in the United States, Medicare decided to stop covering them in May 2005) and can cost the patient upwards of $2,800 per eye. The latest advances include IOLs with square-edge designs, non-glare edge designs and yellow dye added to the IOL. In the United States, a new category of intraocular lenses was opened with the approval by the Food and Drug Administration in 2003 of multifocal and accommodating lenses. These come at an additional cost to the recipient beyond what Medicare will pay, and each has advantages and disadvantages.
Multifocal IOLs – provide simultaneous viewing of both distance vision and near vision. Some patients report glare and halos at night with these lenses.
Accommodating IOLs – allow for both distance vision and midrange near vision. These IOLs are typically not as strong for closer vision as the multifocal IOLs.
To incorporate the strengths of each type of IOL, eye surgeons are increasingly using a multifocal IOL in one eye to emphasize close reading vision and an accommodating IOL in the other eye for midrange vision. This is called "mix and match." Distance vision is not compromised with this approach, while near vision is optimized. Other IOLs include:
- Blue-light-filtering IOLs, which filter the UV and high-energy blue light present in natural and artificial light, both of which can cause vision problems.
- Toric IOLs (1998), which correct astigmatic vision.
Phakic, aphakic and pseudophakic IOLs
- Phakia is the presence of the natural crystalline lens.
- Aphakia is the absence of the natural crystalline lens, either from natural causes or because it has been removed.
- Pseudophakia is the substitution of the natural crystalline lens with a synthetic lens. Pseudophakic IOLs are used in cataract surgery.
The root of these words comes from the Greek word phakos, 'lens'.
Intraocular lenses for correcting refractive errors
Intraocular lenses have been used since 1999 for correcting larger errors in myopic (near-sighted), hyperopic (far-sighted) and astigmatic eyes. This type of IOL, implanted without removing the crystalline lens, is also called a PIOL (phakic intraocular lens). More commonly, aphakic IOLs (that is, not PIOLs) are implanted via Clear Lens Extraction and Replacement (CLEAR) surgery. During CLEAR, the crystalline lens is extracted and an IOL replaces it, in a process very similar to cataract surgery: both involve lens replacement and local anesthesia, both last approximately 30 minutes, and both require making a small incision in the eye for lens insertion. People recover from CLEAR surgery 1–7 days after the operation. During this time, they should avoid strenuous exercise or anything else that significantly raises blood pressure. They should also visit their ophthalmologists regularly for several months to monitor the IOL implants. CLEAR has a 90% success rate (risks include wound leakage, infection, inflammation and astigmatism). CLEAR can only be performed on patients ages 40 and older, to ensure that eye growth, which disrupts IOLs, will not occur post-surgery. Once implanted, IOLs have three major benefits. First, they are an alternative to LASIK, a form of eye surgery that does not work for people with serious vision problems.
Effective IOL implants can also entirely eliminate the need for glasses or contact lenses post-surgery. The cataract will not return, as the lens has been removed. The disadvantage is that the eye's ability to change focus (accommodate) has generally been reduced or eliminated, depending on the kind of lens implanted. Most PIOLs have not yet been approved by the FDA, but many are under investigation. Some of the risks the FDA has found so far during a three-year study of the Artisan lens, produced by Ophtec USA Inc., are:
- a yearly loss of 1.8% of the endothelial cells,
- 0.6% risk of retinal detachment,
- 0.6% risk of cataract (other studies have shown a risk of 0.5–1.0%), and
- 0.4% risk of corneal swelling.
Other risks include:
- 0.03–0.05% eye infection risk, which in the worst case can lead to blindness. This risk exists in all eye surgery procedures and is not unique to IOLs.
- remaining near- or far-sightedness,
- rotation of the lens inside the eye within one or two days after surgery.
One of the causes of the risks above is that the lens can rotate inside the eye, either because the PIOL is too short or because the sulcus has a slightly oval shape (the height is slightly smaller than the width). NuLens Ltd. is currently in patient trials with a new accommodative intraocular lens (IOL) technology with the potential to provide over 10 diopters of accommodative power. With an IOL that sits on top of the collapsed capsular bag, the NuLens accommodative IOL may be the first intraocular lens to provide real, comfortable and lasting accommodation for near, intermediate and far distances.
Types of PIOLs
Phakic IOLs (PIOLs) can be either spheric or toric; the latter is used for astigmatic eyes. The difference is that toric PIOLs have to be inserted at a specific angle, or the astigmatism will not be fully corrected and can even get worse.
According to their placement site in the eye, phakic IOLs can be divided into:
- Angle-supported PIOLs: these IOLs are placed in the anterior chamber. They are notorious for their negative impact on the corneal endothelial lining, which is vital for maintaining a healthy, dry cornea.
- Iris-supported PIOLs: this type is gaining more and more popularity. The IOL is attached by claws to the mid-peripheral iris by a technique called enclavation. It is believed to have a lesser effect on the corneal endothelium.
- Sulcus-supported PIOLs: these IOLs are placed in the posterior chamber, in front of the natural crystalline lens. They have special vaulting so as not to be in contact with the natural lens. The main complications with this type are its tendency to cause cataracts and/or pigment dispersion.
One of the major disadvantages of a conventional IOL is that it is primarily focused for distance vision. Though patients who undergo standard IOL implantation no longer experience clouding from cataracts, they are unable to accommodate, or change focus from near to far, far to near, and distances in between. Accommodating IOLs interact with the ciliary muscles and zonules, using hinges at both ends to "latch on" and move forward and backward inside the eye using the same mechanism as normal accommodation. These IOLs have a 4.5-mm square-edged optic and a long hinged-plate design with polyimide loops at the end of the haptics. The hinges are made of an advanced silicone called BioSil that was thoroughly tested to make sure it was capable of unlimited flexing in the eye. There are many advantages to accommodating IOLs. For instance, light comes from and is focused on a single focal point, reducing halos, glare and other visual aberrations. Accommodating IOLs provide excellent vision at all distances (far, intermediate and near), project no unwanted retinal images, and produce no loss of contrast sensitivity or central system adaptation.
Accommodating IOLs have the potential to eliminate or reduce the dependence on glasses post-cataract surgery. For some, accommodating IOLs may be a better alternative to refractive lens exchange (RLE) and monovision. The FDA approved Eyeonics Inc.’s accommodating IOL, Crystalens AT-45, in November 2003. Bausch & Lomb acquired Crystalens in 2008 and introduced a newer model, the Crystalens HD, the same year. Crystalens is the only FDA-approved accommodating IOL currently on the market, and it is approved in the United States and Europe. Studies and Peer Reviews: In a September 2004 FDA trial involving 325 patients: - 100% could see at intermediate distances (24" to 30") without glasses; the distance for most of life's activities. - 98.4% could see well enough to read the newspaper and the phone book without glasses. - Some patients did require glasses for some tasks after implantation of the Crystalens. - Vision was restored to 20/40 or better in 88% of patients, compared to 35.9% of patients who received normal IOLs. - In 2006, a 12-month study by Cummings et al. investigated contrast sensitivity and near visual acuity in patients who had received a Crystalens AT-45 versus those who received a standard IOL. Effectiveness was measured in terms of near, intermediate, and distance visual acuities, and safety was evaluated by assessing complications. The study concluded that contrast sensitivity was not reduced compared to those receiving standard IOLs and that the lens provided good visual acuity at all distances in pseudophakic patients. There were no adverse complications reported. However, this study lacked a long-term follow-up. - Pepose et al. (2007) tested the combination of a multifocal IOL in one eye and an accommodating IOL in the other eye. The group found that any combination of Crystalens in one or both eyes was better for intermediate vision. ReSTOR (multifocal IOL) was better for near vision.
The Crystalens and ReSTOR combination had better mean intermediate and near vision overall. - Macsai et al. (2006) conducted a multicenter, nationwide study evaluating the visual outcomes of 112 cataract patients implanted with the Crystalens IOL (n=56) versus standard monofocal IOLs (n=56). The Crystalens group demonstrated significantly better visual acuity compared to the monofocal patient group, as well as better distance and near vision 6 months post-operation. - In overall FDA clinical results on uncorrected binocular vision in 124 patients, 92 per cent had distance vision of 20/25 or better, 98 per cent had intermediate vision of 20/25 or better, and 73 per cent had near vision of 20/25 or better 11 to 15 months after surgery. In addition, 73.5 per cent either did not wear spectacles or wore them almost none of the time. - However, at this time, there are no long-term, well-designed clinical trials to support the accommodating technology of the Crystalens IOL. - The main concern with accommodating IOLs is that there are no long-term, large-scale studies involving their use in patients. Such clinical studies using objective measurement techniques must be done to fully support the claim that accommodating IOLs can restore accommodative vision to the presbyopic eye. - Though rare, potential complications include capsular bag contraction and posterior capsule opacification. - It is more difficult to implant an accommodating IOL (due to the attachment of hinges), and recovery time may be longer than with a standard IOL. - Patients should expect that their accommodative abilities will not be restored to perfect or near-perfect function. Though vision is significantly improved, the degree of improvement will not be the same for all, and some will still need glasses after surgery. - Accommodating IOLs are expensive: insurance companies do not cover these technologically advanced IOLs because their long-term efficacy remains to be fully elucidated.
Generally, patients over 50 with cataract problems and no serious eye diseases are good candidates for the procedure. The patient must have functional ciliary muscles or zonules for haptics positioning. In addition, the pupils must dilate adequately, as the IOL will induce glare in low-light environments if the pupils dilate too widely. Accommodating IOLs are beneficial not only for patients with cataracts, but also for those who wish to reduce their dependency on glasses and contacts due to myopia, hyperopia and presbyopia. Post-operative care is similar to that of normal IOLs. However, patients must include ophthalmologic exercises such as puzzles and word games as part of their daily regimen in order to tone their ciliary muscles and attain the maximum benefit from the accommodating lenses. These exercises should be done consistently for 3–6 months, and the patient's performance should be monitored by their eye care professional. Other promising multifocal/accommodating IOLs currently in clinical trials include Accommodative 1CU (HumanOptics, Erlangen, Germany), Smartlens (Medennium, Irvine, CA), and dual-optic accommodating lenses such as Sarfarazi (Bausch and Lomb, Rochester, NY) and Synchrony (Visiogen, Inc., Irvine, CA). References - ↑ Slade, Stephen. “Accommodating IOLs: Design, Technique, Results.” Review of Ophthalmology. 2005. 20 Mar 2009. <http://www.revophth.com/index.asp?page=1_751.htm> - ↑ “Crystalens Accommodating IOL.” USA Eyes. 2008. Council of Refractive Surgery Quality Assurance. 20 Mar 2009. <http://www.usaeyes.org/lasik/faq/crystalens-2.htm> - ↑ Segre, Liz. “Intraocular Lenses (IOLs): New Advances Including AcrySof ReStor, Tecnis, ReZoom, and Crystalens.” All About Vision. 2009. Access Media Group LLC. 20 Mar 2009. <http://www.allaboutvision.com/conditions/iol.htm> - ↑ United States Food and Drug Administration. Center for Devices and Radiological Health (CDRH). Crystalens Model AT-45 Accommodating IOL P030002. New Device Approval.
CDRH Consumer Information. Updated Jan 21 2004. http://web.archive.org/http://fda.gov/cdrh/mda/docs/p030002.html - ↑ Cummings et al. “Clinical evaluation of the Crystalens AT-45 accommodating intraocular lens: Results of the U.S. Food and Drug Administration clinical trial.” J Cataract Refract Surg. 2006 May; 32(5): 812-25. - ↑ Pepose JS, Qazi MA, Davies J, Doane JF, Loden JC, Sivalingham V, Mahmoud AM. “Visual performance of patients with bilateral vs combination Crystalens, ReZoom, and ReSTOR intraocular lens implants.” Am J Ophthalmol. 2007 Sep; 144(3): 347-357. - ↑ Macsai et al. “Visual outcomes after accommodating intraocular lens implantation.” J Cataract Refract Surg. 2006 Apr; 32(4): 628-33. - ↑ Glasser, Adrian. “Restoration of accommodation.” Current Opinion in Ophthalmology. 2006 Feb; 17(1): 12-8. - ↑ “Crystalens Accommodating IOL.” USA Eyes. 2008. Council of Refractive Surgery Quality Assurance. 20 Mar 2009. <http://www.usaeyes.org/lasik/faq/crystalens-2.htm> - ↑ Koch, Paul. “An Exercise Program for Crystalens Patients: How to use word search games to help Crystalens patients.” Ophthalmology Management. September 2005. http://www.ophmanagement.com/article.aspx?article=86430 - Shearing, Steven, MD. History of the PMMA Intraocular Lens. Ophthalmic Hyperguide. Vindico Medical Education and Allergan. Note: requires login to reach article content. - Keith P. Thompson (inventor). Near Vision Accommodating Intraocular Lens with Adjustable Power. (PDF) U.S. patent No. 5,607,472. URL accessed on 2007-02-04. - http://freedomophthalmic.com IOL Manufacturer in India This page uses Creative Commons Licensed content from Wikipedia (view authors).
Following are UN Deputy Secretary-General Jan Eliasson's remarks at the event marking World Humanitarian Day, in New York today: Today is a day to remember sacrifices and to honour courageous action. Today is also a day of celebrating our common humanity and our determination to live up to the values and principles of the UN Charter. On this World Humanitarian Day, we pay tribute to the thousands and thousands of humanitarian workers and volunteers who risk their lives to deliver life-saving aid to people in need on the front lines of crises and utter despair. Last year, 109 aid workers were killed, 110 were wounded and 68 were kidnapped. Most of them worked in five countries: Afghanistan, Somalia, South Sudan, Syria and Yemen. Today is also a day to show solidarity with the millions of people who are living in conflict and grave humanitarian need. This magnificent General Assembly Hall is the symbol of our pursuit to improve the conditions of humanity, serving as we should "We the peoples" of the United Nations, the first three words of the Charter. During the seven decades since the Charter was written, we have seen many advances for peace, development and human rights. Yet, today, the scale of human suffering is greater than at any time since the Organization was founded. A record 130 million people are now dependent on the United Nations and our partners for their protection and survival from conflict and disaster. More than 65 million people have been displaced as they flee violence or persecution. It is the highest number since the Second World War. Half of the displaced are children. What life and what future do they have? The needs around us are staggering and hard to comprehend. Yet, they tell only a fraction of the story. Behind the statistics are individuals and families whose lives have been devastated. They are all our fellow human beings with dreams and aspirations for a different life, a better future. 
These women, men and children face horrible situations and impossible choices every day. They are parents who must choose between buying food or bringing medicine for their children. They are children who cannot go to school because their buildings have been destroyed. They are families who must risk bombing and death at home or make a perilous escape by sea or across a desert. Humanitarian workers also must make heart-breaking choices. Think of the nurse in Yemen, who has run out of medical supplies and must decide who to treat, who to save. Or the aid workers in South Sudan, who cannot feed all of the malnourished children in their camp. I have been engaged in humanitarian work ever since my time as the first UN Emergency Relief Coordinator in the 1990s. Over the years, I have faced impossible choices and unspeakable suffering. I have seen the plight of refugees and victims of floods, earthquakes and drought, not to speak of innumerable innocent victims of warfare. And I have watched the anguish of colleagues and friends struggling to help people in dangerous and often underfunded operations. Earlier this year, 9,000 participants met in Istanbul for the first-ever World Humanitarian Summit. World leaders committed to transform the lives of people in acute need, in support of the Secretary-General's Agenda for Humanity. As we reflect on the meaning of World Humanitarian Day, we should recall and sound the alarm about the barbarity which is taking place every day in different parts of the world. Let me highlight the unspeakable tragedy of Syria. Time and time again, the United Nations has appealed for an end to the killing and destruction. Yet, those fighting persist in searching for military victory. And those watching from the outside are not preventing the violence from reaching new depths of atrocities. The Syrian people are subject to daily horrors. Barrel bombs. Terrorist acts. Chlorine gas attacks. Hospital bombings. Torture. Starvation. Siege. 
Desperate flight. The past two days, we have been haunted by the image of Omran Daqneesh, the young boy rescued from the rubble of a bombing in Aleppo. Let us remember that even wars have rules. Let nobody be under any illusion: today's crimes are being recorded for tomorrow's justice. In Aleppo, we urgently need a ceasefire or, at least, sufficient humanitarian pauses to reach the huge number of people in desperate need of food and medical supplies. We now pin our hope on the proposal of Special Envoy Staffan de Mistura and OCHA [Office for the Coordination of Humanitarian Affairs], on behalf of the Secretary-General, to observe a 48-hour humanitarian pause next week. I call on all parties to support this proposal in order to give the population in Aleppo the relief they so desperately need. I join the Secretary-General in urging the Security Council and all actors, with influence on the ground, to end the hostilities and to make possible political talks. Such talks could finally lead to a transition to peace for the suffering people of Syria. Let me end on a more hopeful note. This year, we have embarked on a transformative journey to a better future, staked out by last year's achievements by UN Member States. A road to a future, where no one is forced to make impossible choices, a future where no one is left behind. United Nations Member States made great progress in Sendai, Addis [Ababa], New York, Paris and Istanbul. You committed to work for a world of peace and justice, opportunity and dignity for all. To reach that brighter future, we must all work together in solidarity and with passion and compassion: Governments, Parliaments, international organizations, civil society, the private sector and the scientific community. We have an historic opportunity to make further progress at the summit on large movements of refugees and migrants in September in New York and at Habitat III in Quito. 
On this World Humanitarian Day, let us recognize the world as it is - and it is a troubled place. But, let us never forget to strive for the world as it should be. To reduce this gap - between the world as it is and the world as it should be - is the mission of the United Nations, and the mission for all of us here tonight. Source: United Nations
Water that we receive through municipal water authorities contains a few contaminants in spite of their purification endeavours. Similarly, if you have gone trekking and are staying somewhere with a water source such as a lake or stream, you can expect to find some impurities. The contaminants can be around 30 microns in size and need to be removed. Drinking water filters can be purchased to remove all sorts of pollutants from water. There are sediments and particulates of all sizes and shapes present in water. Cartridges made of materials resistant to harmful bacteria and chemicals are needed. There are some drinking water filter cartridges, 30 microns in size, that have an improved capability to remove most sediments. Sources of fresh water are becoming scarce. We often presume that water from streams is pure. Similarly, we often have to use lake water or well water, depending on the location and the availability of municipal water. Municipal drinking water is chlorinated so that bacteria and germs are removed. Although that is successfully achieved to a large extent, some sediments and particles may be left behind. In the case of well water, you may find suspended solids in greater amounts, as it is non-chlorinated. There may even be some dissolved chemical compounds as well as heavy metals that can be dangerous to the human body. In such instances, drinking water filter cartridges such as the Pentek R30-BB can be handy for cleaning the water before use. It is 30 microns in size and is able to noticeably reduce bacterial and other pollutants in water. The Pentek R30-BB is a sediment water filter cartridge which can be used in different filter systems to remove suspended solids from drinking water. The 30-micron Pentek R30 is made of pleated polyester.
It is pleated around a polypropylene core to give it additional strength. The ends are immersed in a thermosetting vinyl plastisol which fuses the three components together, forming a highly effective unitized end cap and gasket. The overall seam is sonically welded, which provides enhanced filtration efficiency. The pleated design ensures that the cartridge has optimum dirt-holding capacity. Moreover, it can be easily cleaned and reused repeatedly. Additionally, the durable polyester media is resistant to most chemicals as well as germs, ensuring that it is barely affected by such pollutants. Drinking water filter cartridges of varying micron sizes are available. The advantage of replacement cartridges is the ease they provide in replacing one that has reached the end of its estimated life span. Cartridges like the Pentek R30-BB, which are manufactured from polyester, can be successfully used and reused for cleaning non-chlorinated water sources such as well water. Resistance to bacterial growth is the main factor in its favor, and its durability makes it suitable for use in industrial as well as household applications. Drinking water filter cartridges of 30 microns are often preferred, as they are designed to provide optimum productivity in drinking water purification. Along with effective removal of sediments, you can be assured of potable water following the filtration process. You can investigate the benefits and drawbacks of these filter cartridges if you are located in areas that depend on non-chlorinated water from sources such as ponds and wells.
In Canada, more than 750,000 people live with dementia. An estimated 60 per cent will go missing at least once, according to the Alzheimer Society of Toronto. “Watch the news and you’ll hear about another older adult with dementia who has gone missing,” says Noelannah Neubauer, an AGE-WELL trainee (AGE-WELL is Canada’s Technology and Aging Network), who is working to improve the situation. As part of her PhD work at the University of Alberta, Neubauer has developed comprehensive, easy-to-use guidelines that offer proactive strategies to reduce the chances that someone with dementia will get lost. The guidelines were created in collaboration with provincial Alzheimer’s societies, police organizations, social workers, health-care professionals, caregivers and people living with dementia. “These are the first guidelines of their kind that simplify the vast number of strategies out there,” says Neubauer, whose research revealed there are more than 300 types of high- and low-tech strategies for persons with dementia at risk of getting lost. “You can be at risk of getting lost but still live a good life,” she stresses. “It’s making sure you implement proactive strategies that focus on a balance between safety and independence.” Her guidelines come in the form of a checklist. They focus on behaviours and circumstances—such as whether the person with dementia lives alone or frequently gets overwhelmed—to determine a person’s level of risk. Strategies, like locating technologies, are then matched to each level of risk. There are different versions of the guidelines for people living with dementia at home, with family or in a care home. Neubauer is working with several provincial Alzheimer’s societies and other groups to disseminate the guidelines. She is passionate about helping people with dementia live safely in the community for as long as possible, while reducing the chances of them getting lost.
In her own life, Neubauer has seen friends of her grandparents experience cognitive impairment. “Being so close to them, I just wanted to find a way to keep them as safe and healthy as possible.” Neubauer is a PhD student in Rehabilitation Science, working under the supervision of AGE-WELL researcher Dr. Lili Liu. Originally published by AGE-WELL.
There are some skills that are very important, and everyone needs to master them to perform their everyday tasks easily and smoothly. Of the skills a person needs to be good at, communication is one of the most important. Whether you need to express something to your family or want to put forward your point to your employer or employees, the right communication skill is the key. Be a good listener If you think that good communication is not your forte, then take immediate steps to improve yourself in this skill. There can be a number of reasons why a person does not possess good communication skills, but, fortunately, the ways to improve in this skill are quite simple. But before you learn the tips to improve your communication skills, you must first know the key to good communication. If you are not a good listener, you cannot be good at communicating your thoughts, ideas or feelings. So, if you want to excel in this skill, you have to listen well and then respond according to what the other party has just said. Communication can be defined as a person’s ability to convey certain information to another party correctly, accurately and effectively. So, if you do not listen to what other people have to say, you will just be dictating your own views, and that will not result in effective and efficient communication. Once you master the art of listening, all you have to do is follow the helpful tips below, and within a few days you will have improved communication skills: The only way you can convey your thoughts and feelings correctly to another person is if you show them respect. When you show respect with your manners and words, the other party automatically connects with you and makes an attempt to listen to what you are trying to convey.
There are many people who talk over a person while he or she is already speaking, and this should be avoided at all costs. Talking over a person disrupts the flow of communication, and the other party may feel insulted by this kind of gesture. So, being respectful is an important rule of good communication, and you should follow it if you want improvement. When people are telling a story or discussing something, make sure you do not assume what the end will be, blurt it out, and finish the other party’s sentence. Finishing someone’s sentence is a very rude gesture. Not only this, if you happen to guess the end of the sentence wrongly, it is you who will be embarrassed. Besides, when you complete or assume the end of a sentence, you are basically overpowering the other party, which is not the right thing to do. Paraphrase is the key If you want great communication skills that everyone will appreciate, you need to master the art of paraphrasing. The trick of paraphrasing is quite easy, and what it does is make sure that other people get interested and engaged in what you are saying. Besides this, paraphrasing confirms that you were listening, which, as discussed before, is a very important part of good communication. So, if you want to engage a person, first listen to what the other party has to say, then begin your explanation or view with a paraphrase. Body language plays an important role in developing a person’s communication skills. If you explain your views with respectful and relevant gestures, you will be able to explain them better, which at the end of the day will only help you. So, adopt a friendly or professional body language when communicating with your friends, family or workplace acquaintances accordingly. Besides this, it is also important to maintain eye contact, as this makes a person seem confident and thus engaging.
Now you know some of the tips to follow to improve your communication skills. So, it is advisable that you observe your own conversations and do some research to find out what you are doing wrong and how you can improve.
SIZE: Up to two inches. Most species are about one inch in length. COLOR: Brown to black. BEHAVIOR: Members of the order Plecoptera are known as stoneflies and are distinguished from mayflies and caddisflies by the four membranous wings being held flat over the abdomen. The antennae are long and thread-like, and adults also have two long, thin appendages extending from the tip of the abdomen. As immature nymphs, they are found in streams and rivers where they are important insects in freshwater ecosystems, serving as food for a wide variety of aquatic animals, especially fish. Adult stoneflies emerge during the winter and spring; some species as early as February. Many species fly at night and are attracted to lights. Stonefly larvae are common inhabitants along the bottoms of rivers and streams. They are important insects to the fly fisherman who will use many variations of artificial flies to mimic stonefly larvae and adults to effectively lure and catch trout. Stoneflies cannot be controlled through treatments because they breed in aquatic environments and only become pests when attracted by outdoor lights to buildings. Any emergence of stoneflies, however, should last only a few days. Where these insects are a problem, exterior light fixtures should be turned off or have yellow “bug light” bulbs installed. Commercial buildings should use sodium vapor lamps in fixtures rather than mercury vapor lights.
To test the efficacy of the Survivor Health and Resilience Education (SHARE) Program intervention—a manualized, behavioral intervention focusing on bone health behaviors among adolescent survivors of childhood cancer. Participants were 75 teens age 11 – 21 years, 1 or more years post-treatment, currently cancer-free. Teens were randomized to a group-based intervention focusing on bone health, or a wait-list control. Bone health behaviors were assessed at baseline and 1-month post-intervention. Controlling for baseline outcome measures and theoretical predictors, milk consumption frequency (p = 0.03), past month calcium supplementation (p < 0.001), days in the past month with calcium supplementation (p < 0.001), and dietary calcium intake (p = 0.04) were significantly greater at 1-month follow-up among intervention participants compared with control participants. The intervention had a significant short-term impact on self-reported bone health behaviors among adolescent survivors of childhood cancer. Research examining long-term intervention effectiveness is warranted. As a result of significant advances in the detection and treatment of pediatric cancer, the 5-year survival rate of pediatric cancer now exceeds 80%. This represents an increase from the early 1970s when only 56% of children and adolescents were predicted to live 5 years or more after diagnosis. While clinical advances have improved childhood cancer survival rates, many of these treatments lead to cancer late-effects among survivors, including risks of secondary cancers, cardiovascular disease, and musculoskeletal problems [3,4]. Survivors of childhood cancer have an increased risk for skeletal morbidity as a result of bone mineral density deficits that are caused by cancer therapies.
Research has consistently shown that survivors often have suboptimal bone density, and clinical signs of osteopenia and other bone health morbidities are common. Bone mineral deficits are influenced by a number of factors, including cancer type and treatment received, and may predispose survivors of childhood cancer to early-onset osteoporosis and more severe complications from osteoporosis. Bone mineral deficits have also been linked to stunted growth and other bone-related morbidities among survivors, such as an increased risk for fractures [6-8]. Radiation therapy is one of the leading causes of bone mineral deficits among young survivors of pediatric cancer due to radiation-related endocrine system disruption [5,9]. Additionally, chemotherapeutic agents, such as corticosteroids (e.g., prednisone) and methotrexate, have been associated with reduced bone mineral density through impairment of gonadal function and inhibition of new bone growth [5,10]. Bone mineral density deficits that result from cancer therapies may be further exacerbated by the fact that few survivors of childhood cancer meet recommended criteria for daily calcium consumption and other good bone health behaviors [5,11,12]. Increasing calcium and vitamin D intake through diet and dietary supplementation are effective methods of improving bone density among children [5,13-17]. Peak bone density is typically achieved at levels of calcium intake between 1200 – 1500 mg per day in children; current recommendations suggest that children age 9 – 18 consume 1300 mg of calcium daily for optimal bone health. Although cancer late-effects related to bone health often are not clinically manifest until later in life, encouraging bone health-promoting practices among young survivors may be an effective prevention strategy [3,5,12,19]. Prior evidence supports the effectiveness of health behavior interventions targeting survivors of childhood cancer, particularly those focusing on dietary behaviors.
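The 1300 mg daily calcium target cited above can be made concrete with a quick tally. The per-serving values below are approximate, widely cited figures used purely for illustration; they are not taken from the study:

```python
# Approximate mg of calcium per serving (illustrative values only).
APPROX_CALCIUM_MG = {
    "milk (1 cup)": 300,
    "yogurt (1 cup)": 300,
    "cheddar cheese (1 oz)": 200,
    "fortified orange juice (1 cup)": 300,
}

DAILY_TARGET_MG = 1300  # recommendation for ages 9-18 cited in the text

def calcium_total(servings: dict) -> int:
    """Sum calcium (mg) over a day's servings: {food: number of servings}."""
    return sum(APPROX_CALCIUM_MG[food] * n for food, n in servings.items())

# A hypothetical day's intake:
day = {"milk (1 cup)": 2, "yogurt (1 cup)": 1, "cheddar cheese (1 oz)": 1}
total = calcium_total(day)
print(f"{total} mg of {DAILY_TARGET_MG} mg target ({total / DAILY_TARGET_MG:.0%})")
```

Even a dairy-heavy hypothetical day like this one falls short of the target, which illustrates why the interventions above pair dietary counseling with calcium supplementation.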
To our knowledge, however, evidence-based behavioral interventions targeting bone health behaviors among adolescent survivors of pediatric cancer are lacking. In order to fill this research gap, this small-scale, randomized controlled trial sought to examine the efficacy of the Survivor Health and Resilience Education (SHARE) Program intervention for immediately improving bone health behaviors among adolescent survivors of pediatric cancer. SHARE is a manualized, health education and multiple health behavior change intervention for adolescent survivors of childhood cancer focusing in part on improving their bone health behaviors, including milk consumption, calcium supplementation, and dietary calcium intake [21,22]. The Survivor Health and Resilience Education (SHARE) Program was designed as a randomized controlled trial testing the efficacy of a single, half-day, group-based health education and health behavior counseling intervention for risk-reducing, lifestyle-related outcomes among adolescent survivors of childhood cancer. The methods for the trial have been described in detail previously [21-23] and are summarized briefly below. Trial eligibility criteria included adolescents age 11 – 21 years who were treated for an oncologic malignancy, were 1 or more years post-cancer treatment, and 1 or more years cancer-free. Two pediatric cancer treatment and research centers served as the recruitment sites for the trial. The two sites are in close proximity to one another (< 5 miles), provide inpatient and outpatient services to large and diverse patient populations, and have active pediatric hematology-oncology programs that include follow-up care and late-effects programs. All study recruitment procedures were approved by an institutional review board. Tumor registries from the two sites were used to identify patients who were potentially trial-eligible.
Parents of potentially-eligible patients were mailed a letter from the child’s treating oncologist that introduced the trial and were asked to respond to the mailing by contacting a research staff member. If parents responded and expressed interest in their child participating, eligibility screening was conducted by the trial coordinator. If eligible, active informed consent and assent were obtained. The trial coordinator subsequently initiated telephone calls to non-responding parents to confirm their receipt of the mailing, learn if they were interested in the trial, and obtain informed consent and assent. Among eligible patients, the consent rate was 49%. Commonly cited reasons for declining participation included time and interest. Detailed reporting of the recruitment and enrollment process is documented previously. SHARE participants completed a comprehensive baseline assessment via two successive telephone calls lasting approximately 30 – 40 minutes each. During the first call, participants completed demographic and health behavior questions, were asked to maintain a written dietary record for three days, and were provided with instruction on how to do so. Participants returned the completed record by postal mail and were then re-contacted via telephone for the remainder of the baseline assessment, which included a 24-hour dietary recall interview. After completing a baseline assessment, participants were randomly allocated to either the intervention or control condition. Ongoing enrollment continued until a minimum number of participants (approximately 10) was reached and could then be scheduled for intervention. The median number of days from baseline to intervention was 52. Participants completed a follow-up assessment via telephone approximately 1 month after the end of the intervention (Median = 41 days). Control participants completed follow-up assessments at an equivalent time point.
All telephone interviews were administered by a trained research assistant who was masked to trial condition. Demographic characteristics assessed included age, gender, race (white or non-white), household composition (two-parent household or other), and school performance. Clinical characteristics assessed included cancer type (leukemia or other type), time since cancer diagnosis (in years), and time since ending treatment (in years). Bone health knowledge was assessed at baseline and follow-up using six multiple-choice items adapted from the U.S. Department of Health and Human Services (U.S. DHHS) National Bone Health Campaign for children and prior research [25,26]. Each item posed a multiple-choice question regarding nutrition, calcium intake, or physical activity; participants selected a response from four possible options. Bone health knowledge was operationalized as a continuous variable reflecting the proportion of items each participant answered correctly (range 0 – 100%). Participants' baseline levels of bone health knowledge were similar to those observed among teens in prior research [26-29]. A continuous variable reflecting change in bone health knowledge from baseline to 1-month follow-up was created for multivariate analyses. Calcium consumption self-efficacy was assessed using an 11-item scale adapted from earlier research. Response options ranged from 'not at all confident' (1) to 'extremely confident' (5). Items were summed to create an overall self-efficacy score; higher values indicated greater self-efficacy (range 0 – 55; baseline M = 38.9, SD = 7.9, Cronbach's α = 0.86; 1-month follow-up M = 40.7, SD = 7.3, Cronbach's α = 0.90). A continuous variable representing change in self-efficacy from baseline to 1-month follow-up was created for multivariate analyses. Milk consumption frequency was assessed using a single item adapted from the U.S. DHHS National Bone Health Campaign.
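The internal-consistency coefficients (Cronbach's α) reported for the self-efficacy scale above follow the standard formula α = k/(k−1) × (1 − Σ item variances / variance of the summed scale). The sketch below computes it on synthetic item responses; the data, sample size, and reliability level are illustrative, not the study's.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic example: 60 respondents, 11 items on a 1-5 scale, all items
# driven by one latent confidence level (so the scale should cohere).
rng = np.random.default_rng(0)
latent = rng.uniform(1, 5, size=(60, 1))
responses = np.clip(np.round(latent + rng.normal(0, 0.5, size=(60, 11))), 1, 5)
print(round(cronbach_alpha(responses), 2))
```

Because every synthetic item tracks the same latent score, the computed α lands near the high values reported for the study scale.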
The item asked participants "How often would you say you drink milk?" The response options to the original item were adapted to a 4-point Likert-type scale ranging from (1) 'never' to (4) 'always.' Milk consumption frequency was operationalized as a continuous variable; higher values reflected more frequent milk consumption. At baseline, self-reported milk consumption frequency was significantly associated with dietary calcium consumption assessed via 24-hour recall interview (r = 0.31, p = 0.007). Dietary calcium intake was estimated based on the U.S. Department of Agriculture (USDA) 5-Step Multiple Pass 24-hour recall method, which has been demonstrated to produce valid and reliable data when administered via telephone [31,32]. This method asks participants to list everything they ate or drank during a preceding 24-hour period, subsequently probes when and where foods were eaten and details about each food, and then reviews the information with participants. To facilitate accurate recall, participants were provided with a reference guide/tips sheet equating food portions to commonly encountered objects (e.g., a baseball) for use during the interview. The interview was administered by a trained research assistant. Recalled dietary data were entered into Nutritionist Pro (Axxya Systems, Stafford, TX), third-party software that converts reported food consumption data into average daily nutritional intake information. Dietary calcium intake (in milligrams [mg]) was estimated based on dietary data and operationalized as a continuous variable for analyses. To verify accurate conversion of reported food consumption into nutritional data, 16 participants' recalled food data at baseline were entered into Nutritionist Pro by two independent dieticians and the results were examined for consistency. With respect to calcium consumption, mean values were virtually identical (p = 0.93), confirming consistency.
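Software such as Nutritionist Pro converts a recalled food list into nutrient totals via a food-composition lookup. The toy sketch below illustrates that conversion step only; the food names and calcium values are rough illustrative figures, not entries from the USDA database or from the software itself.

```python
# Approximate calcium content (mg) per common serving -- illustrative values,
# not an official food-composition database.
CALCIUM_MG = {
    "milk, 1 cup": 300,
    "yogurt, 1 cup": 350,
    "cheddar cheese, 1 oz": 200,
    "broccoli, 1 cup": 60,
    "white bread, 1 slice": 30,
}

def daily_calcium(recalled_foods: list[str]) -> int:
    """Sum calcium over a 24-hour recall; unknown foods contribute 0."""
    return sum(CALCIUM_MG.get(food, 0) for food in recalled_foods)

recall = ["milk, 1 cup", "cheddar cheese, 1 oz", "broccoli, 1 cup", "milk, 1 cup"]
print(daily_calcium(recall))  # 300 + 200 + 60 + 300 = 860 mg
```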
Calcium supplementation during the previous month was assessed using a single item asking "On how many of the past 30 days did you take a calcium supplement?" The item was preceded by a definition of a calcium supplement, including common brand-name examples (e.g., Caltrate®, Viactiv®). Two variables were created to operationalize calcium supplementation: a dichotomous variable indicating whether participants reported taking any calcium supplement (yes/no), and a continuous variable totaling the number of days within the past month that respondents reported taking a calcium supplement (range 0 – 30). As with milk consumption frequency, the number of days with calcium supplementation at baseline was significantly correlated with baseline dietary calcium consumption (r = 0.25, p = 0.03). SHARE was developed through a rigorous formative research process that involved target audience members as a core component of intervention development. Details of the intervention development have been described previously. Briefly, the SHARE intervention was informed by Green and Kreuter's PRECEDE-PROCEED model, a multi-organization partnership, capacity-building, formative research, and a pilot study of the intervention. The resulting intervention comprised a half-day interactive behavioral workshop that included messages and skill-building exercises addressing relevant risk-reducing and health-promoting behaviors for adolescent survivors of childhood cancer. Intervention content and outcome assessments were further informed by health behavior theory, including the Health Belief Model, the Transtheoretical Model of behavior change, and Social Cognitive Theory. Key intervention objectives included increasing participants' awareness of cancer late-effects, reducing barriers to and increasing perceived benefits of health-promoting behaviors, and improving self-efficacy to lead a healthy lifestyle.
The intervention had a strong emphasis on nutrition and bone health behaviors, including calcium consumption, with the goal of promoting good bone health habits and preventing bone-related morbidity. Intervention content that focused on promoting bone health included didactic presentations on bone health, demonstrations of healthy and unhealthy bone, and a discussion of meeting the USDA-recommended daily calcium intake of 1,300 mg [17,22]. Nutritional aspects of the intervention related to bone health focused on reading and understanding food labels, taste-testing calcium-rich foods, and role-playing making calcium-rich food choices. Intervention participants received workshop gift packs, which included samples of Viactiv® soft calcium supplements, sunscreen, and educational pamphlets. Intervention sessions were facilitated by a masters-level registered dietician who was a member of the research team. The facilitator was trained to administer the intervention by a multi-disciplinary research team, which included experts in pediatric oncology, nutrition, and behavioral sciences. A detailed intervention manual was developed to guide implementation, including text, scripts, and intervention handouts, worksheets, and activities. The intervention format allowed the facilitator to follow the structured guide while providing flexibility to accommodate the specific dynamics of each group based on participants' age and interests. To ensure intervention fidelity, 30% of sessions were videotaped and reviewed by study team members. The control condition was a standard-care wait-list condition. Control participants were offered the intervention at the conclusion of the study. Analyses were conducted using SAS 9.2 (SAS Institute, Cary, NC).
Differences between the intervention and control participants based on demographic characteristics, medical information, theoretical predictors, and baseline bone health behaviors were assessed using appropriate bivariate statistics (i.e., χ² tests, t-tests). Three linear regression models were created to examine whether differences existed between the study groups in continuous bone health behaviors at 1-month post-intervention, including milk consumption frequency, number of days taking a calcium supplement in the past 30 days, and dietary calcium intake. A logistic regression model was created to examine whether there was a significant difference between study groups in the odds of reporting any calcium supplementation in the past 30 days at follow-up. A variable indicating study group was dummy-coded (1 = intervention, 0 = control) and was the focal independent variable in the models. Baseline measures of each bone health behavior were included as control variables in the respective regression models; variables indicating change in bone health knowledge and calcium self-efficacy from baseline to 1-month post-intervention were included to account for potential theoretical explanatory factors. To identify additional candidate control variables, we examined whether demographic or clinical characteristics measured at baseline were significantly associated with bone health behavior. No significant relationships were noted, and these characteristics were not included. Dietary calcium intake variables were divided by 100 to ease interpretation of model parameter estimates. Participant characteristics by study condition are shown in Table 1. Participants allocated to the intervention and control conditions did not significantly differ based on demographics, clinical characteristics, theoretical predictors, or bone health behaviors assessed at baseline, indicating successful randomization.
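The linear-model setup described above (a dummy-coded group indicator plus the baseline measure as a control) can be sketched as follows. This is an illustrative re-creation on synthetic data via ordinary least squares, not the study's SAS code or data; the simulated effect sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 66
group = rng.integers(0, 2, n)        # dummy-coded: 1 = intervention, 0 = control
baseline = rng.normal(3.0, 0.8, n)   # baseline milk consumption frequency (synthetic)

# Simulate a follow-up outcome with a true adjusted group effect of +0.5
followup = 0.5 * group + 0.6 * baseline + rng.normal(0, 0.4, n)

# Design matrix: intercept, group dummy, baseline control variable
X = np.column_stack([np.ones(n), group, baseline])
coef, *_ = np.linalg.lstsq(X, followup, rcond=None)
print(f"adjusted group effect B = {coef[1]:.2f}")
```

The coefficient on the group dummy is the adjusted between-group difference at follow-up, which is how the B values in Table 2 are interpreted.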
Average milk consumption frequency was significantly higher among intervention participants at 1-month post-intervention (M = 3.36, SD = 0.72) compared with control participants (M = 2.93, SD = 0.88; t63 = 2.16, p = 0.03). After adjusting for change in self-efficacy, change in bone health knowledge, and baseline milk consumption frequency (Table 2), intervention participants reported significantly more frequent milk consumption at 1-month follow-up compared with control participants (B = 0.50, 95% Confidence Interval (CI) = 0.08, 0.92, p = 0.02). Our model explained 32% of the variance in milk consumption frequency at 1-month post-intervention. At 1-month follow-up, a significantly greater proportion of intervention participants (82.9%) reported taking any calcium supplements in the past 30 days compared with control participants (24.1%; χ²(1) = 22.2, p < 0.001). After adjusting for changes in self-efficacy, bone health knowledge, and baseline calcium supplementation, the odds of reporting any calcium supplementation in the past 30 days were significantly higher among intervention participants at 1-month follow-up (Odds Ratio = 24.49, 95% CI = 4.91, 143.05, p < 0.001). Our model explained 53% of the variance in current calcium supplementation (Table 2). Similarly, the mean number of days with calcium supplementation in the past month was significantly higher among intervention participants (M = 14.45, SD = 10.97) compared with control participants (M = 3.03, SD = 7.86; t62 = 4.74, p < 0.001). Regression analysis demonstrated that at 1-month follow-up intervention participants reported taking calcium supplements on significantly more days within the past month than control participants (B = 10.25, 95% CI = 4.94, 15.55, p < 0.001), after adjusting for baseline calcium supplementation and theoretical predictors. Overall, the model explained 39% of the outcome variance (Table 2).
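As a quick check on these figures, the unadjusted odds ratio implied by the reported proportions (82.9% vs. 24.1%) can be recovered by hand; it differs from the adjusted OR of 24.49 because the logistic model also controls for baseline supplementation and the theoretical predictors.

```python
# Proportions reporting any calcium supplementation at follow-up (from the text)
p_int, p_ctl = 0.829, 0.241

odds_int = p_int / (1 - p_int)   # odds in the intervention group
odds_ctl = p_ctl / (1 - p_ctl)   # odds in the control group
print(round(odds_int / odds_ctl, 1))  # unadjusted OR ≈ 15.3
```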
At the bivariate level, no significant difference existed between intervention (M = 1263.7 mg, SD = 736.2 mg) and control (M = 1152.1 mg, SD = 891.6 mg) participants in average dietary calcium intake at 1-month follow-up (t64 = 0.56, p = 0.58). However, regression analysis revealed that, after adjusting for baseline calcium intake and changes in knowledge and self-efficacy, intervention participants evidenced significantly greater calcium consumption at 1-month follow-up (B = 4.92, 95% CI = 0.33, 9.52, p = 0.04) compared with control participants, explaining 15% of the variance (Table 2). Despite research suggesting cancer survivors often improve dietary behaviors following diagnosis, concern remains that survivors of childhood cancer are at increased risk for bone-related morbidity and that many do not meet behavioral recommendations for promoting healthy bone development [5,11]. Compounding this problem, the evidence base for behavioral interventions targeting bone health behaviors among adolescent survivors of childhood cancer remains scarce. Our study examined the immediate efficacy of the Survivor Health and Resilience Education (SHARE) Program, a health-promoting multiple behavior change counseling intervention for adolescent survivors of childhood cancer, in improving their bone health behaviors. To our knowledge, this is among the first studies to do so in this special population. The findings indicate that the group-based intervention was efficacious in improving self-reported milk consumption frequency, calcium supplementation, and dietary calcium intake at 1-month follow-up. The results point to potentially fruitful areas of future research. An interim evaluation of SHARE indicated intervention participants found the group-based format to be relevant, understandable, beneficial, and acceptable.
Our findings add to the evidence supporting the program's approach, suggesting it is not only well-received within the target population, but that it also represents an efficacious approach to bone health behavior improvement. Nevertheless, practical factors limited participation by some teens. Those who lived farther away from the intervention site were more difficult to engage, possibly due to travel and other logistical barriers. Indeed, recent research suggests cancer survivors may readily accept distance-based approaches to behavioral intervention, which avoid many of the barriers associated with in-person engagement [12,37]. To expand the reach and impact of such interventions within this population, additional work examining strategies to lower barriers among teens is warranted, especially those teens who were more difficult to reach in SHARE [12,22,37]. For instance, intervention approaches applying interactive communication technologies, such as the Internet and wireless mobile technology, could improve program reach and impact. Additional approaches that reduce barriers to participation, such as offering the intervention in multiple geographic locations within the community, may also improve program reach. After accounting for change in theoretical predictors of bone health and baseline dietary calcium intake, dietary calcium intake was significantly greater 1-month post-intervention among intervention participants compared with control participants. Though we did not use an objective measure of bone health (i.e., a bone density scan) to examine bone health outcomes, it is unlikely that short-term changes in bone density would be observable. However, our findings do suggest the intervention appears promising in moving participants in that direction.
Peak bone density is typically achieved at levels of daily calcium intake between 1,200 and 1,500 mg in children; participants in the intervention were, on average, within this critical range at 1-month post-intervention. In addition, dietary protein and calcium have been found to interact to affect bone density: when both protein and calcium are consumed at recommended levels, a positive net impact on bone density has been observed among young adult females. Milk contains both protein and calcium, and because the intervention improved calcium supplementation and increased milk consumption, such effects may follow if the changes are sustained over time. While we are only able to draw conclusions regarding immediate post-intervention behavior change, prior work suggests long-term outcomes among cancer survivors are achievable [11,37]. Among young survivors, there is evidence suggesting health behavior interventions can produce outcomes sustained for up to 12 months, and that young survivors are interested in improving their diets, physical activity levels, and lifestyle-related risk factors. Whether a full complement of such changes is possible or durable in the long term remains to be seen. Ensuring that cancer survivors receive optimum risk-based medical care is critical to preventing cancer late-effect morbidities among survivors [3,5]. Optimum risk-based care entails systematic planning for lifelong screening, surveillance, and prevention of cancer late-effects among survivors of childhood cancer that considers risks based on previous cancer type, cancer therapy, genetic predispositions, lifestyle behaviors, and comorbid conditions. Encouraging a healthy lifestyle among survivors of pediatric cancer is essential to optimum risk-based care and prevention [3,5].
Moreover, it is important that risk-based care addresses individual-level survivor-related factors, including the knowledge, self-efficacy, and motivation necessary to engage in a healthy lifestyle and address behavioral factors contributing to cancer late-effects. These issues are central to SHARE's intervention approach, which includes directed health behavior changes that could be integrated into optimum risk-based care for survivors of pediatric malignancies. The ideal time at which to deliver health behavior interventions among cancer survivors has not been firmly established [3,11,37]. However, age-appropriate recommendations and approaches should be integrated across the continuum of cancer care to encourage young survivors to take increasing responsibility for their health and healthcare. Future research is needed to examine how health behavior interventions such as the SHARE Program can best be integrated into long-term care to achieve this purpose. Our findings should be interpreted in light of important study limitations, including the sample size and homogeneity, the immediate follow-up period, self-report methods of assessment, and limited reach. In particular, our reliance on self-reported measures of bone health behavior, some of which were developed for this research and lack established psychometric properties, is an important limitation. Future work can improve on this by including more diverse, randomly selected samples to address the generalizability of findings, and by utilizing multi-dimensional, multi-modal assessments to strengthen study measures. In addition, research is needed to establish the reliability and validity of the self-reported behavioral assessments for milk consumption frequency and calcium supplementation used in this study. Our cursory observation associating these measures with 24-hour recall calcium consumption data is encouraging, but limited.
Research is also needed over longer follow-up periods to examine the durability of intervention effects. Objective measures of bone density (i.e., bone density scans) may be important to pursue, along with more systematic comparisons among active treatment components (i.e., education, behavioral counseling, calcium supplementation) to discern those with maximal effect. Finally, research exploring alternative intervention modalities that address barriers to participation within this population appears warranted. The limitations of this small-scale study notwithstanding, the findings suggest the multi-component, manualized SHARE Program intervention was efficacious in producing short-term improvements in milk consumption frequency, calcium supplementation, and dietary calcium intake at 1-month follow-up among pediatric cancer survivors. Health behavior and health education interventions appear useful in promoting good bone health habits among young cancer survivors, possibly preventing and controlling the onset of osteoporosis and related late-effects. This research was supported by grants from the American Cancer Society, Lance Armstrong Foundation, and the National Cancer Institute (CA091831) to Kenneth P. Tercyak, PhD. The project was also supported in part by Award Number P30CA051008 from the National Cancer Institute. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Cancer Institute or the National Institutes of Health. Portions of this research were previously presented at the International Conference on the Long-Term Complications of Treatment of Children and Adolescents for Cancer, Niagara-on-the-Lake, ON, Canada (2004); the National Conference on Child Health Psychology, Charleston, SC (2004); and the Eastern Society for Pediatric Research, Philadelphia, PA (2009). Conflict of Interest Statement: The authors have no conflicts of interest to disclose.
School Safety Assessment: VISUS Methodology Ensuring the safety of people is one of the main concerns of public administrators in hazard-prone territories, particularly with reference to strategic and relevant major public buildings, such as schools. This requires the definition of a rational and effective strategy for risk reduction and climate change adaptation based on the level of risk, points of weakness, countermeasures and costs. Administrators and policy-makers must make decisions using a finite budget for a variety of safety interventions across the schools of an entire district. There is therefore an imperative need for a quick but reliable assessment methodology that, on the one hand, characterizes the initial situation and, on the other, supports them with concrete information for decision making. Moreover, when interventions must be prioritized, a multilevel approach is also useful for facilitating the decision process to upgrade the safety level. In close collaboration with UNESCO, SPRINT-Lab researchers at the University of Udine in Italy developed a specific technical-triage methodology named VISUS. This safety assessment methodology facilitates the decision-making process in the definition of rational and effective safety-upgrading strategies, allowing decision makers to make science-based decisions on where and how they may invest their available resources to strengthen the safety of schools, their students and teaching staff in an efficient and economical manner. VISUS assesses schools in a holistic and multi-hazard manner that considers five issues: site conditions, structural performance, local structural criticalities, non-structural components and functional aspects.
Each issue is analyzed using an expert-reasoning process that splits the assessment into two main phases: characterization and evaluation. As a result, simple graphical indicators summarize the evaluation, pointing out the main weaknesses and the need for intervention. The collection of data during the characterization phase is done through a mobile application. The information generated will support the sustainability of the desired impacts: i) Ministries of Education and Finance will be able to define and prioritize the budgets needed for future investments; and ii) international and regional development banks can use the outcomes of the assessments to guide the design of future grants and loans for the rehabilitation, reinforcement and retrofitting of school buildings, and the construction of new safe schools. Technical-triage assessments and expert-judgment pre-codification processes are the two main elements on which the VISUS methodology is based. Different levels of assessment can be identified to meet different requirements. Low assessment levels are usually implemented through a collection of data (desk analysis, questionnaires, forms, check-lists, etc.). These approaches allow a quick ranking of buildings through indices. Nevertheless, such approaches are rarely detailed enough to properly answer all of the administrator's concerns, and in most cases the quality of the input data is not accurate. On the other hand, deeper analyses can answer the majority of the administrator's concerns, with in-depth/specific assessments, detailed design and cost quantification. However, these inspections are very costly and time-consuming, and they rely on the expertise available within the country, which is sometimes nonexistent, limiting the number of facilities that can be inspected.
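A low-level, index-based ranking of buildings of the kind described above can be sketched as follows. The five issue names come from the VISUS description, but the checklist scores, weights, and aggregation rule are entirely hypothetical; the actual VISUS evaluation uses pre-codified expert reasoning, not this simple weighted sum.

```python
# The five VISUS issues (from the methodology description).
ISSUES = ["site conditions", "structural performance",
          "local structural criticalities", "non-structural components",
          "functional aspects"]

# Illustrative weights, not taken from VISUS (0 = good, 1 = worst per issue).
WEIGHTS = [0.25, 0.30, 0.20, 0.15, 0.10]

schools = {
    "School A": [0.2, 0.8, 0.5, 0.3, 0.1],
    "School B": [0.1, 0.2, 0.2, 0.4, 0.3],
    "School C": [0.7, 0.6, 0.8, 0.5, 0.6],
}

def risk_index(scores):
    """Weighted aggregate of per-issue scores; higher means worse."""
    return sum(w * s for w, s in zip(WEIGHTS, scores))

# Rank schools from highest to lowest index to prioritize interventions.
for name, scores in sorted(schools.items(), key=lambda kv: -risk_index(kv[1])):
    print(f"{name}: {risk_index(scores):.2f}")
```

Such an index supports the quick-triage level of assessment; the deeper analyses described above would then target the schools at the top of the ranking.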
Owing to these features, the VISUS methodology has recently been adopted by UNESCO and has been successfully tested in prototype projects in El Salvador, Laos and Indonesia.
Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario-based or deterministic models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models rather than the empirical attenuation relationships used in PSHA to determine ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where sufficient tsunami runup data are available, hazard curves can be derived primarily from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary means of establishing tsunami hazards. Two case studies describing how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
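The Monte Carlo approach mentioned for region-wide analyses amounts to sampling many sources, mapping each to a runup at the site, and tabulating exceedance probabilities into a hazard curve. The sketch below illustrates only that sampling-and-tabulation structure; the truncated Gutenberg-Richter-style magnitude distribution, the magnitude-to-runup scaling, and the lognormal scatter are purely illustrative stand-ins for a real source model and propagation code.

```python
import math
import random

random.seed(42)

def simulated_runup() -> float:
    """Stand-in for a tsunami propagation model: draw a magnitude from a
    truncated Gutenberg-Richter-like distribution (Mw 7.0-9.5, b = 1) and
    map it to a runup height with lognormal scatter (illustrative only)."""
    u = random.random()
    mag = 7.0 - math.log10(1 - u * (1 - 10 ** -(9.5 - 7.0)))
    median_runup = 10 ** (mag - 8.0)           # hypothetical scaling (m)
    return median_runup * random.lognormvariate(0, 0.5)

runups = [simulated_runup() for _ in range(100_000)]

# Empirical hazard curve: probability that runup exceeds each threshold.
for h in (0.5, 1.0, 2.0, 5.0):
    p = sum(r > h for r in runups) / len(runups)
    print(f"P(runup > {h} m) = {p:.3f}")
```

In a real PTHA the per-event probabilities would also carry source recurrence rates, so the curve would be expressed as an annual exceedance rate rather than a per-event probability.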
Choosing a major is one of the key decisions you'll make in college. Once you determine that, you still must decide what type of degree to obtain. If chemistry is your major, you can opt for a Bachelor of Arts (B.A.) or a Bachelor of Science (B.S.). Each type of degree has advantages and disadvantages, so knowing the differences will help you make the right choice for your future career. Bachelor of Arts A Bachelor of Arts is considered a liberal arts degree, generally with one focus of study, such as English, chemistry, psychology or sociology. Classes for a B.A. encompass a spectrum of chemistry topics without delving into them as deeply as a B.S. program would require. A Bachelor of Arts degree requires fewer courses in the chemistry major and allows students to take more electives in other areas, such as English, foreign language and art. A B.A. in chemistry would allow you to pursue advanced education in certain medical and science fields. According to the University of Iowa, you could use a B.A. in chemistry as a prelude to dentistry or medical studies. You might also pursue such a degree if you're planning to work in pharmacology or optometry, or if you want to teach high school chemistry. A B.A. in chemistry would include the courses necessary to qualify for admission to graduate school as well, according to Eastern Kentucky University. Bachelor of Science A Bachelor of Science is a more rigorous course of study that focuses mainly on math and science concepts. This type of degree typically requires a certain number of hours of research study; the exact number depends on the college you attend. A B.S. in chemistry requires courses in physics and calculus. If you pursue a B.S. degree, you can't take as many elective courses in areas other than chemistry, science and math. If you're planning to go into the biochemistry or organic chemistry fields, a Bachelor of Science degree is the better option.
This type of degree might also be necessary if you want to become a doctor, veterinarian or dentist. Research-driven careers in chemistry or medicine also require a B.S. in chemistry. According to the University of Iowa, a B.S. in chemistry is required for careers in certain types of business and industry. A B.S. degree also prepares you for graduate school.
From Wikipedia, the free encyclopedia: The Cathedral of Santa Maria of Palma, more commonly referred to as La Seu (a title also used by many other churches), is a Gothic Roman Catholic cathedral located in Palma, Majorca, Spain, built on the site of a pre-existing Arab mosque. It is 121 metres long, 55 metres wide and its nave is 44 metres tall. Designed in the Catalan Gothic style but with Northern European influences, it was begun by King James I of Aragon in 1229 but finished only in 1601. It sits within the old city of Palma atop the former citadel of the Roman city, between the Royal Palace of La Almudaina and the episcopal palace. It also overlooks the Parc de la Mar and the Mediterranean Sea. Europe is generally agreed to be the birthplace of western culture, including such legendary innovations as the democratic nation-state, football and tomato sauce. The word Europe comes from the Greek goddess Europa, who was kidnapped by Zeus and plunked down on the island of Crete. The name gradually expanded from referring to mainland Greece until it finally extended to include Norway and Russia. Don't be confused that Europe is called a continent even though it doesn't look like an island the way the other continents do. It's okay. The Ural mountains have steadily been there to divide Europe from Asia for the last 250 million years; Russia technically inhabits "Eurasia". Europe is presently uniting into one political and economic zone with a common currency called the Euro. The European Union originated in 1993 and is now composed of 27 member states. Its headquarters is in Brussels, Belgium. Do not confuse the EU with the Council of Europe, which has 47 member states and dates to 1949. These two bodies share the same flag, anthem, and mission of integrating Europe. The headquarters of the Council are located in Strasbourg, France, and it is most famous for its European Court of Human Rights.
Despite these two bodies, there is still no single constitution or set of laws applying to all the countries of Europe. Debate rages over the role of the EU with regard to national sovereignty. As of January 2009, the Lisbon Treaty is the closest thing to a European constitution, yet it has not been approved by all the EU states. Text by Steve Smith.
From the 'flying birdcages', as the first reconnaissance aircraft were described, to today's modern drones (which, as it happens, also have their origins in World War One), military aviation has come a long way in the last century. Here, aviation historian Chris McNab looks back at the early days of aircraft training and explains the evolution of British air-combat tactics and training. To learn more on that subject, as well as general doctrine, training and corps flying skills, aircraft assembly, care and repair, and air-to-surface operations, read his book, ‘The World War I Aviator’s Pocket Manual’, which features official documents from the period. Use the code AVPFN18 to get a 25% discount. Article by Chris McNab It takes a considerable act of imagination to place yourself mentally in the cockpit of a World War One combat aircraft. First, remind yourself that when war broke out in 1914, powered, sustained and controlled flight was a mere 11 years old, its birth marked by the 12-second hop of Orville and Wilbur Wright’s Flyer near Kitty Hawk, North Carolina, on 17 December 1903. The intervening period between 1903 and 1914 saw admittedly significant steps forward in aviation technology and understanding, but flight itself was still in the earliest stages of its evolution. Aircraft of this time were slow, vulnerable, rattling creatures, made from wire, canvas and wood and powered by sputtering and unpredictable engines. The understanding of aeronautics was exploratory at best. Add generally poor levels of training amongst early aircrew, and it was a recipe for disaster. A study of 153 Royal Flying Corps (RFC) pilots killed between August 1914 and December 1915 found that, while 58 per cent died from combat-related wounds, the remainder – 42 per cent – were killed by mechanical failure or simple human error. Part of the problem for RFC aviators was that just coping with the physical conditions of open-cockpit flight took extraordinary resilience.
In winter, sub-zero temperatures at ground level would drop to polar lows at altitudes as high as 18,000ft, to which was added a 100mph slipstream windchill factor. Although electrically-heated body suits were introduced in very limited numbers later in the war, most pilots simply had to shiver and survive in layers of fur clothing, with whale oil smeared on exposed skin. Even in the most clement weather, life in the skies was merciless. Flying through clouds, let alone rain, soaked you through and clouded your goggles. At higher altitudes, hypoxia was an ever-present risk; oxygen bottles were sometimes issued, although many pilots and other aircrew did not rely on such assistance, regarding an artificial oxygen supply with an air of stoic disdain. Furthermore, if anything went wrong there was no option to bail out – parachutes were not issued as standard. A Sopwith Camel at the Imperial War Museum (image: Les Chatfield) Added to the dangers inherent in the aircraft, and the adversity of flying, were the twists, turns and violence of combat itself. In the early months of the war, the primary purpose of military aircraft was reconnaissance, the aircrew carrying a handgun or rifle to make opportunistic and largely ineffective potshots at enemy aircraft. Yet by 1915, pure combat aircraft – fighters and bombers – had emerged in earnest, boosted especially by the invention of fixed machine guns firing directly through the propeller arc, first via Roland Garros’ propeller bullet-deflector plates, then via the invention of the ‘interrupter gear’ (variously attributed) that synchronised the individual moments of firing with the turn of the propeller blades. What this invention did was make aerial fighter combat into a true direct-fire war, expressed on the British side through aircraft such as the Royal Aircraft Factory S.E.5, the Sopwith Camel and the Sopwith Pup.
To gain superiority over the enemy in this new battle, two factors were critical: 1) Aircraft performance, particularly speed, manoeuvrability and climb rate; 2) Tactical manoeuvres designed to bring the aircraft into optimal firing position, ideally directly on the tail of the enemy, or to shake off an enemy attempting to do the same. The former came through progressive improvements in airframe and engine design, but efficiency in the latter was crucially dependent upon two human factors – training and experience. Training in the RFC during the First World War was a somewhat haphazard affair. A Central Flying School (CFS) had been established on 12 May 1912, instructing both Navy and Army pilots, but it was quickly overwhelmed by the high rates of demand imposed upon it by war conditions. Thus, numerous training units were improvised, often based around civilian flying schools, but the quality of training varied wildly. From 1916, the RFC did begin to impose regulations over training standards, but, even so, pilot recruits regularly entered combat with fewer than the stipulated 15 hours of solo flying under their belts. The result was a truly appalling loss rate amongst pilots. During the infamous ‘Fokker Scourge’ of 1916, for example, pilot life expectancy dropped to just 17.5 hours of flying time. Although combat flying in World War One would remain a violent and perilous business to its end, the RFC recognised the need for greater professionalism to be built into its training and the distribution of combat knowledge. Certain fighter aces played an individual role through compiling and transferring tactical good practice. An example is found in the rules written down by Edward Corringham ‘Mick’ Mannock, an aggressive pilot and inspirational leader who downed at least 61 enemy aircraft during his combat flying career.
Mannock truly cared for the men who served under his command in 74 Sqn and later 85 Sqn, a humanity demonstrated by the way in which he distributed his knowledge through lectures and leaflets. But alongside the shared experience of talented pilots, the RFC also needed a better organisational training structure if air combat training was to achieve a degree of commonality and rigour. In July 1916, it formed a specific Training Brigade, which, in August 1917, was expanded into a full Training Division, consisting of three training brigades: Northern, Eastern and Southern, all staffed by experienced flyers and instructors. By the time of the RAF’s formation in April 1918, the RFC had developed more than 100 training squadrons and 30 specialist flight schools. One particularly significant advance in pilot training was brought about by Lieutenant-Colonel Robert Smith-Barry, a battle-tested commander of 60 Sqn. Appalled by the losses he witnessed amongst air crew, Smith-Barry set about pioneering a new, professional method of flying instruction, centred around the use of dual-control Avro 504 biplanes at 1 (Reserve) Sqn at Gosport. The instructor sat behind the pilot, communicating instructions through a ‘Gosport Tube’ (a voice tube that terminated in earpieces for the pilot), hence the method of training later became known as the ‘Gosport System’. Smith-Barry focused not just on giving the recruits basic flying skills, but on getting them to push the aircraft to their limits of manoeuvrability and power, wrenching the machines through tactical manoeuvres that could either save their lives or bring them in line for a kill over war-torn Europe.
It was hair-raising flying, about which Smith-Barry remarked: “If the pupil considers this dangerous, let him find some other employment as, whatever risks I ask him to run here, he will have to run a hundred times when he gets to France.” But it produced a far higher calibre of pilot than other, more ad hoc approaches, and the RFC authorities recognised that fact. In August 1917, 1 (Reserve) Sqn became the School of Special Flying, dedicated to producing expert Gosport System instructors, who then went out to the wider world to train up the next generation of pilots. As already implied, even higher standards of training could never make combat flying in World War One appreciably less dangerous, and, compared to present-day expectations, the casualties remained astonishing – 37,970 British aircraft were lost during the conflict. Yet, as we can see, when the RAF was born in 1918, straight away it stood on the shoulders of giants. The professionalism and tactical expertise that became synonymous with the RAF was born, experimentally and violently, in the hands of thousands of aircrew who, inch by painful inch, steadily learned the hard lessons of aerial combat and progressively attempted to distribute those lessons professionally and systematically. The rewards for doing so, paid in so much blood, were seen in the RAF’s defeat of the Luftwaffe in 1940, and in all subsequent operations to this day. To learn more, pick up a copy of Chris McNab’s book, ‘The World War I Aviator’s Pocket Manual’. Readers can get 25% off when they order at www.casematepublishers.co.uk. To apply the discount, simply enter voucher code AVPFN18 in your basket before proceeding to checkout. * (Cover image: Shutterstock / Paul Fleet). * This is an edited version of an article first published in 2018.
By Lestey Gist, The Gist of Freedom. Click and listen to Kimberly Simmons, a descendant of the militant Black abolitionist Lewis Leary! Lewis helped rescue John Price and rode with John Brown in the raid on Harpers Ferry! WWW.BlackHistoryBLOG.com. Mark your calendar: on Thursday at 8pm we'll interview Stephanie Gilbert. On iTunes: Descendant, John Brown's Black Militant Abolitionist, Leary! – Jan 07, 2013 – http://itun.es/i6JJ4Bw. Kimberly will share stories of her family's legacy of social activism, from Lewis Leary and the Underground Railroad movement to Langston Hughes and the Harlem Renaissance. Stephanie will share her great-great-grandfather's narrative, which gives his account of Shadrach's rescue and his own involvement. Her ancestor, Morris, was one of the leaders of the Boston Vigilance Society (a cross between the NAACP and the Black Panthers), formed in the wake of the Fugitive Slave Act of 1850 to fight slavery and to protect those who had escaped bondage and settled in the North. In one of the preludes to the Civil War, Morris helped free Shadrach Minkins, a successful freedom seeker, after Minkins had been captured by bounty hunters. Having failed in a petition for habeas corpus, Morris, Lewis Hayden and others devised a plan to break into the jail and carry Minkins to freedom. Under orders from President Millard Fillmore, Morris was tried for violating the new Fugitive Slave Act of 1850 and faced a potential death sentence. The revised law deputized and paid any person who arrested a free Black person they suspected of being a "fugitive slave". Morris was represented by Richard Henry Dana, author of Two Years Before the Mast. He was acquitted after a trial, based in part on the testimony of an alibi witness who swore Morris was not involved in the jail break.
The witness was the Chief Justice of the Supreme Judicial Court, Lemuel Shaw, who had ruled against him in the Roberts case. Morris was a giant in the Massachusetts legal profession before and after the Civil War, and he played a central role in several key legal developments in America during his career. In 1849, Morris joined with Senator Charles Sumner (later beaten with a cane on the floor of Congress) in challenging legal segregation in Boston's elementary schools. Black children were required to attend the Belknap School just off what is now Joy Street, even if other schools were closer to their homes. The case was tried on stipulated facts, which included a stipulation that the elementary schools were "separate" for "colored" and white children but otherwise the same in faculty and facilities. From that, the United States Supreme Court later derived the concept of "separate but equal," a phrase that would haunt race relations in the country for over a century. In the case, Roberts v. City of Boston, Morris and Sumner argued that segregating children by race was inherently discriminatory and damaging to the black children. They argued that requiring black children to attend an all-black school in essence violated the statute that required cities and towns to provide education for all children, even though the statute did not specify how each school system was to be arranged. The court ruled against them, but Morris was not deterred. He joined with William Cooper Nell, Lewis Hayden, William Lloyd Garrison and other giants of the abolition movement to form a grassroots movement to pressure the legislature to change the law. On April 28, 1855, Governor Henry Gardner signed a bill into law prohibiting racial segregation in public schools. Gardner was a member of the "Know-Nothing" party. Massachusetts became the first state in the Union to adopt such a law. http://www.massabota.org/
Weeping tile is a porous pipe used for underground drainage. Weeping tile typically comes in two varieties: 1) corrugated plastic pipe with small slits cut lengthwise into it; 2) PVC pipe with drainage holes. Weeping tile is installed beside a foundation footing and surrounded by aggregate larger than the slits. The aggregate prevents excessive soil from entering the weeping tile through the slits. With this arrangement, water in the surrounding soil flows through the aggregate and into the weeping tile. The weeping tile then drains into a storm sewer connection through a backwater valve, a sump liner, a gravity drain or a dry well. The weeping tile should be installed so that the top of the pipe is lower than the bottom of the interior concrete floor. The weeping tile acts as a relief valve for water pressure, limiting the possibility of water penetrating the basement floor or walls. Without weeping tile installed around a foundation for drainage, water pressure will build to the point that it has to release into the basement. Lenbeth Weeping Tile (Calgary) has extensive experience in both residential and commercial construction. We are experts in drainage and can provide advice to help you plan and complete your drainage systems!
History and Benefits of Diets: U.S. Senate Dietary Guidelines About the history and benefits of the U.S. Senate dietary guidelines. A BANQUET OF FAMOUS DIETS The Head Man: As chairman of the Senate Select Committee on Nutrition and Human Needs, Sen. George McGovern of South Dakota, former Democratic presidential hopeful and a leading liberal in Congress, saw his role as akin to that of the U.S. surgeon general who condemned smoking as a threat to health. Six of the 10 leading causes of death in this country, McGovern pointed out, are linked to diet. But when the McGovern committee report came out in January 1977, after nine years in the making, its conclusions--that cholesterol-rich foods may be as dangerous to health as cigarettes--aroused such a controversy that an amended report was issued 11 months later. The committee was originally conceived as a bridge between health and welfare interests on the one hand and food and farm interests on the other, but its initial recommendations on national nutrition--the first ever by any branch of the U.S. government--seemed to promote the former at the expense of the latter. The revised report made concessions to the meat, dairy, salt, and sugar industries and to the American Medical Association. Overview: Dietary Goals for the U.S. (1977) was intended as a set of guidelines to nutrition rather than as a particular formula for losing weight. As far as dieting is concerned, the committee concluded that calories do count, that caloric intake must be reduced below maintenance needs in order to lose weight, and that no diet yet invented offers a surefire solution to obesity. On the contrary, testimony before the committee indicated that only 10% to 20% of individuals on diet programs actually solve their weight problems. The rest bounce up and down in what nutritionist Jean Mayer calls "the rhythm method of girth control."
In general outline, the McGovern committee recommended increased consumption of complex carbohydrates and naturally occurring sugars, from 28% of caloric intake in the average U.S. diet to 48%; reduction of refined and processed sugar intake by about half, to 10% of total calories; and reduced consumption of fat, from 40% to 30% of intake; with protein making up the final 12%. (The committee's original recommendation to eat less meat was modified in the final report to decreasing consumption of animal and saturated fats.) More specifically, the committee advised limiting salt consumption to 5 grams daily (up from 3 grams in the original report) and cholesterol to 300 milligrams daily. Senator McGovern also expressed concern over the rapidly growing use of soft drinks, which during the 1970s replaced milk as the second most frequently consumed beverage (after coffee). Pro: On a national level, lower fat and protein consumption and greater reliance on complex carbohydrates would promote health, reduce medical expenditures, and conserve some of the energy involved in food processing. At the level of the family, cutting back on expensive meat and processed products in favor of fresh vegetables would result in considerable savings. Con: By the same token, these changes would have a significant negative impact on the meat- and food-processing industries. As far as the advised nutritional balance affects the individual dieter, the editors of Consumer Guide point out that in the recommended 1,200 calories you would not be getting enough protein to meet basic needs and should therefore take some of your carbohydrates in the form of high-protein legumes.
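The committee's percent-of-calories targets translate into gram amounts with simple arithmetic, which makes the Consumer Guide protein concern easy to check. A minimal sketch, assuming the standard Atwater conversion factors (4 kcal per gram for protein and carbohydrate, 9 kcal per gram for fat); the function name is illustrative, not from the report:

```python
# Convert percent-of-calories targets into daily gram amounts.
# Atwater general factors (an assumption, not from the report):
# 4 kcal/g for protein and carbohydrate, 9 kcal/g for fat.
KCAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}

def grams_from_targets(total_kcal, targets):
    """targets maps nutrient name -> fraction of total calories."""
    return {nutrient: round(total_kcal * fraction / KCAL_PER_GRAM[nutrient], 1)
            for nutrient, fraction in targets.items()}

# McGovern goals: 48% complex carbs + 10% sugars = 58% carbohydrate,
# 30% fat, 12% protein, applied to the 1,200-calorie reduction diet.
goals = {"carbohydrate": 0.58, "protein": 0.12, "fat": 0.30}
print(grams_from_targets(1200, goals))
```

At 1,200 calories, 12% protein works out to only about 36 grams, which illustrates why the editors suggested supplementing with high-protein legumes.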
In the past, organic farm production was often considered to be only for radicals or hippies. Now it is seen as a viable economic move – with benefits to the farm soil, to the environment, and to the purchasers of the products. An organic approach can contribute toward making a farm more financially viable in several ways. Demand for organic produce has boomed over recent years; supermarkets from Australia to England now devote significant shelf space to organic produce, and organic certification schemes have emerged and flourished. There are many definitions of organic farming. A commonly accepted definition is "farming without the addition of artificial chemicals". An artificial chemical is one that has been manufactured or processed chemically; for example, superphosphate (one of the world's most important fertilisers). All kinds of agricultural products are produced organically – vegetables, fruit, grains, meat, dairy, eggs, and fibres such as cotton and wool. Many processed foods are also produced organically (e.g. bread). Most farms still operate as a monoculture (or series of monocultures), but problems associated with such practices are increasingly having an impact on the long-term financial viability of those farms. Some poly-culture options which have benefited farms to date include: 1. Trees in paddocks Growing trees in paddocks provides shade for animals, and eventually the trees can be harvested and sold for timber, woodchip or firewood. 2. Animals grazing in orchards Sheep, free-range poultry or other animals can be grazed below fruit or nut trees. Alternative farmers increase the efficiency of their land by diversifying harvestable product. One example is to have a marketable plant species (eg. pecan nut trees) and allow fowl to free-range beneath.
Both pecans and fowls are harvestable. It is a mutually beneficial arrangement: the trees provide shelter for the birds, and the birds return the favour by fertilising the trees and eating suitable insects/pests. To ensure the birds do not eat the nuts, they can be kept locked in pens until after harvest. Almost any animal can be used in this type of production system. Cows, sheep, deer, etc. are all possibilities. The farmer only needs to ensure the animals will not eat or destroy the trees/plants, and that the plants are not toxic to the animals. 3. Inter-row cropping Inter-row cropping involves establishing the principal crop in rows, then planting another crop between them. A slow crop such as corn may be planted with lettuce between the rows. Compatibility of the two crop species is important. Do they need the same amount of watering and fertilising? How will light affect the two crops? Is one root system more dominant than the other? These questions will need to be answered for each combination. Long-term fruit tree crops may be planted with vegetables between the rows. In this case, consider the spread of the tree roots. Most fruit trees do not appreciate root competition. The spacing used in most fruit tree orchards adequately allows for a row of vegetables to be planted. Biodynamic farming and gardening is a natural practice developed from a series of lectures given by Rudolf Steiner in 1924. It has many things in common with other forms of natural growing, but it also has a number of characteristics which are unique. It views the farm or garden as a "total" organism and attempts to develop a sustainable system, where all of the components of the living system have a respected and proper place. There is a limited amount of scientific evidence available which relates to biodynamics. Some of what is available suggests biodynamic methods do in fact work.
It will, however, take a great deal more research for mainstream farmers to become widely convinced of the effectiveness of these techniques, or indeed for the relative effectiveness of different biodynamic techniques to be properly identified. Principles of biodynamics: - Biodynamics involves a different way of looking at growing plants and animals. - Plant and animal production interrelate. Manure from animals feeds plants. Plant growth feeds the animals. - Biodynamics considers the underlying cause of problems and attempts to deal with those causes rather than dealing with superficial ways of treating problems. Instead of seeing poor growth in leaves and adding nutrients, biodynamics considers what is causing the poor growth – perhaps soil degradation or the wrong plant varieties – and then deals with that bigger question. - Produce is of better quality when it is "in touch" with all aspects of a natural ecosystem. Produce which is produced artificially (eg. battery hens or hydroponic lettuces) will lack this contact with "all parts of nature", and as such the harvest may lack flavour, nutrients, etc., and not be healthy food. - Economic viability and marketing considerations affect what is grown. - Available human skills, manpower and other resources affect what is chosen to be grown. - Conservation and environmental awareness are very important. - Soil quality is maintained by paying attention to soil life and fertility. - Lime, rock dusts and other slow acting soil conditioners may be used occasionally. - Maintaining a botanical diversity leads to reduced problems. - Rotating crops is important. - Farm manures should be carefully handled and stored. - Biodynamics believes there is an interaction between crop nutrients, water, energy (light, temperature), and special biodynamic preparations (ie. sprays) which results in biodynamically produced food having particularly unique characteristics. - Plant selection is given particular importance.
Generally, biodynamic growers emphasise the use of seed which has been chosen because it is well adapted to the site and method of growing being used. - Moon planting is often considered important. Many biodynamic growers believe better results can be achieved with both animals and plants if consideration is given to lunar cycles. They believe planting, for example, when the moon is in a particular phase can result in a better crop. Permaculture is a system of agriculture based on perennial, or self-perpetuating, plant and animal species which are useful to man. In a broader context, permaculture is a philosophy which encompasses the establishment of environments which are highly productive and stable, and which provide food, shelter, energy etc., as well as supportive social and economic infrastructures. In comparison to modern farming techniques practised in Western civilisations, the key elements of permaculture are low energy and high diversity inputs. The design of the landscape, whether on a suburban block or a large farm, is based on these elements. There are nine key guiding principles of permaculture design: 1. Relative location Place components of a design in a position which achieves a desired relationship between components. Everything is connected to everything else. 2. Multiple functions The designer will determine a number of different functions for a design (eg. produce fruit, provide shelter). When a design is prepared, each function is then considered one by one. In order to make the design achieve a "single" function, the designer must: - deal with several different components which influence that function - make different and distinct decisions about each of these components Every function is supported by many elements. 3. Multiple elements In permaculture, the term "element" is used to refer to the components of a design such as plants, earth, water, buildings.
A design must include many elements in the design to make sure functions are achieved. Every element should serve many functions. 4. Elevational planning The design must be on a 3-dimensional basis, giving consideration to length, width and height of all elements (ie. components). Particular emphasis is given to energy impacts. 5. Biological resources - Priority is to use renewable biological resources (eg. wood for fuel) rather than non-renewable resources (eg. fossil fuels). - Design so that biological resources are reproduced within the system. 6. Energy recycling - Energy use should be minimised. - Waste energy should be harvested (eg. often pollution can yield useable energy). - Design the system to optimise collection of energy by plants and animals (eg. using plants that catch light, produce bulk vegetation and then rot to provide a store of nutrients). This way energy is caught, stored and reused in the system. 7. Natural succession - Design in a way that plant and animal life is always rich by ensuring new organisms emerge as old ones die. 8. Maximise edges The edge of two different areas in a system has more things influencing it than other parts of the system. This is because there is greater diversity there, with components of two different areas having an effect. As such, design of an edge is more critical, and potential for an edge can be greater. 9. Diversity Design should be a poly-culture (i.e. a system where a greater number of species are growing together). This ensures greater biological stability. Design can be seen to have two elements: aesthetics and function. In other words, design (of any kind) can be influenced to varying degrees by the aesthetics or appearance of what you are trying to achieve; and/or by the function or purpose to be served by what you are trying to design. Permaculture concentrates on function and gives low priority to conventional ideas of aesthetics.
As such, a permaculture system does not need to look 'nice', but it does need to serve its intended purpose. Reference: Permaculture Design Course Handbook by Mollison et al. Crop rotation consists of growing different crops in succession in the same field, as opposed to continually growing the same crop. Growing the same crop year after year guarantees pests a food supply – and so pest populations increase. It can also lead to depletion of certain soil nutrients. Growing different crops interrupts pest life cycles and keeps their populations in check. Crop rotation principles can be applied to broad acre and row crops alike. The principles may even be applied to pastures. In the United States, for example, European corn borers are a significant pest because most corn is grown in continuous cultivation or in two-year rotations with soybeans. If the corn was rotated on a four or five year cycle, it is unlikely that corn borers would be major pests. This kind of system would control not only corn borers, but many other corn pests as well. In crop rotation cycles, farmers can also sow crops, such as legumes, that actually enrich the soil with nutrients, thereby reducing the need for chemical fertilisers. For example, many corn farmers alternate growing corn with soybeans, because soybeans fix nitrogen into the soil. Thus, subsequent corn crops require less nitrogen fertiliser to be added.
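The rotation described above can be sketched as a simple schedule: each field steps through the same crop sequence, offset by one year, so no field repeats a crop until the cycle completes and every crop is grown somewhere each season. This is a minimal illustrative sketch only; the function name and the four-crop sequence are hypothetical, not drawn from any particular farming system.

```python
# Minimal crop-rotation planner: each field cycles through the same
# crop sequence, offset by one position per field, so that within a
# full cycle no field repeats a crop and every crop is grown each year.
def rotation_plan(fields, crops, years):
    """Return {year: {field: crop}} for a staggered rotation."""
    plan = {}
    for year in range(years):
        plan[year] = {field: crops[(year + offset) % len(crops)]
                      for offset, field in enumerate(fields)}
    return plan

# A hypothetical four-year cycle on four fields, echoing the
# corn/soybean example with two extra break crops.
plan = rotation_plan(["A", "B", "C", "D"],
                     ["corn", "soybeans", "oats", "clover"], 4)
```

With this schedule, field A grows corn only once every four years, which is the kind of gap the text suggests would keep corn borers from becoming major pests.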
Children Activity 1: Bubbles By Kathy Fuller Guisewite Kathy Guisewite is a licensed minister in the Church of the Brethren and is a trained Spiritual Director. Most recently, Kathy was approved as a Pastoral Care Specialist through the American Association of Pastoral Counselors. Kathy is also one who loves exploring life and spiritual matters through various artistic ventures. The goal is to help children equate bringing joy through this activity with bringing joy to others through giving to One Great Hour of Sharing. Supplies needed: crayons and markers, scissors or cut paper, and other art supplies to make the cards nice, e.g. glitter, stickers, etc. Encourage the children to color and decorate cards or pictures. They could add bubbles, bright colors, abstract designs and lines. On each picture, they should write "Let's be like bubbles… floating into the lives of others, bringing delight and joy." Use folded paper so you can make each of the pictures into individual cards. It would be great to have one individual card for each member present in worship. During worship, someone should announce that the children would like to offer a time of joy after the service. Weather permitting, the children could gather outside to blow bubbles and pass out the cards to church members. As your church is comfortable, children could also share this activity in the fellowship hall or in the foyer. Blow some bubbles in class. Witness the delight! Talk with the children in Sunday School about bubbles, and how they are a gift that brings delight to others. Ask the children to share why they think bubbles are so much fun. Perhaps you will get answers like: they are pretty, you have to enjoy them while they last because they can disappear quickly, they are fun to make and try to blow as big as you can, they help us to look up and to keep looking for more. Let the children know that One Great Hour of Sharing is about bringing joy to others and helping them in many other important ways.
Let them know that today, they will have the chance to bring some joy to the people of this church. Children can decide/take turns blowing the bubbles and sharing the cards they make. Encourage the children to watch the faces of church members as they see the bubbles. This will help them gain a deeper understanding of how sharing blesses and brings joy to others. Children Activity 2: The Gifts of Sharing by Pam Auble Pam Auble is a graduate of Miami University in Oxford, Ohio and earned a Master of Christian Education from Garrett Evangelical Theological Seminary. She has served as a Diaconal Minister in the United Methodist Church and as a Licensed Pastor in the Christian Church, Disciples of Christ. Children may view themselves mainly as recipients of gifts, blessings, help and direction, especially in our churches. They see adults as the givers, the leaders in worship, in Sunday school, in choir, pretty much like the rest of their world. As their faith leaders we have a unique opportunity to redirect this view of themselves. They have immeasurable potential to share their natural tendencies towards love and acceptance. They can show God's love to others. They can be the givers. Help the children in your Sunday school classroom experience the joy of sharing themselves. Plan a situation in which they can offer themselves in service to the congregation, preparing them beforehand and following up with discussion that highlights the results. Here are a few ways your children can share: Make arrangements with the ushers to allow your children to distribute worship bulletins. Before they start, ask the children to look into the face of each person to whom they hand a bulletin. Ask them to share their smile and a simple greeting too. They can simply say "hi" or "good morning". Afterwards, talk with the children to see how the congregation reacted. Did the children make people feel happy? Did they smile? What did the children share?
(their smiles, their greetings, their time, their love) Another activity, in addition to or instead of the above idea: During the Sunday school time invite the children to make a small card or picture they can give to another person somewhere in the building that same morning. Tell the children their card ought to be a reminder of God's love, a happy card. When the cards are finished, prepare the children to deliver their gift. To whom, in the building, would they like to give their card? Ask them to watch the face of the gift recipient. Remind them to offer their card with a smile and a simple comment, such as "Hope this reminds you that God loves you", or, "I hope this gives you a big reason to smile today". After the cards have all been delivered, gather the children together and ask them if they made anyone happy by their gift. Ask them how sharing made them feel. Regardless of how the children share themselves, be sure they know that they are capable of sharing themselves. Be sure they realize how their actions can make others happy. And let them know that they are sharing God's love when they are kind to others. Together your class can read II Corinthians 9:7-8. Did they share cheerfully? What does it mean to have enough? To share abundantly? They may need to know that even if they didn't have anything else to hand out, they can still share their smiles. Help them to see that even if they ran out of everything else, they still had their love to share. If the class is inspired to continue sharing, plan another event for the following week. The children could bring in a gently used toy to share with a local service agency. They could collect canned food from their homes and their neighbors to bring to church the following week, to be given to a food pantry. Children will be full of ideas of how to share and will live this verse from Paul, being cheerful givers! By Bonnie Carenen Rev. Bonnie K.
Carenen works with Church World Service Indonesia as an advisor on disaster relief and development issues and is a lifelong member of the Christian Church (Disciples of Christ). The work of One Great Hour of Sharing enables Christians to respond to situations of crisis and great need, locally and globally. However, often when people and organizations try to help people and communities in need, they make the problems worse or create new injustices. This is one legacy of missionary colonialism, for example. As Christians we are called to help God’s people who are suffering, but when we respond we must do so responsibly. How is sharing a model of faithful, responsible response for God’s people? This activity raises youth sensitivity to political and economic disparities and reflects on individual choice and responsibility to live morally and ethically in a global context.
- A globe or a map of the world, such as the very colorful Peters Projection Map, which will make a great addition to the wall of a church youth room
- Foil star stickers or small stickers
- A sheet of paper for youth to take notes on, or a blackboard/dry erase board for a volunteer to take notes
- Dictionary or pre-printed definitions of terms (below)
Part One. 15-20 minutes. Read II Corinthians 9:6-15, paying special attention to Paul’s description of sharing, providing, and giving. Next define and discuss the following terms. Definitions may be provided, looked up in a dictionary, or found online with a smart phone.
Sharing or share: 1. To divide and parcel out in shares; apportion. 2. To participate in, use, enjoy, or experience jointly or in turns. 3. To relate (a secret or experience, for example) to another or others. 4. To accord a share in (something) to another or others.
Providing or provide: 1. To furnish; supply. 2. To make available; afford. 3. To set down as a stipulation. 4. Archaic: To make ready ahead of time; prepare.
Patronizing or patronize: 1. To act as a patron to; support or sponsor. 2. To go to as a customer, especially on a regular basis. 3. To treat in a condescending manner.
Exploitation or exploit: 1. The act of employing to the greatest possible advantage. 2. Utilization of another person or group for selfish purposes. 3. An advertising or a publicity program.
Stealing or steal: 1. To take (the property of another) without right. 2. To present or use (someone else's words or ideas) as one's own. 3. To get or take secretly or artfully.
Discuss who has power and who is vulnerable in these exchange relationships. How might people accidentally or intentionally confuse “helping others” responsibly with some of these terms? Can you think of biblical, historical or personal examples of each of these categories? What happened? What kind of consequences were there? How do you as an individual, your family or your church make decisions about what “helping others” should look like? Part Two. 15-20 minutes. Set up: Lay out the map or globe where everyone has access to it. Have participants investigate the tags on all their clothes and products (include all technology, glasses, shoes, food in the room, backpacks and bags, etc.) and see where they all came from. Most everything should be labeled if you look hard enough, unless the tag has been removed. List and locate on the map where every item came from and mark it with a sticker or a dot. If you don’t have a map, list each country in its geographic region. Mark every instance a country appears, whether once or twenty times. Discuss. 15-20 minutes. What patterns emerge? Where do most clothes come from? Where do most fine goods come from? What did it take to get things from there to here? Who was involved in that process? Think about the countries that appear on your list/globe.
- What do you know about them, culturally, economically, politically?
- Has that country been in the news for any reason?
- How difficult was it to find it on the map? 
If youth have smart phones, consider learning more about a couple of countries (maybe the country is notable because so many items come from there, or so few, or you’ve never heard of it before now). Do a quick internet search to see what you can find out about the country and its context. What do you think is the relationship between the people who produced the item, the people who sold the item, and the people who purchase the item? Of the categories listed above, which ones may come into play, and in what ways? Conclusion. 5 minutes. Consider the difference between Paul’s description of sharing and providing versus the power relationships that affect the rest of what we have, give, take, and share with others. Why do you think One Great Hour of Sharing and Week of Compassion emphasize sharing and partnership as the best way to give and receive? Invite a youth to close in prayer. Youth Activity 2 By Kathy Fuller Guisewite Activity: Most youth have ties to music in some form or another. Invite the youth to work (either individually, as teams, or as a large group) on listing as many songs or song lyrics as they can that address the topic of joy or sharing joy. It could prove beneficial to address the fact that you are not asking for the word ‘joy’ to be present in the song, only the concept of “sharing joy.” Have some discussion around what that might look like. Songs to get them started could include: James Taylor’s “Shower the People You Love with Love”; “Happy Birthday to You!”; the Magic Penny Song; various camp songs; current pop songs as appropriate. To further elaborate on this idea, the youth could then take the generated list of songs/lyrics and put them together in a sort of story or poem to be shared in worship or simply added to the bulletin.
In May of the same year, at a church conference held in Philadelphia, Mr. Little made the acquaintance of the Kanes. They were an old and honorable Pennsylvania family. The father, Judge John K. Kane, had been attorney general of the state of Pennsylvania; and at the time of Mr. Little's visit at his home he was United States judge for the district of Pennsylvania, also President of the American Philosophical Society. Dr. Elisha Kent Kane, the famous arctic explorer and scientist, was his son; as was also Thomas L. Kane, who afterward served with distinction as Colonel and Brigadier General in the Union Army in the war between the states. From the latter Mr. Little received a letter of introduction to Hon. Geo. M. Dallas, Vice-President of the United States. He visits Washington, said Kane's letter to Mr. Dallas, with no other object than the laudable one of desiring aid of the government for his people.
The Cranmoor area of central Wisconsin is the principal cranberry producing area of the State. Cranberries are grown in only about 2.5 square miles of an 80-square-mile marsh and swamp in the Cranberry Creek basin. Cranberry growers have built reservoirs and ditches throughout 25 square miles of marsh for better management of the area's natural water supply. Additional water is diverted into the basin to supplement the cranberry needs. In the 1966-67 hydrologic budget for Cranberry Creek basin, annual inputs were 27.8 inches of precipitation, 3.8 inches of surface-water diversion into the basin, and 1.1 inches decrease in stored water. Annual outputs were 20.8 inches of evapotranspiration, 11.7 inches of runoff, and 0.2 inch of groundwater outflow. During the 1966-67 period, precipitation averaged about 3 inches per year below normal. The water used for cranberry culture is almost exclusively surface water. Efficient management of the basin's water supply, plus intermittent diversions of about 100 cubic feet per second from outside the basin, provide cranberry growers with a sufficient quantity of water. Although the quantity of surface water is adequate, the pH (generally 5.7-6.7) is slightly high for optimum use. Dissolved oxygen is slightly low, generally between 4 and 10 milligrams per liter. The water is soft; iron and manganese contents vary seasonally, being high in winter and summer and low in spring. Additional supplies of surface water can be obtained by increasing diversions from outside the basin and by increasing reservoir capacity within the basin. Ground water, although not presently used for cranberries, is available in the central, southern, and eastern parts of the basin, where the thickness of the saturated alluvium exceeds 50 feet. Well yields in these areas might be as much as 1,000 gpm (gallons per minute). Additionally, well yields of as much as 1,000 gpm may be expected from saturated alluvium southeast of Cranberry Creek basin. 
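As a quick sanity check, the budget figures quoted above balance: total inputs (precipitation, diversion in, and the draw-down of storage) equal total outputs (evapotranspiration, runoff, and groundwater outflow), both coming to 32.7 inches over the basin. A minimal sketch, using only the numbers stated in the report:

```python
# Hydrologic budget for Cranberry Creek basin, 1966-67.
# All quantities are in inches of water over the basin area.
inputs = {
    "precipitation": 27.8,
    "surface-water diversion into basin": 3.8,
    "decrease in stored water": 1.1,
}
outputs = {
    "evapotranspiration": 20.8,
    "runoff": 11.7,
    "groundwater outflow": 0.2,
}

total_in = sum(inputs.values())
total_out = sum(outputs.values())

# Both totals come to 32.7 inches, so the budget closes.
print(f"inputs = {total_in:.1f} in, outputs = {total_out:.1f} in")
```

The closure of the budget is what justifies treating the listed terms as a complete accounting of the basin's water for that period.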
Where saturated alluvium is less than 50 feet thick, in the northern and western parts of the basin, well yields generally are less than 50 gpm. Ground water is also available from sandstone in the western part of the basin. Where the sandstone is thickest (about 60 feet), well yields may be as much as 200 gpm. The quality of ground water is similar to that of surface water. The pH of water from the shallow alluvium ranges between 6.0 and 6.6; the pH of water from the deep alluvium is about 7.0. Ground water is soft to moderately hard (hardness of 22 to 88 milligrams per liter) and contains excessive amounts of iron and manganese. Additional Publication Details: USGS Numbered Series, "Water for cranberry culture in the Cranmoor area of central Wisconsin"
Title: Encyclopedia of Psychotherapy Authors: Hersen M. (ed.), Sledge W. (ed.) Psychotherapy is the dialogue between patient and therapist in the diagnosis and treatment of behavioral, crisis, and mental disorders. Psychoanalysis as formulated by Sigmund Freud is the first modern form of psychotherapy, and this approach has given rise to several score psychodynamic therapies. In more recent times behavioral, cognitive, existential, humanistic, and short-term therapies have been put into practice, each with a particular focus and each giving rise to variations in structure and content of treatment as well as therapeutic outcomes. These therapy approaches relate the patient/therapist dialogue to different aspects of the therapeutic process. For instance, behavior therapies focus on the patient's conduct and cognitive therapies treat the client's thought processes. The Encyclopedia covers the major psychotherapies currently in practice as well as the classical approaches that laid the foundation for the various contemporary treatment approaches. In addition, the Encyclopedia identifies the scientific studies conducted on the efficacy of the therapies and reviews the theoretical basis of each therapy.
November 4, 2013 Old school: U-M in History Professor Elzada Clover poses for a photographer at the start of her trip with the Nevills Expedition. Photo courtesy Special Collections Department, Marriott Library, University of Utah This year in history (75 years ago) When Professor Elzada Clover set out to explore botanical specimens along the Colorado River in 1938, she would not only discover new plants to elevate U-M’s botanical gardens and broaden collections at the Smithsonian Institution. She would also make history as the first woman to navigate the notoriously treacherous Colorado.
1. Web sites on how to design writing assignments to elicit effective student writing: 3. The HarvardWrites Instructor Toolkit is "a toolkit for assigning, teaching, and evaluating writing with in-class exercises to help students understand your feedback and improve their writing." 5. The Consortium on Graduate Communication has created the following bibliography "to support instructors who are selecting textbooks or creating or revising curricula or materials for their graduate communication courses." There are many approaches to commenting on student writing, with different effects on students and different requirements of faculty. Ideally, responding to student writing offers constructive feedback to students without being burdensome to faculty. Web sites on how to respond to student writing: Handouts & guides:
Copper urine test The copper urine test is performed by collecting urine at specific times for a 24-hour period. The urine is tested for the amount of copper present. The copper urine test is used to determine the presence of Wilson disease, a sometimes fatal condition in which the buildup of excess copper damages the liver, and eventually the kidneys, eyes and brain. Last reviewed 1/4/2013 by Chad Haldeman-Englert, MD, Wake Forest School of Medicine, Department of Pediatrics, Section on Medical Genetics, Winston-Salem, NC. Review provided by VeriMed Healthcare Network. Also reviewed by A.D.A.M. Health Solutions, Ebix, Inc., Editorial Team: David Zieve, MD, MHA, David R. Eltz, and Stephanie Slon. - The information provided herein should not be used during any medical emergency or for the diagnosis or treatment of any medical condition. - A licensed medical professional should be consulted for diagnosis and treatment of any and all medical conditions. - Call 911 for all medical emergencies. - Links to other sites are provided for information only -- they do not constitute endorsements of those other sites.
August 21, 2012 Bicyclists: Drive Safe and Be Aware – Share the Road With the start of fall semester, Parking and Transportation Services (PTS) is urging cyclists, pedestrians and motorists to use caution when interacting with each other. As part of the Share the Road program, we are sending safety tips to both cyclists and drivers alike. PTS, in conjunction with the Lexington-Fayette Urban County Government, Fayette County Public Schools, Lexington Police and the Kentucky Transportation Cabinet, wants to promote safety on our roads. Cyclists must be aware of motorists and pedestrians. Below is a list of cycling tips that will help keep the road a safe way to travel: - Cyclists and motorists have the same rights, rules and responsibilities on Kentucky roads. - Ride a bicycle in the same direction as vehicle traffic. - Make eye contact with motorists – be sure you are seen. - Use extra caution when changing lanes, and use appropriate lanes when turning. - Use hand signals to alert motorists of your intentions when changing lanes and turning. - Ride in a straight path. Do not weave in and out of parked or stopped vehicles. - Ride a well-equipped bike with a headlight, taillight, bell, and rearview mirror. - Use the travel lane farthest to the right unless you are turning left or passing another vehicle. - Proceed with care through intersections and yield to pedestrians. For more information about sharing the road with motorists and pedestrians, visit http://sharetheroad.ky.gov. When we all drive safely and are considerate of others, it is easy to Share the Road! UK Parking & Transportation Services is on Twitter! To follow news on parking and transportation issues on campus, including bikes, go to http://twitter.com/UKParking.
Faculty Profile: Tim Kidd Making Hydrogen a Viable Fuel: The Search for Hydrogen Storage Materials Tim Kidd, assistant professor of physics, and the team he is leading are on a quest for new materials for storing hydrogen. As an energy source, hydrogen has the advantages of producing no greenhouse gases or particulates while at the same time being extremely versatile in terms of its uses. On the down side, hydrogen is hard to control and very dilute, so high-density storage systems must be developed to realize its potential. The team is investigating layered materials, so called because under a microscope they look like a stack of tissue paper. Scientists find these materials intriguing because things can be placed within the layers, or scaffolding. Kidd and his group are trying to find different ways that hydrogen can be placed in between the stacks, with the storage at a high enough density to be useful for something like a hydrogen-powered car or generator or for capturing excess energy from, say, a coal plant and storing it as hydrogen. The research team consists of three faculty members in addition to Kidd. Paul Shand (physics) has led magnetic investigations and has made some very exciting discoveries concerning the magnetic behavior of these materials. Laura Strauss (chemistry and biochemistry) has led efforts in the growth and modification of the materials, leading to the discovery of how to create these materials in the form of nanotubes. Mike Roth has performed a series of computer simulations attempting to discover the fundamental properties of these materials and how these properties relate to the measured effects seen in experiments. Together, the faculty and their many undergraduate research assistants have made great strides in understanding and developing new layered materials. One area of high interest is magnetism. The layered materials themselves are not magnetic, but magnetic materials such as iron, cobalt or nickel can be placed inside of them. 
The magnetic properties of the resulting layered materials can then be controlled by choosing the type and quantity of the magnetic metals inserted, with the goal of developing new types of magnets for applications in electric motors or generators. Kidd attributes the group's ability to produce unusual findings in part to the research equipment purchased through more than $1 million in grants over the past three years. This cutting-edge equipment enables the researchers to work together and combine their individual areas of expertise to efficiently develop new materials for study. Kidd anticipates that by the end of this year or the next the group will have met its objectives for the hydrogen storage project; the scientists should know if it's scientifically viable, even if they don't know if it's economically feasible. Right now the researchers are trying to make sure they have something that will work outside of the laboratory. Possible applications for systems powered using the hydrogen storage materials developed in this project include buses, forklifts or tractors for industrial use. These materials could also enhance wind and solar power generation, both of which generate energy sporadically, so the materials could store energy for later use. This is true even for nuclear and coal plants, which, to run efficiently, must produce a constant supply of energy, leading to overproduction of energy. Over the past five years, Kidd has worked with more than 30 student research assistants, mostly undergraduates, as well as a few graduate students and high school science teachers. (The latter were a part of a federal program to encourage these teachers to engage in research.) All of his student research assistants learn that scientific research is not a neat and clean process. "When something has never been done before, you're not sure what the results will be," Kidd explained. 
"Sometimes where you get to is much more interesting than where you thought you were going to be."
As more and more people are beginning to say YES to more natural alternatives for their health and wellness, essential oils are becoming the answer that many have been looking for. Although there is definitely a place for modern medicine, we are learning that a number of common ailments and physical discomforts can be managed naturally with essential oils, and this "trend" isn't going away anytime soon. So, what exactly are essential oils? They are aromatic compounds found in plants that help the plants survive, ward off pests and stay healthy. These oils can be found in the roots, stems, bark, leaves, flowers and seeds of plants and have a variety of physical, mental and emotional benefits. Essential oils are very potent, meaning that they are quite strong and only need to be used in small amounts. They are much more potent than herbs, so a little goes a long way. As an example, did you know that 1 drop of peppermint oil is equivalent to 28 cups of peppermint tea? These oils are powerful, yet they can do so much! Before using essential oils, it's important to understand how to choose a good quality oil. This is important because, unfortunately, there isn't a regulating body overseeing the ingredients that go into the bottles that you see on the shelves. This means that consumers often assume that they are purchasing 100% "pure" essential oils when, in fact, most of the ingredients in the bottle are synthetic (fake) ingredients, toxins and fillers. A good quality essential oil is sourced from the part of the world where the plant naturally grows and thrives. The soil conditions and climate are ideal for the plant to survive, and the farmers are intimately connected with these plants: how to take care of them, when to harvest, etc. Make sure that the oil has been tested multiple times, to ensure that there are no fillers, toxins or foreign material that would compromise the oil. 
Third party testing ensures unbiased protocols and that the oils that travel to the consumer are safe, pure and therapeutic. High-quality oil is potent and is to be used sparingly as a result of its strength. While some brands of essential oils are "watered down" and diluted with fillers and other synthetic ingredients, real pure essential oils are full strength and don't require as many drops in order to enjoy similar effects. So, what are some of the benefits of essential oils? There are many more benefits than what I am able to share here, but here are some of the more common ways in which these oils support health and wellness:
- Lessens and/or eliminates pain
- Supports focus and concentration
- Relieves sore joints and muscles
- Replaces household cleaners and skin care products
Think about some of the daily products that you use for some of these examples. In which areas would you be open to trying a more natural approach? The great thing about these oils is that there are no side effects, just side benefits, so your entire body reaps the rewards of these beautiful oils. I encourage you to learn more about how essential oils can support a greater sense of well-being in your life and in your home. You'll be glad you did!
The Packard Proving Grounds was a proving ground established in Shelby Charter Township, Michigan in 1927 by the Packard Motor Car Company of Detroit. It is listed in the National Register of Historic Places. History: Packard had been founded in Warren, Ohio in 1899 by brothers James Ward Packard and William Doud Packard. The company attracted several investors from Detroit, and by 1903 the Michigan investors had convinced the Packard brothers to let them relocate the young business to the emerging motor capital of Detroit. The Packard automobile quickly evolved into a superbly engineered prestige vehicle. To maintain and advance their product position, Packard’s general manager, Henry Bourne Joy, sought to establish a dedicated testing facility. Testing on local streets and roads was risky due to traffic, and could potentially expose Packard’s future product developments to curious competitors. An early attempt to locate a testing facility north and east of Detroit near the city of Mount Clemens was not approved by the Packard board of directors. A site had already been acquired by Henry Joy, but the 640-acre parcel was deemed not to have enough topographic diversity to allow for such things as hill testing. At the dawn of America’s entry into World War I, Joy leased and eventually sold the site to the U.S. Government for use as a training airfield. The main access road to what was to become Selfridge Field was named Henry B. Joy Boulevard in honor of Joy.
Taj Mahal: the monument of love. Many of us have seen the stunning image of the Taj Mahal on numerous occasions, but not everyone may be aware of its original purpose. In fact, the Taj Mahal is a tomb built by the emperor Shah Jahan in honour of his favourite wife following her death. Considered one of the most beautiful monuments of the world, it is no surprise that it has been elected as one of the new Seven Wonders of the World. Without any doubt, you will be impressed by the luxury, grandiosity and beauty of this mausoleum built more than three hundred years ago. The mausoleum is located in Agra, on the banks of the Jumna river. Agra is a city about 200 kilometres south of Delhi, in India. The history of the monument goes back to 1607, when a young prince of 15 met a 14-year-old Persian woman called Mumtaz Mahal. The young prince fell in love with Mumtaz Mahal in the market. However, the strict laws of the state forbade the two from seeing each other for 5 years, until the Muslim law had changed. Thereafter, the couple married in 1612 and enjoyed 19 years of marriage. Sadly, Mumtaz Mahal died after giving birth to their 14th child. Her final wish was to have a tomb built for her. The construction of the tomb commenced in 1632, with completion in 1654. Controversy surrounds the construction of the Taj Mahal, as the Prince lost his throne during the process. In short, this history of love and grief has left us with one of the most beautiful monuments in the world: the Taj Mahal. The best way to reach this amazing monument is to fly into New Delhi. Following your arrival there are a number of options you can choose from, including renting a car or travelling on public transport by bus or train. Opening hours of the Taj Mahal are from sunrise to sunset, so they vary day by day. The monument is also closed on Fridays. When planning your visit, be sure to check the website to schedule your itinerary accordingly. 
InsureandGo recommend that you give yourself plenty of time to arrive at the Taj Mahal to view the sunrise, as the traffic in Agra and the queues can create delays. And you don’t want to miss the amazing views at sunrise! According to legend, the emperor intended to build a black mausoleum with the same characteristics as the Taj Mahal, facing it, in which to bury his own body after his death. However, as the emperor lost power, this construction did not progress. Eventually, when he did die, his body was buried next to his wife's. Also, amazingly, some 1,000 elephants and 20,000 workers contributed to the physical construction of the Taj Mahal. Do you need travel insurance in India? Here are some great reasons why you should InsureandGo:
Scientists turn stem cells into sperm cells BY Advocate.com Editors September 17 2003 12:00 AM ET Scientists in Japan have for the first time transformed mouse embryonic stem cells into sperm cells, The Wall Street Journal reports. Researchers have previously been able to turn stem cells into human egg cells, but developing sperm cells was considered much more difficult. Writing in the online edition of the Proceedings of the National Academy of Sciences, researchers from the Mitsubishi Kagaku Institute of Life Science report that they were able to use embryonic stem cells and cells producing a protein that stimulates sperm development to transform the stem cells into sperm cells. The researchers then implanted the cells into male mice to expose them to male hormones to complete their development--a process that may eventually be conducted outside the body in the laboratory. The sperm cells were shown to be active and successfully fertilized mouse eggs. The research was conducted only on male stem cells, and it's not clear yet if the process will work with female stem cells, the scientists said. If research pans out that does allow the creation of human male sperm cells from female stem cells, it may be possible for lesbian couples to have children that are genetically theirs, with one partner donating an egg and the other donating stem cells that will be converted into sperm. Research is also continuing by scientists in Philadelphia and France on developing human egg cells from male stem cells, which could someday allow gay-male couples to conceive children that have the genes of both partners.
There’s something about water that draws and fascinates us… We know instinctively that being by water makes us healthier and happier, reduces stress, and brings us peace. When humans think of water, or hear water, or see water, or get in water, even taste and smell water — they feel something. The sensory experience of the sand between our toes, the sound of the waves crashing down, the sight of the open, vast ocean in front of us, and the saltiness in the air always bring this feeling of adventure and aliveness – where everything seems possible. But what is it about water that makes us feel this way? Wallace Nichols is a marine biologist who has dedicated his life to understanding the effects of water on our brain and why it is so important to take care of our water systems and the environment. Nichols says we all naturally have a “blue mind”, which is “a mildly meditative state characterized by calm, peacefulness, unity, and a sense of general happiness and satisfaction with life in the moment.” Indeed, there is real science and research on the effects of water on us. Based on Nichols’ studies, here are five reasons why water does a brain good: 1. Water Is Relaxing For Our Brain All day long our brains are downloading information from the sensory world. But just like any muscle you use repetitively, your brain needs time to rest so it can recover properly and stay healthy. Water gives your brain that much needed break. The sound and sight of water are much simpler for your brain to process than much of what you see and hear each day. City noises and television generate high levels of information for your brain to process. 2. Water Can Be Meditative For Our Brains Since the sound and sight of the ocean are easy for the brain to process, they can create a soft focus, just like when you are focused on your breath in yoga or in a mindfulness practice. 
Because your brain changes gears, it can enter into a different state of awareness. Being in this relaxed, meditative state has similar effects to mindfulness, which has been linked to lower stress levels, improved mental clarity, relief from mild anxiety, and improved mood. 3. Water Can Inspire Us To Be Better People The restful and contemplative effect that the ocean has on your brain makes it easy for you to experience a state of awe. When you experience an overpowering feeling of reverence at the vastness of the ocean and the majestic beauty around you, your brain can easily switch gears and change from an egocentric “I” orientation into a “we” orientation. When we disconnect from our separateness and tap into our universal human experience, we more easily align with states of empathy and connection to others. 4. Water Can Inspire Creativity Since the sound and sight of water allow the brain to relax, it is easy for your brain to make new neural connections. With these new connections, you may think about a certain situation from a different vantage point. You get out of your mental rut and become more creative. Quite literally, water grows your brain. 5. Exercising By The Ocean Can Be Beneficial We all know exercising is a natural and well-documented way to reduce stress and stay healthy. Both our bodies and our mood get a boost. Nichols says you may get an extra boost from being near water or the ocean when exercising, thanks to the relaxed state (blue mind) that you’re experiencing. Exercising out in nature, especially by water, is a different environment for your brain than being inside a gym where there is loud music playing, TV screens, and loads of people.
A small DNA-testing company that just months ago was trying to get its footing in consumer genetics is now part of an effort to make U.K. hospitals safer during the pandemic. The company, DnaNudge, won a 161-million-pound ($211 million) order for 5,000 machines and a supply of cartridges to test patients for the new coronavirus in hundreds of National Health Service hospitals. For founder Christofer Toumazou, a professor at Imperial College London, it’s the culmination of months of efforts to retool a toaster-size machine he originally developed to analyze key bits of people’s DNA so users could tailor their diet to their heredity. Now his lab-in-a-box will be used to see whether patients arriving at hospitals for surgery, cancer treatment and other procedures harbor COVID-19 — an unexpected detour in his contribution to the consumer genetics revolution. “We could be entering a very new world when we come out on the other side of this pandemic,” Toumazou said in an interview. His machine, the Nudgebox, delivers a result in 90 minutes on the spot — no need to ship samples to a lab — based on either a nose swab or some saliva. It can also identify the flu and another common lung ailment known as respiratory syncytial virus. The U.K. government this month also ordered 450,000 rapid tests from DNA testing company Oxford Nanopore Technologies. Innovative diagnostics are the latest examples of British science being deployed to fight the pandemic, along with the coronavirus vaccine being developed by the University of Oxford and AstraZeneca Plc and an Oxford study that established the life-saving potential of a cheap anti-inflammatory drug called dexamethasone. Coronavirus testing has become a sensitive topic in the U.K. after early efforts to speed up diagnosis floundered. Thousands of tests ordered last spring turned out to be flawed, preventing the scaling up of detection envisioned by Prime Minister Boris Johnson’s government. 
Britain has suffered more than 46,000 deaths, the most of any European country. Graham Cooke, a professor of infectious diseases at Imperial College London, said he was doubtful at first that Toumazou’s device would be useful, but that it held up well under scrutiny. “If you have someone coming in and you’re not sure if they have COVID, you can make a decision about where they should go,” he said. “You don’t want to put the wrong person in the wrong place.” Some of the Nudgeboxes ordered have already been rolled out in eight London hospitals and health-care centers, where doctors and nurses can use them to quickly determine whether new patients should be isolated. DnaNudge may go public in a year or so, according to Toumazou. As Toumazou, 59, watched the pandemic unfold and overwhelm NHS resources, his greatest worries were for his children, one of whom is immuno-compromised and would be at high risk if he caught COVID-19. But his thoughts also kept going back to the box whose technology was lying fallow as a result of the crisis. So he went to his biggest investor, former Thai prime minister and mobile-phone magnate Thaksin Sinawatra, who agreed to plow more cash into the business to fund the transition. The box went from being able to analyze human DNA to the narrower task of recognizing the genetic blueprint of the SARS-CoV-2 virus. The rejig also added a feature that ensures a proper sample has been taken — meaning it’s easy to know whether a patient needs to be retested. Toumazou said little in his background prepared him for the fields of health or science. The son of a Greek Cypriot-immigrant family that owned restaurants in England, he saw his outlook changed by an uncle who was an engineer. “He inspired me,” Toumazou recalled. “At that time, Greek families were either in restaurants or hairdressing, and my family were in catering. 
I wasn’t really meant for engineering.” While his school didn’t offer the exams that allow access to the U.K.’s top colleges, Toumazou enrolled in an electrical engineering program at what was then called Oxford Polytechnic.
‘Marriage in heaven’
There, he and his instructor John Lidgey began working on a new kind of circuit that drastically reduced the amount of power needed. As a research fellow at Imperial College London, he became the institution’s youngest person to be promoted to professor, at age 33. He began using the technology in a variety of applications, including mobile phones and eventually implanted prostheses for deaf children, and became interested in the connection between tech and genetics. Now Toumazou spends most of his days at NHS hospitals in London and Oxford, overseeing the use of Nudgeboxes for COVID. They’re performing hundreds of tests each day, and he still sees more opportunities for expanding applications. The devices could be used for quick testing in airports or businesses when people come down with symptoms, for example, or to quickly check volunteers for vaccine trials. And there’s also the possibility of going back to DnaNudge’s original mission — helping people match foods to their genetic predisposition — to avoid diabetes, kidney disease and other conditions that might make them more vulnerable to COVID. “My dream has been to bring testing like this to the consumer,” Toumazou said. “A test that can demystify and simplify that quickly — rather than leaving people in doubt — is going to be very useful.”
I learned of this thanks to a tweet from @OFPC: MRSA (methicillin-resistant Staphylococcus aureus) infections continue to be a growing public health issue, both hospital-acquired and community-acquired. These guidelines come from the Infectious Diseases Society of America (IDSA). The article is a 38-page document (pdf file, full reference below); the last 10 pages are supporting references. The major performance measures are:
1. The management of all MRSA infections should include identification, elimination and/or debridement of the primary source and other sites of infection when possible (eg, drainage of abscesses, removal of central venous catheters, and debridement).
2. In patients with MRSA bacteremia, follow-up blood cultures 2–4 days after initial positive cultures and as needed thereafter are recommended to document clearance of bacteremia.
3. To optimize serum trough concentrations in adult patients, vancomycin should be dosed according to actual body weight (15–20 mg/kg/dose every 8–12 h), not to exceed 2 g per dose. Trough monitoring is recommended to achieve target concentrations of 15–20 µg/mL in patients with serious MRSA infections and to ensure target concentrations in those who are morbidly obese, have renal dysfunction, or have fluctuating volumes of distribution. The efficacy and safety of targeting higher trough concentrations in children require additional study but should be considered in those with severe sepsis or persistent bacteremia.
4. When an alternative to vancomycin is being considered for use, in vitro susceptibility should be confirmed and documented in the medical record.
5. For MSSA infections, a β-lactam antibiotic is the drug of choice in the absence of allergy.
Their recommended management of skin and soft-tissue infections: For a cutaneous abscess, incision and drainage is the primary treatment.
- For simple abscesses or boils, incision and drainage alone is likely to be adequate. Simple boils most likely DON’T need antibiotics.
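The weight-based vancomycin dosing in performance measure 3 reduces to simple arithmetic. A minimal sketch follows; the patient weights used are hypothetical examples, not from the guideline.

```python
# Weight-based vancomycin dosing per the IDSA measure above:
# 15-20 mg/kg/dose (actual body weight), not to exceed 2 g (2,000 mg) per dose.

def vancomycin_dose_mg(actual_body_weight_kg: float, mg_per_kg: float) -> float:
    """Single vancomycin dose in mg, capped at the 2,000 mg maximum."""
    if not 15 <= mg_per_kg <= 20:
        raise ValueError("guideline range is 15-20 mg/kg/dose")
    return min(actual_body_weight_kg * mg_per_kg, 2000)

print(vancomycin_dose_mg(70, 15))   # 1050 mg for a 70 kg adult at 15 mg/kg
print(vancomycin_dose_mg(140, 20))  # 2800 mg would exceed the cap -> 2000 mg
```

Note how the cap, not the per-kilogram rate, determines the dose for heavier patients at the upper end of the range.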
Antibiotic therapy is recommended for abscesses associated with the following conditions:
- severe or extensive disease (eg, involving multiple sites of infection)
- rapid progression in presence of associated cellulitis
- signs and symptoms of systemic illness
- associated comorbidities or immunosuppression
- extremes of age
- abscess in an area difficult to drain (eg, face, hand, and genitalia)
- associated septic phlebitis
- lack of response to incision and drainage alone
Antibiotic therapy for outpatients
All antibiotic therapy should be individualized based on the patient’s clinical response.
Patients with purulent cellulitis: empirical therapy for CA-MRSA is recommended pending culture results. Five to 10 days of therapy is recommended.
Patients with nonpurulent cellulitis: empirical therapy for infection due to β-hemolytic streptococci is recommended. Empirical coverage for CA-MRSA is recommended in patients who do not respond to β-lactam therapy and may be considered in those with systemic toxicity. Five to 10 days of therapy is recommended.
For empirical coverage of CA-MRSA in outpatients with SSTI, oral antibiotic options include the following:
- clindamycin, trimethoprim-sulfamethoxazole (TMP-SMX), a tetracycline (doxycycline or minocycline), and linezolid.
If coverage for both β-hemolytic streptococci and CA-MRSA is desired, options include the following:
- clindamycin alone, or TMP-SMX or a tetracycline in combination with a β-lactam (eg, amoxicillin), or linezolid alone.
Antibiotic therapy for hospitalized patients with complicated SSTI (cSSTI) -- defined as deeper soft-tissue infections, surgical/traumatic wound infections, major abscesses, cellulitis, and infected ulcers and burns. In addition to surgical debridement and broad-spectrum antibiotics, empirical therapy for MRSA should be considered pending culture data. As with outpatients, all antibiotic therapy should be individualized based on the patient’s clinical response.
Options include the following:
- intravenous (IV) vancomycin, oral (PO) or IV linezolid 600 mg twice daily, daptomycin 4 mg/kg/dose IV once daily, telavancin 10 mg/kg/dose IV once daily, and clindamycin 600 mg IV or PO 3 times a day.
- A β-lactam antibiotic (eg, cefazolin) may be considered in hospitalized patients with nonpurulent cellulitis, with modification to MRSA-active therapy if there is no clinical response. Seven to 14 days of therapy is recommended.
For children with minor skin infections (such as impetigo) and secondarily infected skin lesions (such as eczema, ulcers, or lacerations), mupirocin 2% topical ointment can be used.
Patient education is the “heart” of preventing recurrence.
Management of recurrent MRSA SSTIs
Preventive educational messages on personal hygiene and appropriate wound care are recommended for all patients with SSTI. Instructions should be provided to:
- Keep draining wounds covered with clean, dry bandages.
- Maintain good personal hygiene with regular bathing and cleaning of hands with soap and water or an alcohol-based hand gel, particularly after touching infected skin or an item that has directly contacted a draining wound.
- Avoid reusing or sharing personal items (eg, disposable razors, linens, and towels) that have contacted infected skin.
Environmental hygiene measures should be considered in patients with recurrent SSTI in the household or community:
- Focus cleaning efforts on high-touch surfaces (ie, surfaces that come into frequent contact with people’s bare skin each day, such as counters, door knobs, bath tubs, and toilet seats) that may contact bare skin or uncovered infections.
- Commercially available cleaners or detergents appropriate for the surface being cleaned should be used according to label instructions for routine cleaning of surfaces.
When decolonization is deemed appropriate (ie, prior to elective surgery):
- Nasal decolonization with mupirocin twice daily for 5–10 days.
Nasal decolonization with mupirocin twice daily for 5–10 days and topical body decolonization regimens with a skin antiseptic solution (eg, chlorhexidine) for 5–14 days, or dilute bleach baths. (For dilute bleach baths, 1 teaspoon per gallon of water [or ¼ cup per ¼ tub, or 13 gallons, of water] given for 15 min twice weekly for 3 months can be considered.)
Screening cultures prior to decolonization are not routinely recommended if at least 1 of the prior infections was documented as due to MRSA. Surveillance cultures following a decolonization regimen are not routinely recommended in the absence of an active infection. Oral antimicrobial therapy is recommended for the treatment of active infection only and is not routinely recommended for decolonization.
There is much more in the guidelines. I have focused only on the skin and soft-tissue areas.
CAMRSA: Dx and Tx Update for Plastic Surgeons – an Article Review (January 8, 2009)
Revisit of Community Acquired MRSA -- Prevention Tips (October 17, 2007)
Clinical Practice Guidelines by the Infectious Diseases Society of America for the Treatment of Methicillin-Resistant Staphylococcus Aureus Infections in Adults and Children; Catherine Liu, Arnold Bayer, Sara E. Cosgrove, Robert S. Daum, Scott K. Fridkin, Rachel J. Gorwitz, Sheldon L. Kaplan, Adolf W. Karchmer, Donald P. Levine, Barbara E. Murray, Michael J. Rybak, David A. Talan, and Henry F. Chambers; Clin Infect Dis. (2011) doi: 10.1093/cid/ciq146; first published online: January 4, 2011
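The two dilutions quoted above for dilute bleach baths (1 teaspoon per gallon, or ¼ cup per 13-gallon quarter tub) can be checked against each other. A quick unit-conversion sketch, using US measures:

```python
# Sanity check of the two dilute-bleach-bath dilutions quoted above.
TSP_PER_CUP = 48  # 1 US cup = 48 US teaspoons

tsp_per_gallon_a = 1.0                        # "1 teaspoon per gallon"
tsp_per_gallon_b = (0.25 * TSP_PER_CUP) / 13  # "1/4 cup per 13 gallons" = 12 tsp / 13 gal

print(round(tsp_per_gallon_b, 2))  # 0.92 -> roughly the same 1 tsp/gallon
```

So the quarter-tub recipe is marginally weaker (about 0.92 teaspoons per gallon), consistent with the 1 teaspoon-per-gallon rule of thumb.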
According to the American College of Medical Genetics (ACMG), an important issue in genetic testing is defining the scope of informed consent. The obligation to counsel and obtain consent is inherent in the clinician-patient and investigator-subject relationships. In the case of most genetic tests, the patient or subject should be informed that the test might yield information regarding a carrier or disease state that requires difficult choices regarding their current or future health, insurance coverage, career, marriage, or reproductive options. The objective of informed consent is to preserve the individual's right to decide whether to have a genetic test. This right includes the right of refusal should the individual decide the potential harm (stigmatization or undesired choices) outweighs the potential benefits. DNA-based mutation analysis is not covered for routine carrier testing for the diagnosis of Tay-Sachs and Sandhoff disease. Under accepted guidelines, diagnosis is primarily accomplished through biochemical assessment of serum, leukocyte, or platelet hexosaminidase A and B levels. The literature states that mutation analysis is appropriate for individuals with persistently inconclusive enzyme-based results and to exclude pseudo-deficiency (non-disease related) mutations in carrier couples. Testing of a member who is at substantial familial risk for being a heterozygote (carrier) for a particular detectable mutation that is recognized to be attributable to a specific genetic disorder is only covered for the purpose of prenatal counseling under plans with this benefit (see CPB 0189 - Genetic Counseling). Confirmation by molecular analysis of inborn errors of metabolism by traditional screening methodologies (e.g., Guthrie microbiologic assays) is covered. Rigorous clinical evaluation should precede diagnostic molecular testing. 
In many instances, reliable mutation analysis requires accurate determination of specific allelic variations in a proband (affected individual in a family) before subsequent carrier testing in other at-risk family members can be accurately performed. Coverage of testing for individuals who are not Aetna members is not provided, except under the limited circumstances outlined in the policy section above.
Hereditary non-polyposis colon cancer
Hereditary non-polyposis colon cancer ([HNPCC], Lynch syndrome) is one of the most common cancer predisposition syndromes, affecting 1 in 200 individuals and accounting for 13 to 15 % of all colon cancer. HNPCC is defined clinically by early-onset colon carcinoma and by the presence of other cancers, such as endometrial, gastric, urinary tract and ovarian, found in at least 3 first-degree relatives. Two genes have been identified as being primarily responsible for this syndrome: hMLH1 at chromosome band 3p21, which accounts for 30 % of HNPCC, and hMSH2 (also known as FCC) at chromosome band 2p22, which together with hMLH1 accounts for 90 % of HNPCC. Unlike other genetic disorders that are easily diagnosed, the diagnosis of HNPCC relies on a very strongly positive family history of colon cancer. Specifically, several organizations have defined criteria that must be met to make the diagnosis of HNPCC. Although HNPCC lacks strict clinical distinctions that can be used to make the diagnosis, and diagnosis is therefore based on the strong family history, genetic testing is now available to study a patient's DNA for mutations in one of the mismatch repair genes. A mutation in one of these genes is a characteristic feature and confirms the diagnosis of HNPCC. Identifying individuals with this disease and performing screening colonoscopies on affected persons may help reduce colon cancer mortality.
Microsatellite instability (MSI) is found in the colorectal cancer DNA (but not in the adjacent normal colorectal mucosa) of most individuals with germline mismatch repair gene mutations. In combination with immunohistochemistry for MSH2 and MLH1, MSI testing using the Bethesda markers should be performed on the tumor tissue of individuals putatively affected with HNPCC. A result of MSI-high in tumor DNA usually leads to consideration of germline testing for mutations in the MSH2 and MLH1 genes. Individuals with MSI-low or microsatellite stable (MSS) results are unlikely to harbor mismatch repair gene mutations, and further genetic testing is usually not pursued. HNPCC is caused by germline mutation of the DNA mismatch repair genes. Over 95 % of HNPCC patients have mutations in either MLH1 or MSH2. As a result, sequencing for mismatch repair gene mutations in suspected HNPCC families is usually limited to MLH1 and MSH2 and sometimes MSH6 and PMS2. In general, MSH6 and PMS2 sequence analysis is performed in persons meeting aforementioned criteria for genetic testing for HNPCC, and who do not have mutations in either the MLH1 or MSH2 genes. In addition, single site MSH6 or PMS2 testing may be appropriate for testing family members of persons with HNPCC with an identified MSH6 or PMS2 gene mutation. HNPCC is a relatively rare disease, which makes screening the entire populace burdensome and ineffective. The incidence of this disease, even among the families of patients with colon cancer, is too small to make screening effective. (See also CPB 0189 - Genetic Counseling and CPB 0227 - BRCA Testing, Prophylactic Mastectomy, and Prophylactic Oophorectomy). Familial adenomatous polyposis (FAP) Familial adenomatous polyposis (FAP) is caused by mutation of the adenomatous polyposis coli (APC) gene. 
According to guidelines from the American Gastroenterological Association (AGA, 2001), adenomatous polyposis coli gene testing is indicated to confirm the diagnosis of familial adenomatous polyposis, provide pre-symptomatic testing for at-risk members (1st degree relatives 10 years or older of an affected patient), confirm the diagnosis of attenuated familial adenomatous polyposis in those with more than 20 adenomas, and test those 10 years or older at risk for attenuated FAP. The AGA guidelines state that germline testing should first be performed on an affected member of the family to establish a detectable mutation in the pedigree. If a mutation is found in an affected family member, then genetic testing of at-risk members will provide true positive or negative results. The AGA guidelines state that, if a pedigree mutation is not identified, further testing of at-risk relatives should be suspended because the gene test will not be conclusive: a negative result could be a false negative because testing is not capable of detecting a mutation even if present. When an affected family member is not available for evaluation, starting the test process with at-risk family members can provide only positive or inconclusive results. In this circumstance, a true negative test result for an at-risk individual can only be obtained if another at-risk family member tests positive for a mutation. MYH is a DNA repair gene that corrects DNA base pair mismatch errors in the genetic code before replication. Mutation of the MYH gene may result in colon cancer. In this regard, the MYH gene has been found to be significantly involved in colon cancer, both in cases where there is a clear family history of the disease, as well as in cases without any sign of a hereditary cause. 
The National Comprehensive Cancer Network (NCCN)'s practice guidelines on colorectal cancer screening (2006) recommended testing for MYH mutations for individuals with a personal history of adenomatous polyposis (more than 10 adenomas, or more than 15 cumulative adenomas in 10 years) either consistent with recessive inheritance or with adenomatous polyposis with negative adenomatous polyposis coli (APC) mutation testing. The guideline noted that when polyposis is present in a single person with negative family history, de novo APC mutation should be tested; if negative, testing for MYH should follow. When family history is positive only for a sibling, recessive inheritance should be considered and MYH testing should be done first. In a polyposis family with clear autosomal dominant inheritance, and absence of APC mutation, MYH testing is unlikely to be informative. Members of such a family are treated according to the polyposis phenotype, including classical or attenuated FAP.
Factor V Leiden mutation
Factor V Leiden mutation is the most common hereditary blood coagulation disorder in the United States. It is present in 5 % of the Caucasian population and 1.2 % of the African-American population. Factor V Leiden increases the risk of venous thrombosis 3 to 8 fold for heterozygous individuals and 30 to 140 fold for homozygous individuals. Factor V Leiden mutation has been associated with the following complications: According to the American College of Medical Genetics, Factor V Leiden genetic testing is indicated in the following patients:
- Age less than 50, any venous thrombosis; or
- Myocardial infarction in female smokers under age 50; or
- Recurrent venous thrombosis; or
- Relatives of individuals with venous thrombosis under age 50; or
- Venous thrombosis and a strong family history of thrombotic disease; or
- Venous thrombosis in pregnant women or women taking oral contraceptives; or
- Venous thrombosis in unusual sites (such as hepatic, mesenteric, and cerebral veins).
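To give a rough sense of what the fold-increases above mean in absolute terms, the sketch below assumes a baseline annual venous thrombosis incidence of about 1 in 1,000; that baseline is an illustrative assumption, not a figure from this policy text.

```python
# Illustrative conversion of relative risk to absolute annual risk.
# ASSUMPTION: baseline annual VTE incidence of ~1 per 1,000 persons
# (a ballpark for illustration, not stated in the policy text above).
baseline_annual_risk = 1 / 1000

# Upper ends of the quoted ranges: 3-8x (heterozygous), 30-140x (homozygous)
hetero_risk = baseline_annual_risk * 8
homo_risk = baseline_annual_risk * 140

print(f"heterozygous: {hetero_risk:.3f} per year")  # 0.008 -> ~0.8%
print(f"homozygous: {homo_risk:.3f} per year")      # 0.140 -> ~14%
```

Even at the upper end of the quoted ranges, the absolute annual risk for a heterozygote remains below 1 %, which is one reason population-wide screening is not recommended.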
The ACMG does not recommend random screening of the general population for factor V Leiden. Routine testing is also not recommended for patients with a personal or family history of arterial thrombotic disorders (e.g., acute coronary syndromes or stroke) except for the special situation of myocardial infarction in young female smokers. According to the ACMG, testing may be worthwhile for young patients (less than 50 years of age) who develop acute arterial thrombosis in the absence of other risk factors for atherosclerotic arterial occlusive disease. The ACMG does not recommend prenatal testing or routine newborn screening for factor V Leiden mutation. The ACMG does not recommend general screening for factor V Leiden mutation before administration of oral contraceptives. The ACMG recommends targeted testing prior to oral contraceptive use in women with a personal or family history of venous thrombosis. Factor V Leiden screening of asymptomatic individuals with other recognized environmental risk factors, such as surgery, trauma, paralysis, and malignancy is not necessary or recommended by the ACMG, since all such individuals should receive appropriate medical prophylaxis for thrombosis regardless of carrier status. When Factor V Leiden testing is indicated, the ACMG recommends either direct DNA-based genotyping or factor V Leiden-specific functional assay (e.g., activated protein C (APC) resistance). Patients who test positive by a functional assay should then be further studied with the DNA test for confirmation and to distinguish heterozygotes from homozygotes. According to the ACMG, patients testing positive for factor V Leiden or APC resistance should be considered for molecular genetic testing for prothrombin 20210A, the most common thrombophilia with overlapping phenotype for which testing is easily and readily available. The prothrombin 20210A mutation is the second most common inherited clotting abnormality, occurring in 2 % of the general population. 
It is only a mild risk factor for thrombosis, but may potentiate other risk factors (such as Factor V Leiden, oral contraceptives, surgery, trauma, etc.). A factor V gene haplotype (HR2) defined by the R2 polymorphism (A4070G) may confer mild APC resistance and interact with the factor V Leiden mutation to produce a more severe APC resistance phenotype (Bernardi et al, 1997; de Visser et al, 2000; Mingozzi et al, 2003). In one study, co-inheritance of the HR2 haplotype increased the risk of venous thromboembolism associated with factor V Leiden by approximately 3-fold (Faioni et al, 1999). However, double heterozygosity for factor V Leiden and the R2 polymorphism was not associated with a significantly higher risk of early or late pregnancy loss than a heterozygous factor V Leiden mutation alone (Zammiti et al, 2006). Whether the HR2 haplotype alone is an independent thrombotic risk factor is still unclear. Several studies have suggested that the HR2 haplotype is associated with a 2-fold increase in risk of venous thromboembolism (Alhenc-Gelas et al, 1999; Jadaon and Dashti, 2005). In contrast, other studies (de Visser 2000; Luddington et al, 2000; Dindagur et al, 2006) found no significant increase in thrombotic risk (GeneTests, University of Washington, Seattle, 2007).
CADASIL
CADASIL (cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy) is a rare, genetically inherited vascular disease of the brain that causes strokes, subcortical dementia, migraine-like headaches, and psychiatric disturbances. CADASIL is very debilitating, and symptoms usually surface around the age of 45. Although CADASIL can be treated with surgery to repair the defective blood vessels, patients often die by the age of 65. The exact incidence of CADASIL in the United States is unknown.
DNA testing for CADASIL is appropriate for symptomatic patients who have a family history consistent with an autosomal dominant pattern of inheritance of this condition. Clinical signs and symptoms of CADASIL include stroke, cognitive defects and/or dementia, migraine, and psychiatric disturbances. DNA testing is also indicated for pre-symptomatic patients where there is a family history consistent with an autosomal dominant pattern of inheritance and there is a known mutation in an affected member of the family. This policy is consistent with guidelines on CADASIL genetic testing from the European Federation of Neurological Societies.
Cystic fibrosis
Cystic fibrosis is the most common potentially fatal autosomal recessive disease in the United States. It is characterized by chronic progressive disease of the respiratory system, malabsorption due to pancreatic insufficiency, increased loss of sodium and chloride in sweat, and male infertility as a consequence of atresia of the vas deferens. Pulmonary disease is the most common cause of mortality and morbidity in individuals with CF. The incidence of this disease ranges from 1:500 in the Amish (Ohio) to 1:90,000 in Hawaiians of Asian ancestry, and is estimated to be 1:2,500 newborns of European ancestry. It occurs less frequently in people with other ethnic and racial backgrounds. About 1:25 persons of European ancestry is a carrier (or heterozygote), possessing one normal and one abnormal CF gene. Because of recent advances in clinical management of CF, babies born today are expected to live well into middle age. Currently, the most frequently employed test for CF is the quantitative pilocarpine iontophoresis sweat test. Sweat chloride is more reliable than sweat sodium for diagnostic purposes, with a sensitivity of 98 % and a specificity of 83 %. However, this test cannot detect CF carriers because the electrolyte content of sweat is normal in heterozygotes (Wallach, 1991).
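The carrier and incidence figures above are mutually consistent under simple autosomal recessive arithmetic (both parents must be carriers, and their child then has a 1-in-4 chance of inheriting both mutant alleles). A quick check:

```python
# Hardy-Weinberg-style check of the CF figures quoted above:
# carrier frequency ~1 in 25, incidence ~1 in 2,500 newborns of European ancestry.
carrier_freq = 1 / 25

# P(both parents carriers) * P(child inherits both mutant alleles)
incidence = carrier_freq * carrier_freq * 0.25

print(round(1 / incidence))  # 2500 -> matches the quoted 1:2,500
```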
The gene for CF (cystic fibrosis transmembrane conductance regulator, CFTR) was cloned, and the principal mutant gene in white people (ΔF508) was characterized in 1989. This mutation is due to a 3-base-pair deletion that results in the loss of a phenylalanine at position 508 from the 1,480-amino acid coding region (Riordan et al, 1989). This mutation is found in approximately 70 % of carriers of European ancestry, but the relative frequency varies from 30 % in Ashkenazi Jews to 88 % in Danes (Cutting et al, 1992). Available evidence indicates that CFTR functions as a chloride channel, although it may also serve other functions. Since then, more than 200 CF mutations have been described. Five of the most common mutations (ΔF508, G542X, G551D, R553X, N1303K) constitute approximately 85 % of the alleles in the United States (Elias et al, 1991). Thus, screening procedures that test for these 5 mutations will detect approximately 85 % of CF carriers. The genetic screening test for CF is usually based on mouthwash samples collected by agitating sucrose or saline in the mouth. The DNA of these cells is amplified, digested, and subjected to separation techniques that identify 3 to 5 common mutations. A National Institutes of Health consensus panel (1997) recommended that genetic testing for CF should be offered to adults with a positive family history of CF, to partners of people with the disease, to couples currently planning a pregnancy, and to couples seeking prenatal testing. However, the panel did not recommend genetic testing of CF to the general public or to newborn infants. The American College of Obstetricians and Gynecologists (2001) has issued similar recommendations on genetic carrier testing for CF. ACOG recommends that obstetricians should offer CF screening to:
- Couples in whom one or both members are white and who are planning a pregnancy or seeking prenatal care;
- Individuals with a family history of CF; and
- Reproductive partners of people who have CF.
ACOG also recommends that screening should be made available to couples in other racial and ethnic groups. To date, over 900 mutations in the CF gene have been identified. As it is impractical to test for every known mutation, the ACMG Accreditation of Genetic Services Committee has compiled a standard screening panel of 25 CF mutations, which represents the standard panel that ACMG recommends for screening in the U.S. population (Grody et al, 2001). This 25-mutation panel incorporates all CF-causing mutations with an allele frequency of greater than or equal to 0.1 % in the general U.S. population, including mutation subsets shown to be sufficiently predominant in certain ethnic groups, such as Ashkenazi Jews and African Americans. This standard panel of mutations is intended to provide the greatest pan-ethnic detectability that can practically be performed. The ACOG's update on carrier screening for CF (2011) provided the following recommendations. - If a patient has been screened previously, CF screening results should be documented but the test should not be repeated. - Complete analysis of the CFTR gene by DNA sequencing is not appropriate for routine carrier screening. Fragile X syndrome Fragile X syndrome is the most common cause of inherited mental retardation, seen in approximately one in 1,200 males and one in 2,500 females. Phenotypic abnormalities associated with Fragile X syndrome include mental retardation, autistic behaviors, characteristic narrow face with large jaw, and speech and language disorders. Fragile X syndrome was originally thought to be transmitted in an X-linked recessive manner; however, the inheritance pattern of fragile X syndrome has been shown to be much more complex. Standard chromosomal analysis does not consistently demonstrate the cytogenetic abnormality in patients with fragile X syndrome, and molecular diagnostic techniques (DNA testing) have become the diagnostic procedure of choice for fragile X syndrome. 
Aetna's policy on coverage of fragile X genetic testing is based on guidelines from the ACMG (1994) and the ACOG (1995). Lactase-phlorizin hydrolase, which hydrolyzes lactose, the major carbohydrate in milk, plays a critical role in the nutrition of the mammalian neonate (Montgomery et al, 1991). Lactose intolerance in adult humans is common, usually due to low levels of small intestinal lactase. Low lactase levels result from either intestinal injury or (in the majority of the world's adult population) alterations in the genetic expression of lactase. Although the mechanism of decreased lactase levels has been the subject of intensive investigation, no consensus has yet emerged. The LactoTYPE Test (Prometheus Laboratories) is a blood test that is intended to identify patients with genetic-based lactose intolerance. According to the manufacturer, this test provides a more definitive diagnosis and scientific explanation for patients with persistent symptoms. There is insufficient evidence that assessing the genetic etiology of lactose intolerance would affect the management of patients such that clinical outcomes are improved. Current guidelines on the management of lactose intolerance do not recommend genetic testing (NHS, 2005; National Public Health Service for Wales, 2005).
Long QT syndrome
Voltage-gated sodium channels are transmembrane proteins that produce the ionic current responsible for the rising phase of the cardiac action potential and play an important role in the initiation, propagation, and maintenance of normal cardiac rhythm. Inherited mutations in the sodium channel alpha-subunit gene (SCN5A), the gene encoding the pore-forming subunit of the cardiac sodium channel, have been associated with distinct cardiac rhythm syndromes such as the congenital long QT3 syndrome (LQT3), Brugada syndrome, isolated conduction disease, sudden unexpected nocturnal death syndrome (SUNDS), and sudden infant death syndrome (SIDS).
Electrophysiological characterization of heterologously expressed mutant sodium channels has revealed gating defects that, in many cases, can explain the distinct phenotype associated with the rhythm disorder.

The long QT syndrome (LQTS) is a familial disease characterized by an abnormally prolonged QT interval and, usually, by stress-mediated life-threatening ventricular arrhythmias (Priori et al, 2001). Characteristically, the first clinical manifestations of LQTS tend to appear during childhood or in teenagers. Two variants of LQTS have been described: a rare recessive form with congenital deafness (Jervell and Lange-Nielsen syndrome, J-LN), and a more frequent autosomal dominant form (Romano-Ward syndrome, RW). Five genes encoding subunits of cardiac ion channels have been associated with LQTS, and genotype-phenotype correlations have been identified. Of the 5 genetic variants of LQTS currently identified, the LQT1 and LQT2 subtypes involve 2 genes, KCNQ1 and HERG, which encode major potassium currents. LQT3 involves SCN5A, the gene encoding the cardiac sodium current. LQT5 and LQT6 are rare subtypes also involving the major potassium currents. The principal diagnostic and phenotypic hallmark of LQTS is abnormal prolongation of ventricular repolarization, measured as lengthening of the QT interval on the 12-lead ECG (Maron et al, 1998). This is usually most easily identified in lead II or V1, V3, or V5, but all 12 leads should be examined and the longest QT interval used; care should also be taken to exclude the U wave from the QT measurement. LQT3 appears to be the most malignant variant and may be the one least effectively managed by beta-blockers. LQT1 and LQT2 have a higher frequency of syncopal events, but their lethality is lower and the protection afforded by beta-blockers, particularly in LQT1, is much higher.
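Because the QT interval lengthens at slower heart rates, the measured QT is corrected for rate before being compared with diagnostic thresholds. As background (the text does not specify which correction method underlies the thresholds it cites), a minimal sketch of the widely used Bazett correction:

```python
import math

def qtc_bazett(qt_ms: float, heart_rate_bpm: float) -> float:
    """Bazett's formula: QTc = QT / sqrt(RR), with RR in seconds.
    Illustrative only; other corrections (e.g., Fridericia) exist."""
    rr_seconds = 60.0 / heart_rate_bpm
    return qt_ms / math.sqrt(rr_seconds)

# At 60 bpm, RR = 1 s, so QTc equals the measured QT.
print(qtc_bazett(480, 60))          # 480.0
# A 400 ms QT measured at 100 bpm corrects upward to ~516 ms.
print(round(qtc_bazett(400, 100)))  # 516
```

The correction matters clinically: a QT that looks unremarkable at a fast heart rate can exceed a diagnostic threshold once rate-corrected.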
The Jervell and Lange-Nielsen recessive variant is associated with very early clinical manifestations and a poorer prognosis than the Romano-Ward autosomal dominant form. The presence of syndactyly seems to represent a different genetic variant of LQTS that is also associated with a poor prognosis.

Guidelines on sudden cardiac death from the European Society of Cardiology (Priori et al, 2001) state that identification of specific genetic variants of LQTS is useful in risk stratification. The clinical variants presenting association of the cardiac phenotype with syndactyly or with deafness (Jervell and Lange-Nielsen syndrome) have a more severe prognosis. Genetic defects on the cardiac sodium channel gene (SCN5A) are also associated with a higher risk of sudden cardiac death. In addition, identification of specific genetic variants may help in suggesting behavioral changes likely to reduce risk. LQT1 patients are at very high risk during exercise, particularly swimming. LQT2 patients are quite sensitive to loud noises, especially when they are asleep or resting.

Genetic testing for LQTS may be indicated in persons with close relatives who have a defined mutation. Genetic testing may also be indicated in individuals with a prolonged QT interval on resting electrocardiogram (a corrected QT interval (QTc) of 470 msec or more in males and 480 msec or more in females) without an identifiable external cause for QTc prolongation. Common external causes of QTc prolongation are listed in the table below.

Table: Common External Causes of Prolongation of QTc Interval
- Heart disease (heart failure, ischemia)
- Antiarrhythmic medications (quinidine, procainamide, amiodarone, sotalol, and dofetilide)
- Tricyclic and tetracyclic antidepressants (e.g., amitriptyline)

Genetic testing for long QT syndrome has not been evaluated in patients who present with a borderline QT interval, suspicious symptoms (e.g., syncope), and no relevant family history (Roden, 2008).
In these patients, the incidence of false positive and false negative results and their implications for management remain unknown. Genetic testing may also be indicated in close relatives of persons with long QT syndrome who have died suddenly.

Brugada syndrome is an inherited condition comprising a specific EKG abnormality and an associated risk of ventricular fibrillation and sudden death in the setting of a structurally normal heart. Brugada syndrome is characterized by ST-segment abnormalities on EKG and a high risk of ventricular arrhythmias and sudden death. Brugada syndrome presents primarily during adulthood, but age at diagnosis ranges from 2 days to 85 years. Clinical presentations may also include sudden infant death syndrome and sudden unexpected nocturnal death syndrome, a typical presentation in individuals from Southeast Asia. Brugada et al (2005) reported that Brugada syndrome and LQTS are both due to mutations in genes encoding ion channels and that the genetic abnormalities causing Brugada syndrome have been linked to mutations in the ion channel gene SCN5A. Brugada stated that the syndrome has been identified only recently, but an analysis of data from published studies indicates that the disease is responsible for 4 to 12 % of unexpected sudden deaths, and up to 50 % of all sudden death in patients with an apparently normal heart. Brugada explained that Brugada syndrome is a clinical diagnosis based on syncopal or sudden death episodes in patients with a structurally normal heart and a characteristic ECG pattern. The ECG shows ST segment elevation in the precordial leads V1-V3, with a morphology of the QRS complex resembling a right bundle branch block; this pattern may also be caused by J point elevation. When ST elevation is the most prominent feature, the pattern is called "coved-type". When the most prominent feature is J point elevation, without ST elevation, the pattern is called "saddle-type".
Brugada pointed out that it is important to exclude other causes of ST segment elevation before making the diagnosis of Brugada syndrome. Brugada syndrome is inherited in an autosomal dominant manner with variable penetrance. Most individuals diagnosed with Brugada syndrome have an affected parent. The proportion of cases caused by de novo mutations is estimated at 1 %. Each child of an individual with Brugada syndrome has a 50 % chance of inheriting the mutation. According to Brugada, antiarrhythmic drugs do not prevent sudden death in symptomatic or asymptomatic individuals with Brugada syndrome, and implantation of an automatic cardioverter-defibrillator is the only currently proven effective therapy. To date, the great majority of identified disease-causing mutations have been located in the SCN5A gene encoding the alpha subunit of the human cardiac voltage-gated sodium channel, but such mutations can be identified in, at most, 30 % of affected people. Moreover, a positive genetic test adds little or nothing to the clinical management of such a person (HRUK, 2007). The identification of an SCN5A mutation does, of course, allow screening of family members, but the usefulness of genetic screening may be less than for other familial syndromes, given that the routine 12-lead EKG (with or without provocative drug testing) appears to be a relatively effective method of screening for the condition.

Hypertrophic cardiomyopathy (HCM) is a disease of the myocardium in which a portion of the myocardium is hypertrophied without any obvious cause; it is among the most common genetically transmitted cardiovascular diseases. The genetic abnormalities that cause HCM are heterogeneous. Hypertrophic cardiomyopathy is most commonly due to a mutation in one of 9 genes that results in a mutated protein in the sarcomere.
Some of the genes responsible for HCM have not yet been identified, and among those genes that have been identified, the spectrum of possible disease-causing mutations is incomplete. As a result, a thorough evaluation of known genes requires extensive DNA sequencing, which is onerous for routine clinical testing. Less rigorous methods (such as selective sequencing) reduce the likelihood of identifying the responsible mutation. Population studies have demonstrated that some patients are compound heterozygotes (inheriting 2 different mutations within a single HCM gene), double heterozygotes (inheriting mutations in 2 HCM genes), or homozygotes (inheriting the same mutation from both parents). To be certain of detecting such genotypes, sequencing of candidate genes would need to continue in a given patient even after a single mutation was identified. In many persons with HCM mutations, the disease can be mild and the symptoms absent or minimal. In addition, phenotypic expression of HCM can be influenced by factors other than the basic genetic defect, and the clinical consequences of the genetic defect can vary. There is sufficient heterogeneity in the clinical manifestations of a given gene mutation that, even when a patient's mutation is known, his or her clinical course cannot be predicted with any degree of certainty. In addition, the prognostic impact of a given mutation may relate to a particular family and not to the population at large. Many families have their own "private" mutations, and thus knowledge of the gene abnormalities cannot be linked to experience from other families. Family members with echocardiographic evidence of HCM should be managed like other patients with HCM. In general, genetically affected but phenotypically normal family members should not be subjected to the same activity restrictions as patients with HCM.
Bos and colleagues (2009) stated that over the past 20 years, the pathogenic basis for HCM, the most common heritable cardiovascular disease, has been studied extensively. Affecting about 1 in 500 persons, HCM is the most common cause of sudden cardiac death (SCD) among young athletes. In recent years, genomic medicine has been moving from the bench to the bedside throughout all medical disciplines, including cardiology. Now, genomic medicine has entered clinical practice as it pertains to the evaluation and management of patients with HCM. The continuous research and discoveries of new HCM susceptibility genes, the growing amount of data from genotype-phenotype correlation studies, and the introduction of commercially available genetic tests for HCM make it essential that cardiologists understand the diagnostic, prognostic, and therapeutic implications of HCM genetic testing.

Hudecova et al (2009) noted that the clinical symptoms of HCM are partly dependent on mutations in affected sarcomere genes. Different mutations in the same gene can present as malignant, with a high risk of SCD, while other mutations can be benign. The clinical symptomatology can also be influenced by other factors, such as the presence of polymorphisms in other genes. Currently, the objective of intensive clinical research is to assess the contribution of molecular genetic methods to HCM diagnostics as well as to risk stratification for SCD. It is expected that genetic analyses will have important consequences for the screening of the relatives of HCM patients and also for prenatal diagnostics and genetic counseling.

Shephard and Semsarian (2009) stated that genetic heart disorders are an important cause of SCD in the young. While pharmacotherapies have made some impact on the prevention of SCD, the introduction of implantable cardioverter-defibrillator (ICD) therapy has been the single major advance in the prevention of SCD in the young.
In addition, the awareness that most causes of SCD in the young are inherited means that family screening of relatives of young SCD victims allows identification of previously unrecognized at-risk individuals, thereby enabling prevention of SCD in relatives. The role of genetic testing, both in living affected individuals and in the setting of a "molecular autopsy", is emerging as a key factor in early diagnosis of an underlying cardiovascular genetic disorder.

The Heart Failure Society of America's practice guideline on "Genetic evaluation of cardiomyopathy" (Hershberger et al, 2009) stated that genetic testing is primarily indicated for risk assessment in at-risk relatives who have little or no clinical evidence of cardiovascular disease. Genetic testing for HCM should be considered for the one most clearly affected person in a family to facilitate family screening and management. Specific genes available for testing for HCM include MYH7, MYBPC3, TNNT2, TNNI3, TPM1, ACTC, MYL2, and MYL3. MYH7 and MYBPC3 each account for 30 % to 40 % of mutations; TNNT2 accounts for 10 % to 20 %. A genetic cause can be identified in 35 % to 45 % of cases overall, and in up to 60 % to 65 % when the family history is positive.

The BlueCross BlueShield Association Technology Evaluation Center (TEC)'s assessment on genetic testing for predisposition to inherited HCM (2010) concluded that the use of genetic testing for inherited HCM meets the TEC criteria for individuals who are at risk for development of HCM, defined as having a close relative with established HCM, when there is a known pathogenic gene mutation present in an affected relative. In order to inform and direct genetic testing for at-risk individuals, genetic testing should be initially performed in at least 1 close relative with definite HCM (index case) if possible.
This testing is intended to document whether a known pathologic mutation is present in the family, and to optimize the predictive value of predisposition testing for at-risk relatives. Due to the complexity of genetic testing for HCM and the potential for misinterpretation of results, the decision to test and the interpretation of test results should be performed by, or in consultation with, an expert in the area of medical genetics and/or HCM. The TEC assessment also concluded that genetic testing for inherited HCM does not meet the TEC criteria for predisposition testing in individuals who are at risk for development of HCM, defined as having a close relative with established HCM, when there is no known pathogenic gene mutation present in an affected relative. This includes:
- Patients with a family history of HCM, with unknown genetic status of affected relatives; and
- Patients with a family history of HCM, when a pathogenic mutation has not been identified in affected relatives.

Arrhythmogenic right ventricular dysplasia/cardiomyopathy (ARVD/C)

Arrhythmogenic right ventricular dysplasia/cardiomyopathy is a condition characterized by progressive fibro-fatty replacement of the myocardium that predisposes individuals to ventricular tachycardia and sudden death. The prevalence of ARVD/C is estimated to be 1 case per 10,000 population. Familial occurrence with an autosomal dominant pattern of inheritance and variable penetrance has been demonstrated. Recessive variants have been reported. It is estimated that half of affected individuals have a family history of ARVD/C and that the remaining cases are due to new mutations. Genetic testing has not been demonstrated to be necessary to establish the diagnosis of ARVD/C or determine its prognosis. Twelve-lead ECG and echocardiography can be used to identify affected relatives. The genetic abnormalities that cause ARVD/C are heterogeneous.
The genes frequently associated with ARVD/C are PKP2 (plakophilin-2), DSG2 (desmoglein-2), and DSP (desmoplakin). A significant proportion of ARVD/C cases have been reported with no linkage to known chromosomal loci; in one report, 50 % of families undergoing clinical and genetic screening did not show linkage with any known genetic loci (Corrado et al, 2000). Most affected individuals live a normal lifestyle. Management of individuals with ARVD/C is complicated by incomplete information on the natural history of the disease and the variability of disease expression even within families. High-risk individuals with signs and symptoms of ARVD/C are treated with anti-arrhythmic medications, and those at highest risk, who have been resuscitated or who are unresponsive to or intolerant of anti-arrhythmic therapy, may be considered for an ICD. According to the Heart Failure Society of America's practice guideline on the genetic evaluation of cardiomyopathy (2009), the clinical utility of all genetic testing for cardiomyopathies remains to be defined. The guideline stated, "[b]ecause the genetic knowledge base of cardiomyopathy is still emerging, practitioners caring for patients and families with genetic cardiomyopathy are encouraged to consider research participation." The Multidisciplinary Study of Right Ventricular Dysplasia (North American registry) is a 5-year study funded by the National Institutes of Health to determine how the genes responsible for ARVD/C affect the onset, course, and severity of the disease. Enrollment in the study was completed in May 2008, and the study is currently in the follow-up period.

Catecholaminergic polymorphic ventricular tachycardia (CPVT)

Catecholaminergic polymorphic ventricular tachycardia (CPVT) is a highly lethal form of inherited arrhythmogenic disease characterized by adrenergically mediated polymorphic ventricular tachycardia (Liu et al, 2007).
Mutations in the cardiac ryanodine receptor (RyR2) gene and the cardiac calsequestrin (CASQ2) gene are responsible for the autosomal dominant and recessive variants of CPVT, respectively. The clinical presentation encompasses exercise- or emotion-induced syncopal events and a distinctive pattern of reproducible, stress-related, bi-directional ventricular tachycardia in the absence of both structural heart disease and a prolonged QT interval. CPVT typically begins in childhood or adolescence. The mortality rate in untreated individuals is 30 to 50 % by age 40 years. Clinical evaluation by exercise stress testing and Holter monitoring, together with genetic screening, can facilitate early diagnosis. Beta-blockers are the most effective drugs for controlling arrhythmias in CPVT patients, yet about 30 % of patients with CPVT still experience cardiac arrhythmias on beta-blockers and eventually require an implantable cardioverter-defibrillator. Liu et al (2008) stated that molecular genetic screening of the genes encoding the cardiac RyR2 and CASQ2 is critical to confirm an uncertain diagnosis of CPVT.

Katz et al (2009) noted that CPVT is a primary electrical myocardial disease characterized by exercise- and stress-related ventricular tachycardia manifested as syncope and sudden death. The disease has a heterogeneous genetic basis, with mutations in the cardiac RyR2 gene accounting for an autosomal-dominant form (CPVT1) in approximately 50 % of cases and mutations in the cardiac CASQ2 gene accounting for an autosomal-recessive form (CPVT2) in up to 2 % of CPVT cases. Both RyR2 and calsequestrin are important participants in cardiac cellular calcium homeostasis. These researchers reviewed the physiology of cardiac calcium homeostasis, including cardiac excitation-contraction coupling and myocyte calcium cycling. Although the clinical presentation of CPVT is similar in many respects to the LQTS, there are important differences that are relevant to genetic testing.
CPVT appears to be a more malignant condition, as many people are asymptomatic before the index lethal event and the majority of cardiac events occur before 20 years of age. Affected people are advised to avoid exercise-related triggers and to start prophylactic beta-blockers, with dose titration guided by treadmill testing. Genetic testing has been recommended in individuals with clinical features considered typical of CPVT following expert clinical assessment (HRUK, 2008). Clinically, the condition is difficult to diagnose in asymptomatic family members, as the ECG and echocardiogram are completely normal at rest. Exercise stress testing has been advised in family members in order to identify exercise-induced ventricular arrhythmias, but the sensitivity of this clinical test is unknown. Although the diagnostic yield from genetic testing in patients with typical clinical features is lower than that for the LQTS (about 50 %), a positive genetic test may be of value for the individual patient (given the prognostic implications) and for screening family members (given the difficulties in clinical screening methods) (HRUK, 2008). The RyR2 gene is large, and a "targeted" approach is usually undertaken, in which only exons that have been previously implicated are examined.

The 2006 guidelines from the American College of Cardiology on management of patients with ventricular arrhythmias and the prevention of sudden cardiac death (Zipes et al, 2006) included the following recommendations for patients with CPVT:
- There is evidence and/or general agreement supporting the use of beta-blockers for patients clinically diagnosed on the basis of spontaneous or documented stress-induced ventricular arrhythmias.
- There is evidence and/or general agreement supporting the use of an ICD in combination with beta-blockers for survivors of cardiac arrest who have a reasonable expectation of survival with a good functional capacity for more than 1 year.
- The weight of evidence and/or opinion supports the use of beta-blockers in patients without clinical manifestations who are diagnosed in childhood based upon genetic analysis.
- The weight of evidence and/or opinion supports the use of an ICD in combination with beta-blockers for patients with a history of syncope and/or sustained ventricular tachycardia while receiving beta-blockers who have a reasonable expectation of survival with a good functional capacity for more than 1 year.
- The usefulness and/or efficacy of beta-blockers is less well established in patients without clinical evidence of arrhythmias who are diagnosed in adulthood based upon genetic analysis.

Hemochromatosis, a condition involving excess accumulation of iron, can lead to iron overload, which in turn can result in complications such as cirrhosis, diabetes, cardiomyopathy, and arthritis (Burke, 1992; Hanson et al, 2001). Hereditary hemochromatosis (HHC) is characterized by inappropriately increased iron absorption from the duodenum and upper intestine, with consequent deposition in various parenchymal organs, notably the liver, pancreas, joints, heart, pituitary gland, and skin, with resultant end-organ damage (Limdi and Crampton, 2004). Clinical features may be non-specific and include lethargy and malaise, or may reflect target organ damage and present with abnormal liver tests, cirrhosis, diabetes mellitus, arthropathy, cardiomyopathy, skin pigmentation, and gonadal failure. Early recognition and treatment (phlebotomy) are essential to prevent irreversible complications such as cirrhosis and hepatocellular carcinoma. HHC is an autosomal recessive condition associated with mutations of the HFE gene. Two of the 37 allelic variants of the HFE gene, C282Y and H63D, are significantly correlated with HHC. C282Y is the more severe mutation, and homozygosity for the C282Y genotype accounts for the majority of clinically penetrant cases.
Hanson et al (2001) reported that homozygosity for the C282Y mutation has been found in 52 to 100 % of previous studies on clinically diagnosed index cases. Five percent of HHC probands were found by Hanson et al to be compound heterozygotes (C282Y/H63D), and 1.5 % were homozygous for the H63D mutation; 3.6 % were C282Y heterozygotes, and 5.2 % were H63D heterozygotes. In 7 % of cases, C282Y and H63D mutations were not present. In the general population, the frequency of the C282Y/C282Y genotype is 0.4 %. HHC is a very common genetic defect in the Caucasian population. C282Y heterozygosity ranges from 9.2 % in Europeans to nil in Indian subcontinent, African, Middle Eastern, Australian, and Asian populations (Hanson et al, 2001). The H63D carrier frequency is 22 % in European populations. Accurate data on the penetrance of the different HFE genotypes are not available, but current data suggest that clinical disease does not develop in a substantial proportion of people with this genotype. Available data suggest that up to 38 % to 50 % of C282Y homozygotes may develop iron overload, with up to 10 % to 33 % eventually developing hemochromatosis-associated morbidity (Whitlock et al, 2006). A pooled analysis found that patients with the HFE genotypes C282Y/H63D and H63D/H63D are also at increased risk for iron overload, yet overall, disease is likely to develop in fewer than 1 % of people with these genotypes (Burke, 1992). Thus, DNA-based tests for hemochromatosis identify a genetic risk rather than the disease itself. Environmental factors such as diet and exposure to alcohol or other hepatotoxins may modify the clinical outcome in patients with hemochromatosis, and variations in other genes affecting iron metabolism may also be a factor. As a result, the clinical condition of iron overload is most reliably diagnosed on the basis of biochemical evidence of excess body iron (Burke, 1992).
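The relationship between allele, carrier, and homozygote frequencies cited above follows from Hardy-Weinberg equilibrium. A minimal sketch, assuming an illustrative C282Y allele frequency of about 6 % (this value is an assumption chosen so that the implied homozygote frequency roughly matches the 0.4 % population figure cited above; it is not taken from the sources):

```python
def hardy_weinberg(p: float) -> dict:
    """Expected genotype frequencies for a biallelic locus under
    Hardy-Weinberg equilibrium, where p is the mutant allele frequency."""
    q = 1.0 - p
    return {
        "homozygous_mutant": p * p,        # e.g., C282Y/C282Y
        "heterozygous_carrier": 2 * p * q, # e.g., C282Y heterozygote
        "homozygous_wildtype": q * q,
    }

# Illustrative (assumed) C282Y allele frequency of 6 %:
freqs = hardy_weinberg(0.06)
print(f"C282Y/C282Y: {freqs['homozygous_mutant']:.2%}")    # 0.36%
print(f"carriers:    {freqs['heterozygous_carrier']:.2%}") # 11.28%
```

This also illustrates why DNA-based testing flags risk rather than disease: the homozygote frequency implied by the allele frequency is far higher than the fraction of people who actually develop hemochromatosis-associated morbidity, reflecting incomplete penetrance.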
Whether it is beneficial to screen asymptomatic people for a genetic risk of iron overload is a matter of debate. To date, population screening for HHC is not recommended because of uncertainties about optimal screening strategies, optimal care for susceptible persons, laboratory standardization, and the potential for stigmatization or discrimination (Hanson et al, 2001; Whitlock et al, 2006). A systematic evidence review prepared for the U.S. Preventive Services Task Force concluded: "Research addressing genetic screening for hereditary hemochromatosis remains insufficient to confidently project the impact of, or estimate the benefit from, widespread or high-risk genetic screening for hereditary hemochromatosis" (Whitlock et al, 2006).

Familial nephrotic syndrome (NPHS1, NPHS2)

Nephrotic syndrome comes in 2 variants: (i) forms sensitive to treatment with immunosuppressants (steroid-sensitive), and (ii) forms resistant to immunosuppressants (steroid-resistant). Familial forms of nephrotic syndrome are steroid-resistant (Niaudet, 2007). Mutations in 2 genes, NPHS1 and NPHS2, have been associated with familial nephrotic syndrome. Mutations in the gene for podocin, called NPHS2, also known as familial focal glomerulosclerosis, are observed in patients with both familial and sporadic steroid-resistant nephrotic syndrome (SRNS). Identifying children with nephrotic syndrome due to NPHS2 mutations can avoid unnecessary exposure to immunosuppressive therapy, because immunosuppressive therapy has not been shown to be effective in treating these children (Niaudet, 2007). Thus, authorities have recommended testing for such mutations in those with a familial history of steroid-resistant nephrotic syndrome and in children with steroid-resistant disease. Some have suggested that, to avoid unnecessary exposure to steroid therapy, all children with a first episode of the nephrotic syndrome should be screened for NPHS2 mutations (Niaudet, 2007).
However, given that over 85 % of children with idiopathic nephrotic syndrome are steroid-sensitive and only approximately 20 % of steroid-resistant patients have NPHS2 mutations, screening for abnormalities at this genetic locus would identify fewer than 5 % of all cases. Screening a child with a first episode of the nephrotic syndrome who has a familial history of steroid-resistant nephrotic syndrome has been recommended, however, because such children are at increased risk of having an NPHS2 gene mutation.

Mutations in the gene for nephrin, called NPHS1, cause the congenital nephrotic syndrome of Finnish type (CNF) (Niaudet, 2007). CNF is inherited as an autosomal recessive trait, with both sexes being affected equally. There are no manifestations of the disease in heterozygous individuals. Most infants with CNF are born prematurely (35 to 38 weeks), with a low birth weight for gestational age. Edema is present at birth or appears during the first week of life in 50 % of cases. Severe nephrotic syndrome with marked ascites is always present by 3 months. End-stage renal failure usually occurs between 3 and 8 years of age. Prolonged survival is possible with aggressive supportive treatment, including dialysis and renal transplantation. The nephrotic syndrome in CNF is always resistant to corticosteroids and immunosuppressive drugs, since this is not an immunologic disease (Niaudet, 2007). Furthermore, these drugs may be harmful, given affected individuals' already high susceptibility to infection. CNF becomes manifest during early fetal life, beginning at a gestational age of 15 to 16 weeks. The initial symptom is fetal proteinuria, which leads to a more than 10-fold increase in the amniotic fluid alpha-fetoprotein (AFP) concentration (Niaudet, 2007). A parallel, but less marked, increase in the maternal plasma AFP level is observed.
These changes are not specific, but they may permit the antenatal diagnosis of CNF in high-risk families in which termination of the pregnancy might be considered. However, false positive results do occur, often leading to abortion of healthy fetuses. Genetic linkage and haplotype analyses may diminish the risk of false positive results in informative families (Niaudet, 2007). Four major haplotypes, which cover 90 % of the CNF alleles in Finland, have been identified, resulting in a test with up to 95 % accuracy. Authorities do not recommend screening for NPHS1 mutations in all children with a first episode of nephrotic syndrome, for the reasons noted above regarding NPHS2 mutation screening. However, genetic testing may be indicated for infants with congenital nephrotic syndrome (i.e., appearing within the first months of life) who are of Finnish descent and/or who have a family history that suggests a familial cause of congenital nephrotic syndrome. The primary purpose of this testing is pregnancy planning. Detection of an NPHS1 mutation also has therapeutic implications, as such nephrotic syndrome is steroid-resistant.

Primary dystonia (DYT-1)

Dystonia consists of repetitive, patterned, twisting, and sustained movements that may be either slow or rapid. Dystonic states are classified as primary, secondary, or psychogenic depending upon the cause (Jankovic, 2007). By definition, primary dystonia is associated with no other neurologic impairment, such as intellectual, pyramidal, cerebellar, or sensory deficits. Cerebral palsy is the most common cause of secondary dystonia. Primary dystonia may be sporadic or inherited (Jankovic, 2007). Cases with onset in childhood usually are inherited in an autosomal dominant pattern. Many patients with hereditary dystonia have a mutation in the TOR1A (DYT1) gene, which encodes torsinA, an ATP-binding protein, at the 9q34 locus. The role of torsinA in the pathogenesis of primary dystonia is unknown.
DNA testing for the abnormal TOR1A gene can be performed on individuals with dystonia. The purpose of such testing is to help rule out secondary or psychogenic causes of dystonia, and for family planning purposes.

An estimated 8 to 12 % of persons with melanoma have a family history of the disease, but not all of these individuals have hereditary melanoma (Tsao and Haluska, 2007). In some cases, the apparent familial inheritance pattern may be due to clustering of sporadic cases in families with common heavy sun exposure and susceptible skin type. A melanoma susceptibility locus has been identified on chromosome 9p21; this has been designated CDKN2A (also known as MTS1, multiple tumor suppressor 1) (Tsao and Haluska, 2007). There is a variable rate of CDKN2A mutations in patients with hereditary melanoma. The risk of a CDKN2A mutation varies from approximately 10 % for families with at least 2 relatives having melanoma, to more than 40 % for families having multiple affected first-degree relatives spanning several generations. Persons at increased risk of melanoma are managed with close clinical surveillance and education in risk-reduction behavior (e.g., sun avoidance, sunscreen use). It is unclear how CDKN2A genetic test information would alter clinical recommendations (Tsao and Haluska, 2007). The negative predictive value of a negative test for a CDKN2A mutation is also not established, since many familial cases occur in the absence of CDKN2A mutations. It is estimated that the prevalence of CDKN2A mutation carriers is less than 1 % in high-incidence populations. Thus, no mutations will be identifiable in the majority of families presenting to clinical geneticists. The American Society of Clinical Oncology (ASCO) has issued a consensus report on the utility of genetic testing for cancer susceptibility (ASCO, 1996), and recommendations for the process of genetic testing were updated in 2003 (ASCO, 2003).
The report notes that the sensitivity and specificity of the commercially available test for CDKN2A mutations are not fully known. Because of the difficulties with interpretation of the genetic tests, and because test results do not alter patient or family member management, ASCO recommends that CDKN2A testing be performed only in the context of a clinical trial. The Scottish Intercollegiate Guidelines Network (SIGN, 2003) protocols on management of cutaneous melanoma reached similar conclusions, stating that "[g]enetic testing in familial or sporadic melanoma is not appropriate in a routine clinical setting and should only be undertaken in the context of appropriate research studies." The Melanoma Genetics Consortium recommends that genetic testing for melanoma susceptibility should not be offered outside of a research setting (Kefford et al, 2002). They state that “[u]ntil further data become available, however, clinical evaluation of risk remains the gold standard for preventing melanoma. First-degree relatives of individuals at high risk should be engaged in the same programmes of melanoma prevention and surveillance irrespective of the results of any genetic testing.” Charcot-Marie-Tooth disease type 1A (PMP-22) Charcot-Marie-Tooth disease, also known as peroneal muscular atrophy, progressive neural muscular atrophy, and hereditary motor and sensory neuropathy, is 1 of the 3 major types of hereditary neuropathy. With an estimated prevalence of at least 1:2,500, CMT is one of the most common genetic neuromuscular disorders, affecting approximately 125,000 persons in the United States. This hereditary peripheral neuropathy is genetically and clinically heterogeneous. It is usually inherited in an autosomal dominant manner, and occasionally in an autosomal recessive manner. Sporadic as well as X-linked cases have also been reported.
In the X-linked recessive patterns, only males develop the disease, although females who inherit the defective gene can pass the disease on to their sons. In the X-linked dominant pattern, an affected mother can pass on the disorder to both sons and daughters, while an affected father can only pass it on to his daughters. The clinical manifestations can vary greatly in severity and age of onset. The clinical features may be so mild that they may be undetectable by patients, their families and physicians. Charcot-Marie-Tooth disease is usually diagnosed by an extensive physical examination, assessing characteristic weakness in the foot, leg, and hand, as well as deformities and impaired function in walking and manual manipulation. The clinical diagnosis is then confirmed by electromyogram and nerve conduction velocity tests, and sometimes by biopsy of muscle and of sural cutaneous nerve. Since CMT is a hereditary disease, family history can also help to confirm the diagnosis. Based on studies of motor nerve conduction velocity, CMT can be further classified into 2 types: (i) CMT Type I -- slow conduction velocity (less than 40 meters/second for the median nerve or less than 15 meters/second for the peroneal nerve), which accounts for 70 % of all CMT cases, and (ii) CMT Type II -- normal or near normal nerve conduction velocity with decreased amplitude, which accounts for the remaining 30 % of CMT cases. Charcot-Marie-Tooth Type I disease is a demyelinating neuropathy with hypertrophic changes in peripheral nerves, and has its onset usually during late childhood. On the other hand, CMT Type II is a non-demyelinating neuronal disorder without hypertrophic changes, and has its onset generally during adolescence. Both CMT Types I and II are characterized by a slow degeneration of peripheral nerves and roots, resulting in distal muscle atrophy commencing in the lower extremities, and affecting the upper extremities several years later.
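The conduction-velocity cut-offs quoted above amount to a simple decision rule. A minimal sketch of that rule (the function name and interface are illustrative only, not a clinical tool; it encodes exactly the thresholds stated in the text):

```python
def classify_cmt_type(median_ncv_m_per_s=None, peroneal_ncv_m_per_s=None):
    """Classify CMT by motor nerve conduction velocity (NCV).

    Per the thresholds in the text: Type I if median-nerve NCV is
    below 40 m/s or peroneal-nerve NCV is below 15 m/s; otherwise
    Type II (normal or near-normal velocity with decreased amplitude).
    """
    if median_ncv_m_per_s is not None and median_ncv_m_per_s < 40:
        return "CMT Type I"
    if peroneal_ncv_m_per_s is not None and peroneal_ncv_m_per_s < 15:
        return "CMT Type I"
    return "CMT Type II"
```

In practice this classification would of course be made alongside the amplitude findings and the clinical picture described above, not from a velocity value alone.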
Symptoms include foot drop or clubfoot, paresthesia in legs, slapping gait, later weakness and atrophy of hands, then arms, absence or reduction of deep tendon reflexes, and occasionally mild sensory loss. Charcot-Marie-Tooth disease is not a fatal disorder. It does not shorten the normal life expectancy of patients, and it does not affect them mentally. As stated earlier, there is a wide range of variation in the clinical manifestations of CMT -- the degree of severity can vary considerably from patient to patient, even among affected family members within the same generation. The condition can range from having no problems to having major difficulties in ambulation in early adult life; however, the latter is unusual. Most patients are able to ambulate and have gainful employment until old age. Currently, there is no specific treatment for this disease. Management of the majority of patients with CMT disease consists of supportive care with emphasis on proper bracing, foot care, physical therapy and occupational counseling. For example, the legs and shoes can be fitted with light braces and springs, respectively, to overcome foot drop. If foot drop is severe and the disease has become stationary, the ankle can be stabilized by arthrodesis. The underlying genetic basis for CMT Type I has been characterized. A point mutation in the PMP22 gene, which encodes a peripheral myelin protein with an apparent molecular weight of 22,000, or a DNA duplication of a specific region (1.5 megabases) including the PMP22 gene in the proximal short arm of chromosome 17 (band 17p11.2-p12) has been identified in 70 % of clinically diagnosed patients -- CMT Type IA. Thus, patients with CMT Type IA represent approximately 50 % of all CMT cases. Other CMT Type I patients (CMT Type IB) exhibit an abnormality (Duffy locus) in the proximal long arm of chromosome number 1 (band 1q21-22). Presently, no test is available for the dominant CMTIB gene on chromosome 1.
On the other hand, a CMT Type IA DNA test is available commercially. The test is accomplished through a blood sample analysis -- DNAs are extracted from leukocytes of patients and pulsed-field gel electrophoresis is employed to isolate large segments of DNA encompassing CMTIA duplication-specific junction fragments, which are then detected by hybridization with a CMTIA duplication-specific probe (CMTIA-REP). This probe identifies the homologous regions that flank the CMTIA duplication monomer unit. A positive CMTIA DNA test means the presence of a 500-kilobase CMTIA duplication-specific junction fragment, and is diagnostic for CMT Type IA. A negative CMT Type IA test means the absence of the CMTIA duplication-specific junction fragment, and does not rule out a diagnosis of CMT disease. This is because patients with CMT Type IA represent approximately 50 % of all CMT cases. The value of this molecular test in family planning is questionable because of its relatively low detection rate and its inability to predict the severity of the disease. Moreover, it is likely that there are undiscovered CMTI genes since there are dominant CMTI pedigrees that do not have abnormalities at the known chromosome 1 and 17 locations (CMT Type IC). In addition, other investigators have reported X-linked forms of CMTI at the region of Xq13-21, and Xq26. Since CMT is not life-threatening, rarely severely disabling, and has no specific treatment, it is unclear how the results of this CMT Type I DNA test, which cannot predict the severity of the disease, would affect family planning. Moreover, because of its low detection rate, the CMT Type I DNA test appears to be inferior to the conventional means of diagnosis through physical examination, family history, electromyography and nerve conduction velocity studies. Thus, the sole value of genetic testing for CMTIA is to establish the diagnosis and to distinguish this from other causes of neuropathy.
Familial amyotrophic lateral sclerosis (SOD1 Mutation) Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease involving both the upper motor neurons (UMN) and lower motor neurons (LMN). UMN signs include hyperreflexia, extensor plantar response, increased muscle tone, and weakness in a topographical representation. LMN signs include weakness, muscle wasting, hyporeflexia, muscle cramps, and fasciculations. In the early stage of the disease, the clinical aspects of ALS can vary. Affected individuals typically present with asymmetric focal weakness of the extremities (stumbling or poor handgrip) or bulbar findings (dysarthria, dysphagia). Other findings include muscle fasciculations, muscle cramps, and lability of affect but not necessarily mood. Regardless of initial symptoms, atrophy and weakness eventually affect other muscles. Approximately 5,000 people in the U.S. are diagnosed with ALS each year. Most people with ALS have a form of the condition that is described as sporadic or non-inherited. The cause of sporadic ALS is largely unknown but probably involves a combination of genetic and environmental factors. About 10 % of people with ALS have a familial form of the condition, which is caused by an inherited genetic mutation, usually as an autosomal dominant trait. The mean age of onset of ALS in individuals with no known family history is 56 years and in familial ALS it is 46 years. The diagnosis of ALS is based on clinical features, electrodiagnostic testing (EMG), and exclusion of other health conditions with related symptoms. At present, genetic testing in ALS has no value in making the diagnosis. The only genetic test currently available detects the SOD1 mutation. Since only 20 % of familial ALS patients will test positive for an SOD1 mutation, this test has limited value in genetic counseling.
Migrainous vertigo is a term used to describe episodic vertigo in patients with a history of migraines or with other clinical features of migraine. Approximately 20 to 33 % of migraine patients experience episodic vertigo. The underlying cause of migrainous vertigo is not very well understood. There are no confirmatory diagnostic tests or susceptibility genes associated with migrainous vertigo. Other conditions, specifically Meniere's disease and structural and vascular brainstem disease, must be excluded (Black, 2006). Prostate cancer At this time, there are no susceptibility genes that have been unequivocally associated with prostate cancer predisposition. Genetic testing for prostate cancer is currently available only within the context of a research study. A special report on prostate cancer genetics by the BlueCross BlueShield Association Technology Evaluation Center (BCBSA, 2008) stated that single-nucleotide polymorphisms (SNPs) do not predict certainty of disease, nor do they clearly predict aggressive versus indolent disease. The report noted that, while the monitoring of high-risk men may improve outcomes, it is also possible that these benefits could be offset by the harms of identifying and treating additional indolent disease. Type 2 diabetes Available evidence has shown that screening for a panel of gene variants associated with type 2 diabetes does not substantially improve prediction of risk for the disease over an assessment based on traditional risk factors. Available evidence suggests that both genetic and environmental factors play a role in the development of type 2 diabetes. Recent genetic studies have identified 18 gene variants that appear to increase the risk for type 2 diabetes. A study reported in the New England Journal of Medicine evaluated the potential utility of genetic screening in predicting future risk of type 2 diabetes (Meigs et al, 2008).
The investigators analyzed records from the Framingham Offspring Study, which follows a group of adult children of participants of the original Framingham Heart Study, to evaluate risk factors for the development of cardiovascular disease, including diabetes. Full genotype results for the 18 gene variants as well as clinical outcomes were available for 2,377 participants, 255 of whom developed type 2 diabetes during 28 years of follow-up. Each participant was assigned a genotype score, based on the number of risk-associated gene copies inherited. The investigators compared the predictive value of the genotype score to that of family history alone or of physiological risk factors. Overall, the genotype score was 17.7 among those who developed diabetes and 17.1 among those who did not. The investigators found that, while the genotype score did help predict who would develop diabetes, once other known risk factors were taken into consideration, it offered little additional predictive power. The investigators concluded that: "[t]he genotype score resulted in the appropriate risk reclassification of, at most, 4 % of the subjects, compared with risk estimates based on age, sex, blood lipids, body mass index, family history, and other standard risk factors." The investigators reported that "[o]ur findings underscore the view that identification of adverse phenotypic characteristics remains the cornerstone of approaches to predicting the risk of type 2 diabetes." A similar study among Swedish and Finnish patients, published in the same issue of the New England Journal of Medicine, also found only a small improvement in risk estimates when genetic factors were added to traditional risk factors (Lyssenko et al, 2008).
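A genotype score of this kind is, in essence, a count of inherited risk-allele copies summed across the variants. A minimal reconstruction of that scoring rule, assuming 0, 1, or 2 risk-allele copies per locus (the function name and validation are illustrative, not the study's code):

```python
def genotype_score(risk_allele_counts):
    """Sum risk-allele copies across loci (0, 1, or 2 per locus).

    With the 18 variants in the Framingham analysis, the score can
    range from 0 to 36; the study's means were 17.7 (developed
    diabetes) vs. 17.1 (did not). Illustrative sketch only.
    """
    if any(count not in (0, 1, 2) for count in risk_allele_counts):
        raise ValueError("each locus contributes 0, 1, or 2 risk alleles")
    return sum(risk_allele_counts)
```

The small difference between the group means (17.7 vs. 17.1) illustrates why the score added so little discrimination once conventional risk factors were accounted for.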
The OncoVue breast cancer risk test The OncoVue breast cancer risk test (Intergenetics, Inc., Oklahoma City, OK) is a genetic-based breast cancer risk test that incorporates both individualized genetic-based single nucleotide polymorphisms (SNPs) and personal history measures to arrive at an estimate of a woman’s breast cancer risk at various stages in her life. Cells that are collected from the inside of the cheek are analyzed using thousands of proprietary (Intergenetics, Inc.) combinations of multiple genes. The genetic information and the data from the medical history are combined to assign a numeric value that estimates a woman's lifetime risk of developing breast cancer. Her OncoVue risk test will tell her whether she is at standard, moderate or high risk for developing breast cancer during each stage of her life. OncoVue is based on an unpublished case-control association study that examined common genetic polymorphisms and medical history variables. Currently, 117 common polymorphisms (mostly SNPs) located in over 87 genes believed to alter breast cancer risk are examined. Most result in amino acid changes in the proteins encoded by the genes in which they occur. The medical history variables include answers to questions concerning women’s reproductive histories, family histories of cancer and a few other questions related to general health. There are no published controlled studies on the OncoVue breast cancer risk test in the peer-reviewed medical literature. Gail (2009) evaluated the value of adding SNP genotypes to a breast cancer risk model. Criteria that are based on 4 clinical or public health applications were used to compare the National Cancer Institute's Breast Cancer Risk Assessment Tool (BCRAT) with BCRATplus7, which includes 7 SNPs previously associated with breast cancer.
Criteria included number of expected life-threatening events for the decision to take tamoxifen, expected decision losses (in units of the loss from giving a mammogram to a woman without detectable breast cancer) for the decision to have a mammogram, rates of risk re-classification, and number of lives saved by risk-based allocation of screening mammography. For all calculations, the following assumptions were made: Hardy-Weinberg equilibrium, linkage equilibrium across SNPs, additive effects of alleles at each locus, no interactions on the logistic scale among SNPs or with factors in BCRAT, and independence of SNPs from factors in BCRAT. Improvements in expected numbers of life-threatening events were only 0.07 % and 0.81 % for deciding whether to take tamoxifen to prevent breast cancer for women aged 50 to 59 and 40 to 49 years, respectively. For deciding whether to recommend screening mammograms to women aged 50 to 54 years, the reduction in expected losses was 0.86 % if the ideal breast cancer prevalence threshold for recommending mammography was that of women aged 50 to 54 years. Cross-classification of risks indicated that some women classified by BCRAT would have different classifications with BCRATplus7, which might be useful if BCRATplus7 were well calibrated. Improvements from BCRATplus7 were small for risk-based allocation of mammograms under cost constraints. The author reported that the gains from BCRATplus7 were small in the applications examined and that models with SNPs, such as BCRATplus7, have not been validated for calibration in independent cohort data. The author concluded that additional studies are needed to validate a model with SNPs and justify its use. There is insufficient evidence on the effectiveness of the OncoVue breast cancer risk test in determining a woman’s breast cancer risk at various stages in her life.
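The "additive effects of alleles at each locus, no interactions on the logistic scale" assumption means that per-SNP odds ratios combine multiplicatively across risk-allele copies. A sketch of that combination rule under those assumptions (the odds-ratio values in the test are illustrative, not the published BCRATplus7 coefficients):

```python
import math

def combined_snp_odds_ratio(per_allele_ors, allele_counts):
    """Combine per-SNP effects under log-additivity.

    Under linkage equilibrium and additivity on the log-odds scale,
    each SNP contributes count * log(per-allele OR) with no
    interaction terms; the combined OR is exp of the sum.
    Illustrative sketch of the modeling assumption only.
    """
    log_odds = sum(count * math.log(odds_ratio)
                   for odds_ratio, count in zip(per_allele_ors, allele_counts))
    return math.exp(log_odds)
```

For example, carrying 2 copies of an allele with a per-allele OR of 1.2 and 1 copy of an allele with OR 1.1 yields a combined OR of 1.2 × 1.2 × 1.1 under these assumptions; any departure from independence or additivity would break this simple multiplication.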
The phosphatase and tensin homolog (PTEN) gene test Phosphatase and tensin homolog (PTEN) hamartoma tumor syndrome is an autosomal dominant group of disorders with significant clinical overlap, most notably predisposition to hamartomatous polyposis of the gastro-intestinal tract. Laurent-Puig et al (2009) stated that the occurrence of KRAS mutation is predictive of non-response and shorter survival in patients treated by anti-epidermal growth factor receptor (anti-EGFR) antibody for metastatic colorectal cancer (mCRC), leading the European Medicines Agency to limit its use to patients with wild-type KRAS tumors. However, only 50 % of these patients will benefit from treatment, suggesting the need to identify additional biomarkers for cetuximab-based treatment efficacy. These investigators retrospectively collected tumors from 173 patients with mCRC. All but 1 patient received a cetuximab-based regimen as second-line or greater therapy. KRAS and BRAF status were assessed by allelic discrimination. EGFR amplification was assessed by chromogenic in situ hybridization and fluorescent in situ hybridization, and the expression of PTEN was assessed by immunohistochemistry. In patients with KRAS wild-type tumors (n = 116), BRAF mutations (n = 5) were weakly associated with lack of response (p = 0.063) but were strongly associated with shorter progression-free survival (p < 0.001) and shorter overall survival (OS; p < 0.001). A high EGFR polysomy or an EGFR amplification was found in 17.7 % of the patients and was associated with response (p = 0.015). PTEN null expression was found in 19.9 % of the patients and was associated with shorter OS (p = 0.013). In multi-variate analysis, BRAF mutation and PTEN expression status were associated with OS. The authors concluded that BRAF status, EGFR amplification, and cytoplasmic expression of PTEN were associated with outcome measures in KRAS wild-type patients treated with a cetuximab-based regimen.
They stated that more studies in clinical trial cohorts are needed to confirm the clinical utility of these markers. Siena et al (2009) noted that the monoclonal antibodies panitumumab and cetuximab that target the EGFR have expanded the range of treatment options for mCRC. Initial evaluation of these agents as monotherapy in patients with EGFR-expressing chemotherapy-refractory tumors yielded response rates of approximately 10 %. The realization that detection of positive EGFR expression by immunostaining does not reliably predict clinical outcome of EGFR-targeted treatment has led to an intense search for alternative predictive biomarkers. Oncogenic activation of signaling pathways downstream of the EGFR, such as mutation of KRAS, BRAF, or PIK3CA oncogenes, or inactivation of the PTEN tumor suppressor gene is central to the progression of colorectal cancer. Tumor KRAS mutations, which may be present in 35 % to 45 % of patients with colorectal cancer, have emerged as an important predictive marker of resistance to panitumumab or cetuximab treatment. In addition, among colorectal tumors carrying wild-type KRAS, mutation of BRAF or PIK3CA or loss of PTEN expression may be associated with resistance to EGFR-targeted monoclonal antibody treatment, although these additional biomarkers require further validation before incorporation into clinical practice. Additional knowledge of the molecular basis for sensitivity or resistance to EGFR-targeted monoclonal antibodies will allow the development of new treatment algorithms to identify patients who are most likely to respond to treatment and could also provide rationale for combining therapies to overcome primary resistance. The use of KRAS mutations as a selection biomarker for anti-EGFR monoclonal antibody (e.g., panitumumab or cetuximab) treatment is the first major step toward individualized treatment for patients with mCRC. 
Epsilon-sarcoglycan gene (SGCE) deletion analysis Myoclonus-dystonia (M-D), an autosomal dominant inherited movement disorder, has been associated with mutations in the epsilon-sarcoglycan gene (SGCE) on 7q21. Raymond et al (2008) noted that M-D due to SGCE mutations is characterized by early onset myoclonic jerks, often associated with dystonia. Penetrance is influenced by parental sex, but other sex effects have not been established. In 42 affected individuals from 11 families with identified mutations, these researchers found that sex was highly associated with age at onset regardless of mutation type; the median age of onset for girls was 5 years versus 8 years for boys (p = 0.0097). Moreover, the authors found no association between mutation type and phenotype. Ritz et al (2009) stated that various mutations within the SGCE gene have been associated with M-D, but mutations are detected in only about 30 % of patients. The lack of stringent clinical inclusion criteria and limitations of mutation screens by direct sequencing might explain this observation. Eighty-six M-D index patients from the Dutch national referral center for M-D underwent neurological examination and were classified according to previously published criteria into definite, probable and possible M-D. Sequence analysis of the SGCE gene and screening for copy number variations were performed. In addition, screening was carried out for the 3 bp deletion in exon 5 of the DYT1 gene. Based on clinical examination, 24 definite, 23 probable and 39 possible M-D patients were detected. Thirteen of the 86 M-D index patients carried a SGCE mutation: 7 nonsense mutations, 2 splice site mutations, 3 missense mutations (2 within 1 patient) and 1 multi-exonic deletion. In the definite M-D group, 50 % carried an SGCE mutation and 1 single patient in the probable group (4 %). One possible M-D patient showed a 4 bp deletion in the DYT1 gene (c.934_937delAGAG).
The authors concluded that mutation carriers were mainly identified in the definite M-D group. However, in 50 % of definite M-D cases, no mutation could be identified. Home genetic tests Walker (2010) stated that according to an undercover investigation by the Government Accountability Office (GAO), home genetic tests often provide incomplete or misleading information to consumers. For the GAO investigation, investigators purchased 10 tests each from 4 different direct-to-consumer genetic test companies: 23andMe, deCODE Genetics, Navigenics, and Pathway Genomics. Five saliva donors each sent 2 DNA samples to each company. In one sample, the donor used his or her real personal and medical information, and for the second sample, they developed faux identifying and medical information. The results, according to the GAO, were far from precise. For example, a donor was told by a company that he had a "below average" risk of developing hypertension, but a second company rated his risk as "average", while a third company, using DNA from the same donor, said the sample revealed an "above average" risk for hypertension. In some cases, the results conflicted with the donor's real medical condition. None of the genetic tests currently offered to consumers has undergone FDA pre-market review. Familial Cold Autoinflammatory Syndrome Familial cold autoinflammatory syndrome (FCAS), also known as familial cold urticaria (FCU), is an autosomal dominant condition characterized by rash, conjunctivitis, fever/chills and arthralgias elicited by exposure to cold -- sometimes temperatures below 22° C (72° F). It is rare, with an estimated prevalence of 1 per million people, and mainly affects Americans and Europeans. Familial cold autoinflammatory syndrome is one of the cryopyrin-associated periodic syndromes (CAPS) caused by mutations in the CIAS1/NALP3 (also known as NLRP3) gene at location 1q44.
Familial cold autoinflammatory syndrome shares symptoms with, and should not be confused with, acquired cold urticaria, a more common condition mediated by different mechanisms that usually develops later in life and is rarely inherited. There is insufficient evidence to support the use of genetic testing in the management of patients with FCAS/FCU. UpToDate reviews on "Cold urticaria" (Maurer, 2011) and "Cryopyrin-associated periodic syndromes and related disorders" (Nigrovic, 2011) do not mention the use of genetic testing. Congenital adrenal hyperplasia (CYP21A2) Santome Collazo et al (2010) noted that congenital adrenal hyperplasia (CAH) is not an infrequent genetic disorder for which mutation-based analysis of the CYP21A2 gene is a useful tool. An UpToDate review on "Diagnosis of classic congenital adrenal hyperplasia due to 21-hydroxylase deficiency" (Merke, 2011) states that "[g]enetic testing also can be used to evaluate borderline cases. Genetic testing detects approximately 95 percent of mutant alleles". Furthermore, the Endocrine Society's clinical practice guideline on congenital adrenal hyperplasia (Speiser et al, 2010) suggested genotyping only when results of the adrenocortical profile following cosyntropin stimulation test are equivocal or for purposes of genetic counseling. The Task Force recommends that genetic counseling be given to parents at birth of a CAH child, and to adolescents at the transition to adult care. Malignant hyperthermia Wappler (2010) stated that malignant hyperthermia (MH)-susceptible patients have an increased risk during anesthesia. The aim of this review was to present current knowledge about pathophysiology and triggers of MH as well as concepts for safe anesthesiological management of these patients. Trigger substances and mechanisms have been well-defined to date. Anesthesia can be safely performed with i.v. anesthetics, nitrous oxide, non-depolarizing muscle relaxants, local anesthetics as well as xenon.
Attention must be directed to the preparation of the anesthetic machine because modern work-stations need longer cleansing times than their predecessors. Alternatively, activated charcoal might be beneficial for elimination of volatile anesthetics. Day case surgery can be performed in MH-susceptible patients, if all safety aspects are observed. Whether there is an association between MH susceptibility and other disorders is still a matter of debate. The authors concluded that the incidence of MH is low, but the prevalence can be estimated as up to 1:3,000. Because MH is potentially lethal, it is relevant to establish management concepts for peri-operative care in susceptible patients. This includes pre-operative genetic and in-vitro muscle contracture test (IVCT), preparation of the anesthetic work-station, use of non-triggering anesthetics, adequate monitoring, availability of sufficient quantities of dantrolene and appropriate post-operative care. Taking these items into account, anesthesia can be safely performed in susceptible patients. Moreover, an UpToDate review on "Susceptibility to malignant hyperthermia" (Litman, 2011) states that "the contracture test is performed at specific centers around the world (four in the United States). Following testing, the referring physician receives a report indicating whether testing was positive, negative, or equivocal. Positive or equivocal results should be followed-up with genetic testing. Referral information can be found on the Malignant Hyperthermia Association of the United States (MHAUS) website". Genetic testing for MH is indicated in the following groups:
- Patients with a positive or equivocal contracture test to determine the presence of a specific mutation.
- Individuals with a positive genetic test for MH in a family member.
- Patients with a clinical history suspicious for MH (acute MH episode, masseter muscle rigidity, post-operative myoglobinuria, heat- or exercise-induced rhabdomyolysis) who are unable or unwilling to undergo contracture testing.
Licis et al (2011) stated that sleep-walking is a common and highly heritable sleep disorder. However, inheritance patterns of sleep-walking are poorly understood and there have been no prior reports of genes or chromosomal localization of genes responsible for this disorder. These researchers described the inheritance pattern of sleep-walking in a 4-generation family and identified the chromosomal location of a gene responsible for sleep-walking in this family. A total of 9 affected and 13 unaffected family members of a single large family were interviewed and DNA samples collected. Parametric linkage analysis was performed. Sleep-walking was inherited as an autosomal dominant disorder with reduced penetrance in this family. Genome-wide multi-point parametric linkage analysis for sleep-walking revealed a maximum logarithm of the odds score of 3.44 at chromosome 20q12-q13.12 between 55.6 and 61.4 cM. The authors described the first genetic locus for sleep-walking at chromosome 20q12-q13.12, and concluded that sleep-walking may be transmitted as an autosomal dominant trait with reduced penetrance. In an editorial that accompanied the aforementioned study, Dogu and Pressman (2011) noted that "[a]ccording to currently accepted evidence-based theories, the occurrence of sleepwalking requires genetic predisposition, priming factors such as severe sleep deprivation or stress, and, in addition, a proximal trigger factor such as noise or touch. These factors form the background for a “perfect storm,” all of which must occur before a sleepwalking episode will occur. Hereditary factors likely play an important role, with recessive and multifactorial inheritance patterns having been reported.
A recent genetic study has shown that the HLA DQB1*05 Ser74 variant is a major susceptibility factor for sleepwalking in familial cases, but this finding has yet to be replicated. Another study attempted to find a causal relationship between sleepwalking and sleep-disordered breathing in cosegregated families of both disorders. However, this study was limited by the absence of molecular data .... The current diagnosis of sleepwalking is based almost entirely on clinical history. There are no objective, independent means of confirming the diagnosis. Additionally, treatment of sleepwalking is symptomatic, aimed at suppressing arousal or reducing deep sleep. Identification of causative genes may eventually permit development of an independent test and treatments aimed at the underlying causes of this disorder". RetnaGene AMD (Sequenom Center for Molecular Medicine) is a laboratory-developed genetic test to assess the risk of developing choroidal neovascularization (CNV), the wet form of age-related macular degeneration (AMD), a common eye disorder of the elderly that can lead to blindness. The test identifies at-risk Caucasians, aged 60 and older. A report of the American Academy of Ophthalmology (Stone et al, 2012) recommends avoidance of routine genetic testing for genetically complex disorders like age-related macular degeneration and late-onset primary open-angle glaucoma until specific treatment or surveillance strategies have been shown in one or more published clinical trials to be of benefit to individuals with specific disease-associated genotypes. The report recommends that, in the meantime, genotyping of such patients should be confined to research studies. The report stated that complex disorders (e.g., age-related macular degeneration and glaucoma) tend to be more common in the population than monogenic diseases, and the presence of any one of the disease-associated variants is not highly predictive of the development of disease.
The report stated that, in many cases, standard clinical diagnostic methods like biomicroscopy, ophthalmoscopy, tonography, and perimetry will be more accurate for assessing a patient’s risk of vision loss from a complex disease than the assessment of a small number of genetic loci. The report said that genetic testing for complex diseases will become relevant to the routine practice of medicine as soon as clinical trials can demonstrate that patients with specific genotypes benefit from specific types of therapy or surveillance. The report concluded that, until such benefit can be demonstrated, the routine genetic testing of patients with complex eye diseases, or unaffected patients with a family history of such diseases, is not warranted.

Central core disease (CCD), also known as central core myopathy and Shy-Magee syndrome, is an inherited neuromuscular disorder characterized by central cores on muscle biopsy and clinical features of a congenital myopathy. Prevalence is unknown but the condition is probably more common than other congenital myopathies. CCD typically presents in infancy with hypotonia and motor developmental delay and is characterized by predominantly proximal weakness pronounced in the hip girdle; orthopedic complications are common and malignant hyperthermia susceptibility (MHS) is a frequent complication.

Malignant hyperthermia (MH) or malignant hyperpyrexia is a rare but severe pharmacogenetic disorder that occurs when patients undergoing anesthesia experience a hyperthermic reaction when exposed to certain anesthetic agents. Anesthetic agents that may trigger MH are desflurane, enflurane, halothane, isoflurane, sevoflurane, and suxamethonium chloride. MH usually occurs in the operating theater, but can occur at any time during anesthesia and up to an hour after discontinuation.
CCD and MHS are allelic conditions both due to (predominantly dominant) mutations in the skeletal muscle ryanodine receptor (RYR1) gene, encoding the principal skeletal muscle sarcoplasmic reticulum calcium release channel (RyR1). Altered excitability and/or changes in calcium homeostasis within muscle cells due to mutation-induced conformational changes of the RyR protein are considered the main pathogenetic mechanism(s). The diagnosis of CCD is based on the presence of suggestive clinical features and central cores on muscle biopsy; muscle MRI may show a characteristic pattern of selective muscle involvement and aid the diagnosis in cases with equivocal histopathological findings. Mutational analysis of the RYR1 gene may provide genetic confirmation of the diagnosis. Further evaluation of the underlying molecular mechanisms may provide the basis for future rational pharmacological treatment. The reference standard test for establishing a clinical diagnosis of MHS is the caffeine halothane contracture test (CHCT) in the United States, and the in vitro contracture test (IVCT) in Europe and Australasia. The CHCT and IVCT are similar and measure the muscle contracture in the presence of the anesthetic halothane and caffeine. Both tests categorize patients as being MHS, MH equivocal (MHE), or MH negative (MHN). These tests are invasive and must be performed using a skeletal muscle biopsy that is < 5 hours old. Sequence variants in the ryanodine receptor 1 (skeletal) (RYR1) gene have been shown to be associated with MH susceptibility (MHS) and are found in up to 80% of patients with confirmed MH, usually with an autosomal dominant pattern of inheritance. Although additional genetic loci have been associated with MH, the contribution of these other loci to MH is low. Genetic testing for RYR1 sequence variants from commercial providers is performed by polymerase chain reaction (PCR) followed by direct sequencing. 
Genetic tests for RYR1 sequence variants can be performed either to identify sequence variants in genetic hot spots of the RYR1 gene that cover all exons on which causative MH variants can be found, or to screen for sequence variants across the entire 106 exons of the RYR1 gene. Examples of commercially available tests are: Malignant Hyperthermia/Central Core Disease (570-572) RYR1 Sequencing (Prevention Genetics); Malignant hyperthermia (RYR1 gene sequenced analysis, partial) (University of Pittsburgh Medical Center, Division of Molecular Diagnostics [UPMC Molecular Diagnostics]).

Hereditary Hemorrhagic Telangiectasia

Hereditary hemorrhagic telangiectasia (HHT), also called Osler-Weber-Rendu syndrome, is an autosomal dominant disorder that results in the development of multiple abnormalities in the blood vessels. Some arterial vessels flow directly into veins rather than into the capillaries, resulting in arteriovenous malformations. When these occur in vessels near the surface of the skin, where they are visible as red markings, they are known as telangiectases (the singular is telangiectasia). Nosebleeds are very common in people with HHT, and more serious problems may arise from hemorrhages in the brain, liver, lungs, or other organs.

Forms of HHT include type 1, type 2, type 3, and juvenile polyposis/hereditary hemorrhagic telangiectasia syndrome. People with type 1 tend to develop symptoms earlier than those with type 2, and are more likely to have blood vessel malformations in the lungs and brain. Type 2 and type 3 may be associated with a higher risk of liver involvement. Women are more likely than men to develop blood vessel malformations in the lungs with type 1, and are also at higher risk of liver involvement with both type 1 and type 2. Individuals with any form of hereditary hemorrhagic telangiectasia, however, can have any of these problems.
Genetic testing utilizes a blood test to determine whether or not an at-risk individual carries the genes responsible for the development of disease. Mutations in two genes, endoglin and ALK-1, have been shown to be responsible for pure HHT, with the disease subtypes designated HHT1 and HHT2. Mutations in Smad4 result in a juvenile polyposis-HHT overlap syndrome.

Shah et al (2010) wrote that hereditary hemorrhagic telangiectasia (HHT) is an autosomal dominant disorder with age-dependent penetrance characterized by recurrent epistaxis, mucocutaneous telangiectasias, and visceral arteriovenous malformations (AVMs). AVMs can occur in multiple organs, including brain, liver, and lungs, and are associated with a large portion of disease morbidity. Pulmonary AVMs (PAVMs) can be asymptomatic or manifest as dyspnea and hypoxemia secondary to shunting. The presence of untreated PAVMs can also lead to transient ischemic attacks, stroke, hemothorax, and systemic infection, including cerebral abscesses. Definitive diagnosis is made when three or more clinical findings are present, which include the features mentioned above and a first-degree relative diagnosed with HHT. Diagnosis is suspected when two findings are present. Genetic testing can help confirm diagnosis. Mutations in three genes are known to cause disease: ENG, ACVRL1, and SMAD4. Genetic testing involves sequence and duplication/deletion analysis and identifies a mutation in roughly 80% of patients with clinical disease.

The textbook Flint: Cummings Otolaryngology: Head & Neck Surgery (2010) states that genetic testing is available for prenatal diagnosis of hereditary hemorrhagic telangiectasia. This is important because catastrophic hemorrhage can occur in children with clinically silent disease; thus, screening imaging for cerebral and pulmonary arteriovenous malformations is indicated in children who have a family history.
According to the textbook Feldman: Sleisenger and Fordtran's Gastrointestinal and Liver Disease (2010), genetic testing to detect mutations in the ENG, ALK-1, or MADH4 genes may be helpful in selected cases. Patients suspected of having HHT should be screened for cerebral and pulmonary arteriovenous malformations (AVMs), and family members of the patient should consider genetic testing.

The textbook Cassidy: Management of Genetic Syndromes (2005) reports that, to date, mutation testing has not been widely used in the diagnosis of HHT. However, mutations in either ALK1 or endoglin have been demonstrated in over 70% of unrelated, affected individuals tested using direct gene sequencing of genomic DNA. Genetic testing for HHT will have an important role in both the testing of individuals for whom the diagnosis is uncertain and in presymptomatic testing of young adults at risk of HHT.

Bossler et al (2006) described the results of mutation analysis on a consecutive series of 200 individuals undergoing clinical genetic testing for HHT. The observed sensitivity of mutation detection was similar to that in other series with strict ascertainment criteria. A total of 127 probands were found, with sequence changes consisting of 103 unique alterations, 68 of which were novel. In addition, eight intragenic rearrangements in the ENG gene and two in the ACVRL1 gene were identified in a subset of coding sequence mutation-negative individuals. Most individuals tested could be categorized by the number of HHT diagnostic criteria present. Surprisingly, almost 50% of the cases with a single symptom were found to have a significant sequence alteration; three of these reported only nosebleeds. The authors concluded, “genetic testing can confirm the clinical diagnosis in individuals and identify presymptomatic mutation carriers.
As many of the complications of HHT disease can be prevented, a confirmed molecular diagnosis provides an opportunity for early detection of AVMs and management of the disease.”

Spinal Muscular Atrophy

Spinal muscular atrophy (SMA), which has an estimated prevalence of 1 in 10,000, is characterized by proximal muscle weakness resulting from the degeneration of anterior horn cells in the spinal cord. SMA type I is typically diagnosed at birth or within the first 3 to 6 months of life; affected children are unable to sit unassisted and usually die from respiratory failure within 2 years. Those with SMA type II, which is diagnosed before 18 months of age, are unable to stand or walk unaided, although they may be able to sit and may survive beyond age 4. The clinical features of SMA types III and IV are milder and manifest after 18 months of age or in adulthood, respectively.

SMA is inherited in an autosomal recessive manner and is caused by alterations in the survival motor neuron 1 (SMN1) gene located on chromosome 5 at band q12.2 to q13.3. Approximately 95% of SMA patients have the condition as a result of a homozygous deletion involving at least exon 7 of SMN1. Approximately 5% are compound heterozygotes, with a deletion in 1 allele of SMN1 and a subtle intragenic variation in the other. SMN2, a gene nearly identical in sequence to SMN1, is located in the same highly repetitive region on chromosome 5. Although it does not cause SMA, it has been shown to modify the phenotype of the condition; those with the milder SMA types II or III tend to have more copies of SMN2 than those with the severe type I.

SMN1 deletions are detected by polymerase chain reaction (PCR) amplification of exon 7 of the SMN genes, followed by restriction fragment length polymorphism (RFLP) analysis. Following amplification, exon 7 of SMN2 will be cut with the restriction enzyme DraI, while exon 7 of SMN1 will remain intact.
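The DraI-based discrimination can be illustrated with a small in-silico digestion sketch. The amplicon strings below are hypothetical toy sequences, not the real SMN exon 7 amplicons; the only assumption carried over from the text is that the assay yields a DraI recognition site (TTT^AAA) in the SMN2 product but not in the SMN1 product:

```python
# In-silico DraI digestion: DraI recognizes TTTAAA and cuts between TTT and AAA.
DRA_I = "TTTAAA"

def dra_i_digest(amplicon: str) -> list:
    """Split an amplicon at every DraI site (cut TTT^AAA); returns the fragments."""
    fragments = []
    start = 0
    site = amplicon.find(DRA_I, start)
    while site != -1:
        fragments.append(amplicon[start:site + 3])  # TTT stays on the left fragment
        start = site + 3
        site = amplicon.find(DRA_I, start)
    fragments.append(amplicon[start:])
    return fragments

# Hypothetical toy amplicons: SMN2 product carries a DraI site, SMN1 product does not.
smn1_amplicon = "GGCTATTTTAGACACCTTAACTGG"  # no TTTAAA -> remains intact
smn2_amplicon = "GGCTATTTAAACACCTTAACTGG"   # TTTAAA present -> cut into 2 fragments

print(len(dra_i_digest(smn1_amplicon)))  # 1 (uncut)
print(len(dra_i_digest(smn2_amplicon)))  # 2 (cut)
```

On a gel this corresponds to the interpretation given in the text: a patient with a homozygous SMN1 deletion shows only cut (SMN2-derived) fragments and no uncut SMN1 product.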
SMA patients with homozygous SMN1 deletions will show an absence of the uncut SMN1 exon 7 PCR products. To detect heterozygous SMN1 deletions in SMA carriers or compound heterozygotes, quantitative PCR (qPCR) is performed. To identify subtle intragenic variations in SMA patients found to have only 1 copy of the deletion, the SMN1 gene is typically sequenced. Candidates for diagnostic testing include infants, children, and adults with generalized hypotonia and proximal muscle weakness of unknown etiology. Carrier testing may be offered to couples considering pregnancy, including those with a family history of SMA, and prenatal diagnosis should be made available to all identified carriers.

Genetic Test Panels for Nonsyndromic Hearing Loss

There is limited published evidence for the clinical validity and clinical utility of specific genetic test panels for nonsyndromic hearing loss. A number of test panels are currently available commercially (e.g., OtoScope, OtoGenome, OtoSeq). The genes included in these test panels differ significantly, and there is limited published information on their clinical utility and clinical validity.

Hearing loss may be classified as either syndromic or nonsyndromic. Nonsyndromic hearing loss is defined by the absence of malformations of the external ear or other medical problems in the affected individual. With syndromic hearing loss, malformations of the external ear and/or other medical problems are present. Approximately 50% of nonsyndromic hearing loss can be attributed to a genetic cause, and may be inherited in an autosomal recessive (70% of patients) or autosomal dominant (20% of patients) manner, with mitochondrial, X-linked, and other genetic causes making up the remainder of patients. Sequence variants in approximately 60 genes and some micro-RNAs have been associated with nonsyndromic hearing loss. Micro-RNAs are post-transcriptional regulators that consist of 20-25 nucleotides.
Usher and Pendred syndromes are the most common forms of the approximately 400 forms of syndromic hearing loss. Both have autosomal recessive inheritance. Usher syndrome is characterized by sensorineural hearing loss and later development of retinitis pigmentosa. Usher syndrome has three forms that vary by the severity of hearing loss and whether vestibular dysfunction is present. The three types of Usher syndrome have been associated with sequence variants in 9 different genes. Pendred syndrome is characterized by congenital hearing loss and euthyroid goiter that develops in the second or third decade of life. Pendred syndrome is associated with sequence variants in the SLC26A4 gene. Some of the genes associated with Usher and Pendred syndromes may also be associated with nonsyndromic hearing loss.

The OtoSCOPE test has been developed to make use of next-generation sequencing capabilities to simultaneously test for sequence variants in 66 genes associated with nonsyndromic hearing loss as well as both Usher and Pendred syndromes. The claimed advantage of the OtoSCOPE test is that simultaneous analysis of the 66 genes included in the test may reduce the time and cost compared with genetic testing of individual genes. The OtoSCOPE genetic testing for hereditary hearing loss is considered investigational/experimental because there is inadequate evidence in the peer-reviewed published clinical literature regarding its effectiveness.

Published evidence for the OtoSeq test panel includes an epidemiological study of the use of a component of the OtoSeq panel in identifying certain hearing loss genes in 34 Pakistani families (Shahzad et al, 2013). In addition, there is a preliminary study of the performance of the OtoSeq in 8 individuals with hearing loss, comparing the results of next-generation sequencing with Sanger sequencing (Sivakumaran et al, 2013). There is insufficient published information about the performance and clinical utility of this test.
Bedard et al (2011) noted that patients with heterotaxy have characteristic cardiovascular malformations, abnormal arrangement of their visceral organs, and midline patterning defects that result from abnormal left-right patterning during embryogenesis. Loss of function of the transcription factor ZIC3 causes X-linked heterotaxy and isolated congenital heart malformations and represents one of the few known monogenic causes of congenital heart disease. The birth incidence of heterotaxy-spectrum malformations is significantly higher in males, but the authors’ previous work indicated that mutations within ZIC3 did not account for the male over-representation. Therefore, cross species comparative sequence alignment was used to identify a putative novel fourth exon, and the existence of a novel alternatively spliced transcript was confirmed by amplification from murine embryonic RNA and subsequent sequencing. This transcript, termed Zic3-B, encompasses exons 1, 2, and 4 whereas Zic3-A encompasses exons 1, 2, and 3. The resulting protein isoforms are 466 and 456 amino acid residues respectively, sharing the first 407 residues. Importantly, the last 2 amino acids in the 5th zinc finger DNA binding domain are altered in the Zic3-B isoform, indicating a potential functional difference that was further evaluated by expression, subcellular localization, and transactivation analyses. The temporo-spatial expression pattern of Zic3-B overlaps with Zic3-A in-vivo, and both isoforms are localized to the nucleus in-vitro. Both isoforms can transcriptionally activate a Gli binding site reporter, but only ZIC3-A synergistically activates upon co-transfection with Gli3, suggesting that the isoforms are functionally distinct. The authors concluded that screening 109 familial and sporadic male heterotaxy cases did not identify pathogenic mutations in the newly identified fourth exon and larger studies are necessary to establish the importance of the novel isoform in human disease. 
Tariq et al (2011) noted that heterotaxy-spectrum cardiovascular disorders are challenging for traditional genetic analyses because of clinical and genetic heterogeneity, variable expressivity, and non-penetrance. In this study, high-resolution single nucleotide polymorphism (SNP) genotyping and exon-targeted array comparative genomic hybridization (CGH) platforms were coupled to whole-exome sequencing to identify a novel disease candidate gene. SNP genotyping identified absence-of-heterozygosity regions in the heterotaxy proband on chromosomes 1, 4, 7, 13, 15, and 18, consistent with parental consanguinity. Subsequently, whole-exome sequencing of the proband identified 26,065 coding variants, including 18 non-synonymous homozygous changes not present in dbSNP132 or 1000 Genomes. Of these 18, only 4 -- 1 each in CXCL2, SHROOM3, CTSO, RXFP1 -- were mapped to the absence-of-heterozygosity regions, each of which was flanked by more than 50 homozygous SNPs, confirming recessive segregation of mutant alleles. Sanger sequencing confirmed the SHROOM3 homozygous missense mutation and it was predicted as pathogenic by 4 bio-informatic tools. SHROOM3 has been identified as a central regulator of morphogenetic cell shape changes necessary for organogenesis and can physically bind ROCK2, a rho kinase protein required for left-right patterning. Screening 96 sporadic heterotaxy patients identified 4 additional patients with rare variants in SHROOM3. The authors concluded that, using whole-exome sequencing, they identified a recessive missense mutation in SHROOM3 associated with heterotaxy syndrome and identified rare variants in subsequent screening of a heterotaxy cohort, suggesting SHROOM3 as a novel target for the control of left-right patterning. This study revealed the value of SNP genotyping coupled with high-throughput sequencing for identification of high-yield candidates for rare disorders with genetic and phenotypic heterogeneity.
Also, UpToDate reviews on “Clinical manifestations, pathophysiology, and diagnosis of atrioventricular (AV) canal defects” (Fleishman and Tugertimur, 2013) and “Congenital heart disease (CHD) in the newborn: Presentation and screening for critical CHD” (Altman, 2013) do not mention the use of genetic testing as a management tool.

Mitochondrial Recessive Ataxia Syndrome

Lee et al (2007) stated that spino-cerebellar ataxia (SCA) is a heterogeneous group of neurodegenerative disorders with common features of adult-onset cerebellar ataxia. Many patients with clinically suspected SCA are subsequently diagnosed with common SCA gene mutations. Previous reports suggested some common mitochondrial DNA (mtDNA) point mutations and mitochondrial DNA polymerase gene (POLG1) mutations might be additional underlying genetic causes of cerebellar ataxia. These researchers tested whether mtDNA point mutations A3243G, A8344G, T8993G, and T8993C, or POLG1 mutations W748S and A467T are found in patients with adult-onset ataxia who did not have common SCA mutations. A total of 476 unrelated patients with suspected SCA underwent genetic testing for SCA 1, 2, 3, 6, 7, 8, 10, 12, 17, and DRPLA gene mutations. After excluding these SCA mutations and patients with paternal transmission history, 265 patients were tested for mtDNA mutations A3243G, A8344G, T8993G, T8993C, and POLG1 W748S and A467T mutations. No mtDNA A3243G, A8344G, T8993G, T8993C, or POLG1 W748S and A467T mutation was detected in any of the 265 ataxia patients, suggesting that the upper limit of the 95 % confidence interval (CI) for the prevalence of these mitochondrial mutations in Chinese patients with adult-onset non-SCA ataxia is no higher than 1.1 %. The authors concluded that the mtDNA mutations A3243G, A8344G, T8993G, T8993C, or POLG1 W748S and A467T are very rare causes of adult-onset ataxia in Taiwan. Routine screening for these mutations in ataxia patients with Chinese origin is of limited clinical value.
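The 1.1 % figure quoted above follows from the standard exact bound for zero observed events among n subjects: with 0 mutations found in 265 patients, the one-sided 95 % upper confidence limit p solves (1 - p)^265 = 0.05; the quick "rule of three" approximation 3/n gives nearly the same answer. A minimal sketch of that arithmetic:

```python
def zero_event_upper_bound(n: int, alpha: float = 0.05) -> float:
    """Exact one-sided upper confidence limit for a proportion when 0 of n
    subjects show the event: solve (1 - p)**n = alpha for p."""
    return 1.0 - alpha ** (1.0 / n)

# 0 mutations detected in 265 ataxia patients (Lee et al, 2007)
exact = zero_event_upper_bound(265)   # ~0.0112
rule_of_three = 3 / 265               # ~0.0113, the common bedside approximation
print(round(100 * exact, 1))          # 1.1 (%), matching the figure in the study
```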
Gramstad et al (2009) noted that mutations in the catalytic subunit of polymerase gamma (POLG1) produce a wide variety of neurological disorders including a progressive ataxic syndrome with epilepsy: mitochondrial SCA and epilepsy (MSCAE). The authors’ earlier studies of patients with this syndrome raised the possibility of more prominent right than left hemisphere dysfunction. To investigate this in more detail, 8 patients (6 women, 2 men; mean age of 22.3 years) were studied. All completed an intelligence test (Wechsler Adult Intelligence Scale; WAIS), and 4 were also given memory tests and a comprehensive neuropsychological test battery. Patients with MSCAE showed significant cognitive dysfunction. Mean Verbal IQ (84.3) was significantly better than Performance IQ (71.8) (t = 5.23, p = 0.001), but memory testing and neuropsychological testing failed to detect a consistent unilateral dysfunction. The authors concluded that further studies are needed to define the profile and development of cognitive symptoms in this disorder.

Isohanni et al (2011) stated that mitochondrial DNA polymerase γ (POLG1) mutations in children often manifest as Alpers syndrome, whereas in adults, a common manifestation is mitochondrial recessive ataxia syndrome (MIRAS) with severe epilepsy. Because some patients with MIRAS have presented with ataxia or epilepsy already in childhood, these investigators searched for POLG1 mutations in children with neurologic manifestations. They investigated POLG1 in 136 children, all clinically suspected to have mitochondrial disease, with one or more of the following: ataxia, axonal neuropathy, severe epilepsy without known epilepsy syndrome, epileptic encephalopathy, encephalohepatopathy, or neuropathologically verified Alpers syndrome. A total of 7 patients had POLG1 mutations, and all of them had severe encephalopathy with intractable epilepsy. Four patients had died after exposure to sodium valproate.
Brain MRI showed parieto-occipital or thalamic hyper-intense lesions, white matter abnormality, and atrophy. Muscle histology and mitochondrial biochemistry results were normal in all. The authors concluded that POLG1 analysis should be among the first-line DNA diagnostic tests for children with an encephalitis-like presentation evolving into epileptic encephalopathy with liver involvement (Alpers syndrome), even if brain MRI and morphology, respiratory chain activities, and the amount of mitochondrial DNA in the skeletal muscle are normal. POLG1 analysis should precede valproate therapy in pediatric patients with a typical phenotype. However, POLG1 is not a common cause of isolated epilepsy or ataxia in childhood.

Tang et al (2012) determined the prevalence of MNGIE-like phenotype in patients with recessive POLG1 mutations. Mutations in the POLG1 gene, which encodes the catalytic subunit of the mitochondrial DNA polymerase gamma essential for mitochondrial DNA replication, cause a wide spectrum of mitochondrial disorders. Common phenotypes associated with POLG1 mutations include Alpers syndrome, ataxia-neuropathy syndrome, and progressive external ophthalmoplegia (PEO). Mitochondrial neuro-gastro-intestinal encephalomyopathy (MNGIE) is an autosomal recessive disorder characterized by severe gastrointestinal dysmotility, cachexia, PEO and/or ptosis, peripheral neuropathy, and leukoencephalopathy. MNGIE is caused by TYMP mutations. Rare cases of MNGIE-like phenotype have been linked to RRM2B mutations. Recently, POLG1 mutations were identified in a family with clinical features of MNGIE but no leukoencephalopathy. The coding regions and exon-intron boundaries of POLG1 were sequenced in patients suspected of POLG1-related disorders. Clinical features of 92 unrelated patients with 2 pathogenic POLG1 alleles were carefully reviewed.
Three patients, accounting for 3.3 % of all patients with 2 pathogenic POLG1 mutations, were found to have clinical features consistent with MNGIE but no leukoencephalopathy. Patient 1 carries p.W748S and p.R953C; patient 2 is homozygous for p.W748S, and patient 3 is homozygous for p.A467T. In addition, patient 2 has a similarly affected sibling with the same POLG1 genotype. POLG1 mutations may cause MNGIE-like syndrome, but the lack of leukoencephalopathy and the normal plasma thymidine favor POLG1 mutations as the responsible molecular defect.

Furthermore, UpToDate reviews on “Overview of the hereditary ataxias” (Opal and Zoghbi, 2013a) and “The spinocerebellar ataxias” (Opal and Zoghbi, 2013b) do not mention the use of POLG1 genetic testing.

The National Comprehensive Cancer Network’s clinical practice guideline on “Myelodysplastic syndromes” (2014) stated that further evaluations are necessary to establish the role of these genetic lesions in risk stratification systems in myelodysplastic syndrome. The guideline stated that mutations in TET2 are among the most common mutations reported in patients with myelodysplastic syndromes (about 20 % of cases). Mutations in SF3B1 are one of several common molecular abnormalities involving the RNA splicing machinery, occurring in 14.5 to 16.0 % of MDS cases.

A special report on “Exome sequencing for clinical diagnosis of patients with suspected genetic disorders” by the BCBSA’s Technology Evaluation Center (2013) stated that “Exome sequencing has the capacity to determine in a single assay an individual’s exomic variation profile, limited to most of the protein coding sequence of an individual (approximately 85 %), composed of about 20,000 genes, 180,000 exons (protein-coding segments of a gene), and constituting approximately 1 % of the whole genome. It is believed that the exome contains about 85 % of heritable disease-causing mutations ….
Exome sequencing, relying on next-generation sequencing technologies, is not without challenges and limitations …. Detailed guidance from regulatory or professional organizations is under development, and the variability contributed by the different platforms and procedures used by clinical laboratories offering exome sequencing as a clinical service is unknown …. Currently, the diagnostic yield for single-gene disorders appears to be no greater than 50 % and possibly less, depending on the patient population and provider expertise. Medical management options may be available for only a subset of those diagnosed”.

Strasser et al (2012) stated that Alport syndrome (ATS) is an inherited type-IV collagen disorder, caused by mutations in COL4A3 and COL4A4 (autosomal recessive) or COL4A5 (X-linked). Clinical symptoms include progressive renal disease, eye abnormalities and high-tone sensori-neural deafness. A renal histology very similar to ATS is observed in a subset of patients affected by mutations in MYH9, encoding non-muscle-myosin Type IIa -- a cytoskeletal contractile protein. MYH9-associated disorders (May-Hegglin anomaly, Epstein and Fechtner syndrome, and others) are inherited in an autosomal dominant manner and characterized by defects in different organs (including eyes, ears, kidneys and thrombocytes). These researchers described a 6-year-old girl with hematuria, proteinuria, and early sensori-neural hearing loss. The father of the patient is affected by ATS, the mother by isolated inner ear deafness. Genetic testing revealed a pathogenic mutation in COL4A5 (c.2605G>A) in the girl and her father and a heterozygous mutation in MYH9 (c.4952T>G) in the girl and her mother. The paternal COL4A5 mutation seems to account for the complete phenotype of ATS in the father and the maternal mutation in MYH9 for the inner ear deafness in the mother.
The authors suggested that the interaction of the 2 mutations could be responsible for both the unexpected severity of ATS symptoms and the very early onset of inner ear deafness in the girl.

An UpToDate review on “Congenital and acquired disorders of platelet function” (Coutre, 2013) states that “Giant platelet disorders -- Inherited platelet disorders with giant platelets are quite rare (picture 2 and algorithm 1 and table 4). These include platelet glycoprotein abnormalities (e.g., Bernard-Soulier syndrome), deficiency of platelet alpha granules (e.g., gray platelet syndrome), the May-Hegglin anomaly, which also involves the presence of abnormal neutrophil inclusions (i.e., Dohle-like bodies), and some kindreds with type 2B von Willebrand disease (the Montreal platelet syndrome)”. This review does not mention the use of genetic testing as a management tool for giant platelet disorders.

UpToDate reviews on “Inborn errors of metabolism: Epidemiology, pathogenesis, and clinical features” (Sutton, 2013a) and “Inborn errors of metabolism: Classification” (Sutton, 2013b) do not mention the use of genetic testing as a management tool.

Very Long Chain AcylCoA Dehydrogenase Deficiency (VLCADD)

An UpToDate review on “Newborn screening” (Sielski, 2013) states that “MS-MS [tandem mass spectrometry] detects more cases of inborn errors of metabolism than clinical diagnosis. In a study from New South Wales and the Australian Capital Territory, Australia, the prevalence of 31 inborn errors of metabolism affecting the urea cycle, amino acids (excluding PKU), organic acids, and fatty acid oxidation detected by MS-MS in 1998 to 2002 was 15.7 per 100,000 births, compared to 8.6 to 9.5 per 100,000 births in the four four-year cohorts preceding expanded screening. The increased rate of diagnosis was most apparent for the medium-chain and short-chain acyl-Co-A dehydrogenase deficiencies. Whether all children with disorders detected by MS-MS would have become symptomatic is uncertain ….
The American Academy of Pediatrics has developed newborn screening fact sheets for 12 disorders: biotinidase deficiency, congenital adrenal hyperplasia, congenital hearing loss, congenital hypothyroidism, cystic fibrosis, galactosemia, homocystinuria, maple syrup urine disease, medium-chain acyl-coenzyme A dehydrogenase deficiency, PKU, sickle cell disease and other hemoglobinopathies, and tyrosinemia …. With the use of tandem mass spectrometry (MS-MS), the prevalence of a confirmed metabolic disorder detected by newborn screening is 1:4000 live births (about 12,500 diagnoses each year) in the United States. The most commonly diagnosed conditions are hearing loss, primary congenital hypothyroidism, cystic fibrosis, sickle cell disease, and medium-chain acyl-CoA dehydrogenase deficiency”. This review does not mention very long chain acylCoA dehydrogenase deficiency.

Congenital Stationary Night Blindness

According to Orphanet (a portal for rare diseases and orphan drugs), congenital stationary night blindness (CSNB) is an inherited retinal disorder that predominantly affects rod function. It is a rare disease, and 3 types of transmission can be found: (i) autosomal dominant, (ii) autosomal recessive, and (iii) X-linked recessive. The condition is heterogeneous. The only symptom is hemeralopia with a moderate loss of visual acuity. Both the funduscopy and visual field are normal. In recessive forms, the “b” wave on the electroretinogram (ERG) is not found in the scotopic study, while the “a” wave is normal and increases with light intensity. In dominant forms, the “b” wave is seen. Levels of rhodopsin are normal and regenerate normally. Signal transmission may be the affected function. There is no specific treatment for CSNB. http://www.orpha.net/consor/cgi-bin/OC_Exp.php?lng=EN&Expert=215.

According to Genetics Home Reference, X-linked CSNB is a disorder of the retina. People with this condition typically have difficulty seeing in low light (night blindness).
They also have other vision problems, including reduced visual acuity, high myopia, nystagmus, and strabismus. Color vision is typically not affected by this disorder. The visual problems associated with this condition are congenital. They tend to remain stable (stationary) over time. Researchers have identified 2 major types of X-linked CSNB: (i) the complete form, and (ii) the incomplete form. The types have very similar signs and symptoms. However, everyone with the complete form has night blindness, while not all people with the incomplete form have night blindness. The types are distinguished by their genetic cause and by the results of ERG. http://ghr.nlm.nih.gov/condition/x-linked-congenital-stationary-night-blindness. In general, the diagnosis of X-linked CSNB can be made by ophthalmologic examination (including ERG) and family history consistent with X-linked inheritance (Boycott et al, 2012) http://www.ncbi.nlm.nih.gov/books/NBK1245/. According to a Medscape review on “The Genetics of Hereditary Retinopathies and Optic Neuropathies” (Iannaccone, 2005), “CSNB can be inherited according to all Mendelian inheritance patterns; 2 X-linked and 2 autosomal dominant genes have been cloned. In all types of CSNB, night vision is congenitally but non-progressively impaired and the retinal examination is normal. Most CSNB patients also have congenital nystagmus as the presenting sign, which can create a differential diagnostic challenge with Leber congenital amaurosis. Typically, patients with complete X-linked CSNB are also moderate-to-high myopes. The X-linked CSNB forms, which are the most common ones, all share an electronegative electroretinogram response similar to that seen in X-linked retinoschisis, and are distinguished in CSNB type 1 (also known as complete CSNB) and CSNB type 2 (incomplete CSNB) based on additional electroretinogram features, a distinction that has been confirmed at the genetic level”. http://www.medscape.com/viewarticle/501761_6.
Price et al (1988) reported that 7 of 8 patients presented initially or were followed for decreased acuity and nystagmus without complaints of night blindness. The diagnosis of CSNB was established with ERG and dark adaptation testing. They stated that careful electrodiagnostic testing is needed to provide accurate genetic counseling. Two patients showed pupillary constriction to darkness, which is a sign of retinal disease in young patients. Lorenz et al (1996) presented the clinical data of 2 families with previously undiagnosed X-linked incomplete CSNB; ERG recordings in both families were suggestive of CSNB. The ERG of the obligate carrier was normal. In an attempt to distinguish between the complete and the incomplete type, and to identify further carrier signs, scotopic perimetry and dark adaptation were performed in both affected males and carriers. Scotopic perimetry tested the rod-mediated visual pathway in its spatial distribution. In affected males with non-recordable ERGs, scotopic perimetry and dark adaptation disclosed residual rod function indicating an incomplete type. In carriers, there was a sensitivity loss at 600 nm, which may be a new carrier sign. The authors concluded that correct diagnosis of the different forms of CSNB together with the identification of carriers is important for (i) genetic counseling, and (ii) linkage studies to identify the gene(s) for CSNB. Kim et al (2012) evaluated the frequency of negative waveform ERGs in a tertiary referral center. All patients who had an ERG performed at the electrophysiology clinic at Emory University from January 1999 through March 2008 were included in the study. Patients with b-wave amplitude less than or equal to a-wave amplitude during the dark-adapted bright flash recording, in at least 1 eye, were identified as having a “negative ERG”. Clinical information, such as age, gender, symptoms, best corrected visual acuity, and diagnoses were recorded for these patients when available.
A total of 1,837 patients underwent ERG testing during the study period. Of those, 73 patients had a negative ERG, for a frequency of 4.0 %. Within the adult (greater than or equal to 18 years of age) and pediatric populations, the frequencies of a negative ERG were 2.5 and 7.2 %, respectively. Among the 73 cases, negative ERGs were more common among male than female patients, 6.7 % versus 1.8 % (p < 0.0001). Negative ERGs were most common among male children and least common among female adults, 9.6 % versus 1.1 %, respectively (p < 0.0001). Overall in this group of patients, the most common diagnoses associated with a negative ERG were CSNB (n = 29) and X-linked retinoschisis (XLRS, n = 7). The authors concluded that the overall frequency of negative ERGs in this large retrospective review was 4.0 %. Negative ERGs were most common among male children and least common among female adults. Despite the growing number of new diagnoses associated with negative ERGs, CSNB and XLRS appear to be the most likely diagnoses for a pediatric patient who presents with a negative ERG. It is also interesting to note that in a recently completed clinical trial (last verified June 2012) of “Treatment of Congenital Stationary Night Blindness with an Alga Containing High Dose of Beta Carotene”, the selection criteria for participants of this trial do not include genetic testing. They included the following (http://www.clinicaltrials.gov/ct2/show/NCT00569023):
- Isolated rod response markedly reduced (less than 20 % of normal) after 20 mins dark adaptation and improved by 50 % after 2 hrs
- Negative maximal response (“a” wave to “b” wave ratio less than 2)
- Retinal mid-peripheral white dots (more than 3,000 dots).
Amsterdam II criteria:

At least 3 relatives must have an HNPCC-related cancer*, and all of the following criteria must be present:
- At least 1 of the relatives with an HNPCC-associated cancer should be diagnosed before age 50 years; and
- At least 2 successive generations must be affected; and
- FAP should be excluded in the colorectal cancer cases (if any); and
- One relative must be a 1st-degree relative of the other two; and
- Tumors should be verified whenever possible.

Revised Bethesda criteria:

Member must meet 1 or more of the following criteria:
- Colorectal cancer is diagnosed in a member with 1 or more 1st-degree relatives with an HNPCC-related cancer*, with one of the cancers diagnosed under age 50 years; or
- Colorectal cancer is diagnosed in a member with 2 or more 1st- or 2nd-degree relatives with an HNPCC-related cancer*, regardless of age; or
- Member has colorectal cancer diagnosed before age 50 years; or
- Member has colorectal cancer with microsatellite instability-high (MSI-H) histology, where cancer is diagnosed before age 60 years; or
- Member has synchronous or metachronous HNPCC-related cancers*, regardless of age.

* Hereditary nonpolyposis colorectal cancer (HNPCC)-related cancers include colorectal, endometrial, gastric, ovarian, pancreas, ureter and renal pelvis, brain (usually glioblastoma as seen in Turcot syndrome), and small intestinal cancers, as well as sebaceous gland adenomas and keratoacanthomas in Muir-Torre syndrome.
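Read as decision rules, the two criteria sets differ in logical shape: Amsterdam II is a pure conjunction (every clause must hold), while the revised Bethesda criteria form a disjunction (any single clause suffices). A minimal, purely illustrative sketch in Java — the class, method, and parameter names are hypothetical and not drawn from any clinical system:

```java
// Illustrative only: encodes the conjunctive structure of the Amsterdam II
// criteria as a boolean predicate. Field names are hypothetical.
public class AmsterdamII {

    // Each flag answers one clause of the criteria for a given family.
    public static boolean meetsAmsterdamII(int relativesWithHnpccCancer,
                                           boolean oneDiagnosedBefore50,
                                           boolean twoSuccessiveGenerations,
                                           boolean fapExcluded,
                                           boolean oneIsFirstDegreeOfOtherTwo,
                                           boolean tumorsVerified) {
        // All clauses must hold simultaneously (a conjunction), unlike the
        // revised Bethesda criteria, where any one clause is sufficient.
        return relativesWithHnpccCancer >= 3
                && oneDiagnosedBefore50
                && twoSuccessiveGenerations
                && fapExcluded
                && oneIsFirstDegreeOfOtherTwo
                && tumorsVerified;
    }

    public static void main(String[] args) {
        System.out.println(meetsAmsterdamII(3, true, true, true, true, true));  // prints "true"
        System.out.println(meetsAmsterdamII(3, false, true, true, true, true)); // prints "false"
    }
}
```

A Bethesda-style predicate would instead chain the clauses with `||`, which is why a family can fail Amsterdam II yet still meet the revised Bethesda criteria.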
In the early years of the Nazi regime, Jews had sought refuge mainly in neighboring European countries, but also in Palestine and the United States. With the Nazis’ reach expanding and options for immigration diminishing, China increasingly turned into a destination for Jews seeking to escape. The SS Conte Verde was one of the steamers that brought refugees to Shanghai from the Italian ports of Genoa and Trieste. The voyage to China took one month and was quite costly – a challenge for German Jews whose financial situation had been severely eroded under the Nazis. After studies at the Academy of Art in Vienna, the printmaker Michel Fingesten had traveled extensively and ultimately settled in Germany. Neither the Austrian national’s Jewish descent nor his penchant for the erotic endeared him to the Nazis. The increasingly unbearable racial politics of the regime made him decide to stay in Italy after a family visit to Trieste in 1935. Fingesten is known mainly as an illustrator and as a prolific, imaginative designer of book plates. April 18, 1938 was his 54th birthday.
How it works

Each year, Engineers Without Borders UK works with one of our partner organisations to produce a series of engineering challenges based on the real-world problems the partner’s community faces. These challenges span engineering disciplines including water and sanitation, energy, the built environment, transport, waste management, information communications technology and local industry. Students at participating universities are asked to design a potential solution to one of the challenges, appropriate to the economic, social and environmental context in which it will be used.

A mandatory part of the curriculum

Participating institutions run the Engineering for People Design Challenge as part of the engineering curriculum for first- and second-year undergraduates, making it a mandatory part of the degree course. The initiative contributes to the Engineering Council requirements for students on accredited degrees to demonstrate understanding of the design process and have a broad awareness of the economic, legal, social, ethical and environmental context of engineering activity. To find out more, go to the EWB-UK site: https://www.ewb-uk.org/engineering-for-people/
Read Specific Line from file Using Java
Here we are going to read a specific line from the text file. For this we have created a for loop to read lines 1 to 10 from the text file. When the loop reaches the fifth line, the br.readLine() method ...

Read Lines from text file
Here's a brief description of what my Java code does: I'm using BufferedReader to read lines from a text file and split each... read from the text file and display the output as desired. Unable to read the rest ...

Delete specific lines from text in Java
Hi, beginning in Java, I'm trying to delete a specific line from a text file. Ex: I want to delete data... number that I want to delete. How could it be possible with Java? Thanks a lot.

Read text file and store the data in MySQL - JDBC
When we store the data in a MySQL table from a text file, it stores the data from each new line into a new column. How can we store the data in different columns from a single line of the text file?

Read from file Java
How to read from a file in Java? What is the best method for a text file having a size of 10 GB? Since I have to process the file one line at a time, tell me the very best method.

Java read lines from file
Any code example related to reading lines from a file in Java? In my project there is a requirement of reading the file line by line... of reading a file line by line in Java. Can anyone share the code for reading?

Java read text file
Here is the video instruction "How to read text file in Java?"... a text file one line at a time. It can also be used to read large text files... by other programs. Here we have used the DataInputStream class to read ...

How to read text from - Java Beginners
How to retrieve text from the images... Do we have any function to get text over the images? Hi Friend, ...

Read Excel file from Java - Java Beginners
How do we read Excel file data with the help of Java? Hi friend, for more information on Java POI visit: ...

Retrieving specific data from Excel
Hello everyone, I have written a simple code to retrieve data from an Excel sheet and it is working fine. The Excel file... to be printed. Here is my sample code: FileInputStream file = new ...

How to read a large text file line by line in Java?
I have been assigned a work to read a big text file and extract the data and save it into a database... your kind advice, and let us know how to read a large text file line by line in Java.

Read data from Excel file and update database using JSP
Hi, I am using a MySQL database for scientific research analysis. My team members send research data in an Excel file ...

Java read file
There are many ways to read a file in Java. The DataInputStream class can be used to read a text file line by line. BufferedReader is also used to read a file in Java... the BufferedReader class reads text from a character-input stream rather than reading one ...
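Most of the questions above circle the same task: reading a particular line from a text file, or streaming a large file line by line. A minimal, self-contained sketch using BufferedReader (the class and method names here are illustrative, not from any of the snippets above):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class ReadSpecificLine {

    // Returns the 1-based lineNumber-th line, or null if the file is shorter.
    // BufferedReader streams the file, so memory use stays constant and the
    // same approach works for very large (multi-GB) files.
    public static String readLine(Path file, int lineNumber) throws IOException {
        try (BufferedReader br = Files.newBufferedReader(file)) {
            String line;
            int current = 0;
            while ((line = br.readLine()) != null) {
                current++;
                if (current == lineNumber) {
                    return line;
                }
            }
        }
        return null; // file has fewer than lineNumber lines
    }

    public static void main(String[] args) throws IOException {
        // Build a small sample file with lines "line 1" .. "line 10".
        List<String> lines = new ArrayList<>();
        for (int i = 1; i <= 10; i++) {
            lines.add("line " + i);
        }
        Path tmp = Files.createTempFile("sample", ".txt");
        Files.write(tmp, lines);

        System.out.println(readLine(tmp, 5)); // prints "line 5"
        Files.delete(tmp);
    }
}
```

For files too large to hold in memory, the same try-with-resources loop applies: process each line inside the while loop instead of returning, so only one line is ever buffered at a time.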
Speech and language impairments

Some children do not develop speech and language as expected. They may experience difficulties with any or all aspects of speech and language – from moving the muscles which control speech to the ability to understand or use language at all. These difficulties can range from the mild to the severe and long-term. Sometimes these difficulties are unrelated to any other difficulty or disorder – they are therefore said to be specific language difficulties. Some children may have both a specific language difficulty and other disabilities. Education and participation in society depend upon the ability to communicate. It is vital that children with speech and language impairments are offered comprehensive help as early as possible.

Different forms of speech and language impairment
- speech apparatus – the mouth, tongue, nose, breathing and how they are co-ordinated and operated by muscles
- phonology – the sounds that make up language
- syntax or grammar – the way that words and parts of words combine in phrases and sentences
- semantics – the meaning of sentences, words, and bits of words (semantic and pragmatic disorders)
- pragmatics – how language is used in different situations and how feelings are conveyed (semantic and pragmatic disorders)
- intonation and stress (prosody) – the rhythm and music of the way we speak

Within these areas some children may have difficulties in understanding language (receptive difficulties), some in using language (expressive difficulties), and some in both understanding and using language. Afasic publishes Glossary Sheets which explain these speech and language difficulties in greater detail.

How many children have speech and language impairments?

6 in 100 children will at some stage have a speech, language or communication difficulty.
NHS Centre for Reviews and Dissemination: Pre-school hearing, speech, language and vision screening (Effective Health Care Volume 4 No 2, 1998) At least 1 in 500 children experiences severe, long-term difficulties.

Articles of interest

The Royal College of Speech and Language Therapists’ April Bulletin featured a couple of articles about SLI which may be of interest.
- RCSLT Bulletin article - Where are the boundaries?
- RCSLT Bulletin article - SLI: the invisible neuro-developmental disorder

For more information about speech and language impairments, please contact us.
Talking of relativism, one of the most famous ― or infamous ― philosophers we may immediately think of is Protagoras, a noted figure in ancient Greece, and rival of Socrates. Protagoras claimed that whether a thing is good or bad depends on us; what, who, how, when, and where we are: relativism. There is no absolute truth; there are two sides to every question. When asked whether cold air is good or bad, he answered that cold air is both good and bad; if we are joggers and have just finished jogging, sweating, we will feel cold air relaxing and good. If we are sick and having chills, we will not think it good to be even in a breeze. Protagoras thought in a relativistic fashion in ordinary life, discussing things like cold air, while Gorgias, also one of the most noted, or notorious, sophists in ancient Greece, applied it to metaphysics, or rather, ontology. Gorgias’s logic is hard to comprehend but his conclusion is simple enough. As opposed to common sense, which says that it cannot be the case that a thing both is and is not at the same time, Gorgias claimed with regard to existence that neither the existent nor the non-existent exist, and both the existent and the non-existent do not, either; that is, what is isn’t, what isn’t isn’t, and what both is and isn’t, isn’t. These propositions go partly against those of Parmenides, who said that what is is, and what isn’t isn’t. These two representative sophists also advocated a kind of agnosticism. Protagoras claimed that, as to gods, we have no way of knowing whether they exist or not, nor what they are like. We don’t know anything about them, a core concept of agnosticism. Gorgias addressed the Athenians, making a somewhat showy statement that nothing exists; that even if something exists, nothing can be known about it; that even if something can be known about it, knowledge about it cannot be communicated to others; and that even if it can be communicated, it cannot be understood.
Well, if nothing exists, it is not necessary in the least to add the statements as to whether that something is knowable or communicable. After all, Gorgias’s statement is flashy and excessive. Anyway, I think he tried to persuade us that nothing can be existent or proven. Which is another expression of agnosticism. It seems that relativism is intrinsically akin to agnosticism. People say that Protagoras was a relativist, since he thought that a thing can both be and be not at the same time. If he had just said that, I would admit he was a relativist. But he was more than that, as I explained before: He also considered that there are some things above and beyond our comprehension. Which suggests that Protagoras, plus Gorgias as well, were both relativists and agnostics, which is a form of what I’d like to call meta-relativism. Have you ever heard of an ambiguous image? A duck and a rabbit are different: A duck is a bird, a rabbit a mammal; a duck has a wide beak, and a rabbit long ears. If we draw a duck, it cannot be a rabbit at the same time. Yet there surely are some cases when a line drawing shows two different things at a time, an ambiguous image. This is the rabbit-duck illusion, in which either a rabbit or a duck can be seen. If we are trapped in the law of non-contradiction, that it is impossible for the same property to belong and not to belong, at the same time, to the same thing, and in the same respect, then when we happen to find an ambiguity, we may be confused, at a loss for words. Away with it, I should say. The drawing can be both. If we find more than one thing in one drawing, our understanding will be richer; the richer the recognition, the more vivid the imagination; the more vivid the imagination, the freer will we be ― we have a wider range of choices before our eyes ― and, the freer, the abler; the abler we are, the fitter will we be for our environment; and the fitter, the happier. Han Feizi, a political philosopher in ancient China, was a good allegorist.
One of his famous stories, very logically convincing us not to take two inconsistent sides at the same time, goes like this: A man was trying to sell a spear and a shield, saying that his spear could pierce any shield, and that his shield could defend against all spear attacks. Then one person asked what would happen if he were to take his spear and strike his shield, to which the seller couldn’t answer. So we now know that this story of Han Feizi is the embodiment of the law of non-contradiction: It is impossible that a spear can and cannot penetrate, or a shield can and cannot defend, at the same time, a logical but meager world! Han Feizi was right and rigid but poor, poor in the sense that when he had two things contradicting each other, he just chose one and dismissed the other. This is as if we used only the knife and left the fork behind when eating, thinking that the knife is to cut and the fork is not to cut, and that they contradict each other and cannot be used at the same time. It is true that in theory it may be inappropriate and impossible to be incompatible, but in reality it often happens that we need (or already have) both. People say diversity is good and worth keeping, and yet people don’t like it if things are so diverse as to contradict themselves. However, it is in this discrepancy that the real diversity lies. Yajnavalkya, a Hindu Vedic thinker in ancient India, believed that Brahman, something great and supreme lying in us and at the same time prevailing throughout the universe, both is and is not, neither is nor is not, and is inconceivable. Brahman is transcendent; It is imperceptible, undecaying, unattached, unfettered. It is above and beyond us. Since It is above our comprehension, we cannot say affirmatively that It is this or It is that. We just can say, “Neti, neti,” which can be translated as “Neither this, nor that,” and which goes against the proposition of Protagoras, that it is both this and that.
Brahman is also all-embracing; It is identified with the intellect and the mind; with the eyes and ears; with earth, water, and air; with fire and what is other than fire; with desire and with absence of desire; with anger and with absence of anger; with justice and injustice; and with this and with that. Brahman encompasses all; It is made up of all and It can be identified with all; we can say It is made up of both this and not this, and identified with both this and not this. Thinking along these lines, we can conclude that Brahman is both neither and both of the contradictory two, since Yajnavalkya declared that Brahman is not this nor that, and that Brahman is both this and that, a paradoxical yet relativist approach. Relativism is paradoxical; Protagoras claimed that cold air is both good and bad, depending on us. Not only did Yajnavalkya say that Brahman neither is nor is not, but he also told us that Brahman both is and is not. If Yajnavalkya had just said that Brahman neither is nor is not, or that Brahman both is and is not, he would be a relativist in an ordinary sense; since he announced that It neither is nor is not, as well as that It both is and is not, he is more than just a relativist; this is another meta-relativism. Here I have one more thing to add. Yajnavalkya seems to have been as much an agnostic as he was a relativist. He thought his cognizance of Brahman was so subtle, complex and far-reaching that he could not avoid being nescient. At one time he forbade an inquirer of Brahman any more questions lest the inquirer’s head should fall off. A meta-relativist he was, I should repeat, indeed. Tolerance is a virtue. If we would like to be tolerant, we have to be patient with contradicting opinions. It is said that in Jainism, an ancient Indian religion, Jain logic is meant to beat any opponent in discussion, but it seems to me that the Jain way of thinking also enables us to be magnanimous. This is because the logic is tolerant enough to allow for a wide variety of viewpoints.
Why tolerance? Because the world we live in is multifaceted, so much so that we cannot catch or get along with it with only one fixed stand in mind. Jain logic, via “perhaps” or “from a point of view” or “in some ways,” attempts to keep up with this world of complexity by taking into consideration all possible cases: tolerance. The logic comprises seven statements:
1. In some ways, it is existent.
2. In some ways, it is non-existent.
3. In some ways, it is both existent and non-existent.
4. In some ways, it is indescribable.
5. In some ways, it is existent and indescribable.
6. In some ways, it is non-existent and indescribable.
7. In some ways, it is both existent and non-existent, and indescribable.
A thing cannot be existent absolutely or unconditionally. If it were, it would exist no matter how, when, or where. However, if we had a thing without any condition, we couldn’t obtain or throw away or avoid it: We couldn’t gain it if we already kept it, we couldn’t discard it if we everlastingly held it, and we couldn’t elude it if we had it unconditionally. But in reality, we always get or dump or evade a thing, which means that it is impossible for a thing to be absolutely existent. According to Jain logic, concerning a pot, we cannot say there is a pot; the word pot connotes its existence, so if we utter the word pot, it means there is a pot, and it follows from this connotation that when we say there is a pot, it suggests that the existence of the pot exists, which is illogical, a tautology. We cannot say there isn’t a pot, either. Since the word pot presupposes its existence, when we say there isn’t a pot, it entails its existence being non-existent, which is nonsense, a contradiction. Which is why we should say that a pot is existent in some sense. The Jain seven statements are nowhere near clarity, but they concern both relativism and agnosticism, yet another meta-relativism.
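Incidentally, the seven statements are exactly the non-empty combinations of the three basic predications ― existent, non-existent, indescribable ― and there are 2^3 - 1 = 7 such combinations. A small illustrative sketch (the class and method names are mine, not from any source) that generates them in the same order as the listing above:

```java
import java.util.ArrayList;
import java.util.List;

public class SaptabhangiSketch {

    // Enumerate every non-empty combination of the three basic predications.
    // 2^3 - 1 = 7, which is why Jain logic has exactly seven statements.
    public static List<String> sevenStatements() {
        String[] basic = {"existent", "non-existent", "indescribable"};
        List<String> result = new ArrayList<>();
        // Each bitmask from 1 to 7 selects a non-empty subset of the predications.
        for (int mask = 1; mask < (1 << basic.length); mask++) {
            List<String> parts = new ArrayList<>();
            for (int i = 0; i < basic.length; i++) {
                if ((mask & (1 << i)) != 0) {
                    parts.add(basic[i]);
                }
            }
            result.add("In some ways, it is " + String.join(" and ", parts) + ".");
        }
        return result;
    }

    public static void main(String[] args) {
        sevenStatements().forEach(System.out::println); // prints all seven statements
    }
}
```

The bitmask order (1 through 7) happens to reproduce the traditional sequence: the three single predications and their pairwise combination come first, then the four involving indescribability.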
Take the propositions of Protagoras and Yajnavalkya as examples, to explain Jain relativism. In everyday life Protagoras was a relativist, holding that cold air is good for a jogger and bad for a sick person; on deities he was agnostic, holding that we have no idea about gods, whether they are existent or not. When Yajnavalkya dealt with deity, he was sometimes a relativist, and at other times an agnostic: Brahman both is and is not, and at the same time Brahman neither is nor is not, and we cannot make inquiries beyond some point. If a Jain were to learn of their theories, she would put on a superior smile and try first to convince Protagoras that, in the Jain viewpoint, when he said that cold air is good as well as injurious, and that gods are above our comprehension, it would be paraphrased as the statement that it both is and is not good, and is indescribable in some ways. And then she, in turn, would make an attempt to persuade Yajnavalkya, who said that Brahman is not this nor that, that Brahman consists of both this and that, and that Brahman cannot be thought of above a certain point, the last of which can be construed as the impossibility of the complete knowing of Brahman. A Jain would restate it, saying that in some ways, it both is and is not, and is indescribable. Here in part 1, I’ve been dealing so far with relativism, agnosticism, and what I call meta-relativism, concerning thinkers mainly in ancient Greece and India. Jainism may be a brief summary of meta-relativism.
Scientists, researchers and military organizations step up observations of the sun ahead of the solar max season. An increase in the number and intensity of solar storms is forecast for 2013, and solar weather experts are advising both the public and private sector to make preparations. At the same time, however, even the most knowledgeable experts in the field of space weather say that because they still have so much to learn about the science behind solar storms, it is difficult to forecast them accurately. Compounding the situation is the fact that it currently is difficult to provide precise advice on how to prepare for the arrival of a so-called solar max year. Solar storm activity has peaked at roughly 11-year intervals for most of the history of the solar system. But it has been only within the last century that extreme bursts of electromagnetic energy from solar events have adversely affected a planet whose inhabitants are more dependent on electronic technology than ever before. Joe Kunches, a space scientist with the National Oceanic and Atmospheric Administration’s (NOAA’s) Space Weather Prediction Center (SWPC) in Boulder, Colorado, is one of the nation’s top experts on geomagnetic disturbances, or GMDs, as they are referred to in the scientific community. Kunches says that in the absence of more hard scientific knowledge about solar storms, scientists know that recent history at least provides a statistical means for predicting when solar max years can be expected. “The sun has had this behavior for a long, long time; it’s a cyclical behavior,” Kunches explains. The real goal, he adds, is to know in advance how strong the storms are going to be, when they are going to take place and how many might occur. But he warns that all of those goals are beyond the realm of current capabilities. One of the first instances of Earth-bound human technology being affected by a solar flare took place during late summer in 1859.
Some scientists have dubbed this a “solar superstorm” because of its intensity and also because of the existence of other conditions that exacerbated the effects of GMDs. In August that year, one of the largest coronal mass ejections, or CMEs, ever recorded traveled at a high rate of speed toward Earth and, as one of its first noticeable effects, caused telegraph systems to fail. Telegraph operators experienced electrical shocks, and some systems continued to appear to send messages even after their damaged power systems were disconnected. As recently as 1989, a solar storm had a similar, more widespread effect. On March 10, 1989, scientists observed a massive explosion on the surface of the sun. Two days later, electrical power was knocked out in the Canadian province of Quebec. Outages also took place in New York and much of New England, but U.S.-based electrical systems were able to acquire power from neighboring states and regions to compensate. Quebec, on the other hand, was dark for more than 12 hours as officials struggled to restore the system. It was later learned that the solar storm created strong ground-based electrical currents that had affected electrical grid equipment in Quebec and as far south in the United States as Ohio. Since then, power officials in Quebec have installed new equipment on distribution grids that they hope will make the grid less vulnerable to electromagnetic extremes. Today, Kunches says, scientists generally know that there are certain characteristics about the sun, and the behavior of its magnetic fields, that are reliable predictors of a solar max season. “People have studied how magnetic flux makes its way from the core of the sun to the surface, resulting in an eruption, and then it becomes a matter of how unstable these fields become,” he asserts. 
Kunches notes that his organization currently is tracking some extremely strong magnetic fields on the surface of the sun, but these fields are in what he describes as a quiet state and so far have not resulted in solar flares or CMEs. He explains that solar weather forecasters take all the available data to try to determine, based on past experience, how these magnetic fields will evolve and, if they turn into solar flares or CMEs, how strong a solar storm they might become. One branch of the U.S. Defense Department that also is watching the way the solar winds blow during the upcoming solar max season is the Air Force. Col. Dan Edwards, USAF, is chief of Integration, Plans and Requirements, Directorate of Weather, Headquarters, U.S. Air Force, in Washington, D.C. He says that monitoring solar storms is a part of the Air Force Space Command’s mission known as Space Situational Awareness. “We operate in space. We have satellites that operate in space, and so we need to know the environment,” Col. Edwards explains. The Air Force has a number of concerns when it comes to solar storms, according to Lt. Col. Brad Green, USAF, chief, Space Operations Plans with the Air Force Directorate of Weather:
• Geolocation errors in systems that rely on the Global Positioning System (GPS), including land navigation, weapons systems, and positioning and timing systems. This is a special concern for GPS, as its orbital array of satellites is vulnerable to spikes of ground current that can result from GMDs interacting with the Earth’s magnetosphere.
• Radar interference caused by spikes in ambient radiation because of GMDs.
• Launch trajectory errors from electromagnetic interference.
• Damage to Air Force and civilian communications satellites in low earth orbit because of electromagnetic interference.
• Deterioration of satellite orbits from increased atmospheric drag resulting from changes in the upper atmosphere caused by severe GMDs.
Col.
Edwards says his branch also is concerned about the health effects of high levels of radiation on Air Force pilots traveling at high altitudes over the North Pole in aircraft such as the U-2. Because of the tilt of the Earth, the North Pole is more vulnerable to solar flares and CMEs than other parts of the planet, depending on the time of year the solar storm occurs. “Overall, the effects of solar weather are very complex,” Col. Edwards says, underscoring the need to continue to monitor the sun during the coming solar max year. But, like Kunches, Col. Edwards is quick to acknowledge that while he and his colleagues have learned a lot over the last few years, including obtaining a better understanding of the basic science behind solar storms, “We still have a long way to go.” The SWPC, where Kunches works, and the Air Force Weather Office are two of the U.S. organizations that track solar activity on a round-the-clock basis. They share what they learn with a wide coalition of stakeholders in the scientific, government, military and academic communities. “We take data from satellites, ground-based observatories, pretty much everywhere we can get it around the world, and we produce alerts, warnings and watches on space weather as they occur,” Kunches explains. Kunches notes that agencies normally associated with dealing with emergencies first-hand, such as the Federal Emergency Management Agency (FEMA) and its parent organization, the Department of Homeland Security (DHS), receive constant updates from the SWPC. The key, he says, is putting the word out as quickly as possible. Speeding the flow of information on impending solar storms is one of the goals of a recently announced interagency, cross-discipline initiative of which the SWPC is a part.

What is a Solar Storm?
Solar storms begin deep below the surface of the sun, explains Joe Kunches, a space scientist with the National Oceanic and Atmospheric Administration’s (NOAA’s) Space Weather Prediction Center in Boulder, Colorado. Kunches says that between this year and 2014, the sun will develop strong magnetic fields that will well up within its surface. Periodically, these fields become twisted, and they erupt, producing solar flares and coronal mass ejections (CMEs). A CME is the release of large amounts of electrically charged plasma that are part of the sun’s nuclear reaction. “The Earth has its own magnetic field, which normally protects us from the charged particles of the sun, while the atmosphere protects us from some of the radiation. When the sun erupts, it actually energizes the rings of the Earth’s magnetic field, and it becomes disturbed, and rings like a bell,” Kunches explains. In the process of trying to compensate and return to its natural state, he asserts, the magnetosphere creates strong currents of electrical energy that are capable of disrupting power grids and other electrical transmission equipment.

The Unified National Space Weather Capability (UNSWC) was unveiled in June during the Space Weather Enterprise Forum, an annual conference of space weather experts held in Washington, D.C. The UNSWC, organized through the White House Office of the Federal Coordinator for Meteorological Services, includes NOAA and its parent agency, the Department of Commerce; FEMA/DHS; the departments of Defense, Energy and State; the U.S. Geological Survey and its parent, the Department of the Interior; the Federal Aviation Administration and its parent, the Department of Transportation; the National Science Foundation; and NASA.
In the short term, the UNSWC’s goal is to focus the resources of federal agencies in the area of solar storm research and observation when it comes to the upcoming solar max year in 2013, with an eye toward supporting scientific research and observation of the sun through a National Space Weather Portal. The UNSWC also will attempt to educate the U.S. public on how to prepare for the possibility of extended power outages in the event of a solar storm. Finally, the UNSWC is expected to help coordinate science and forecasting efforts with international counterparts, including the United Kingdom’s Meteorological Office, the Korea Radio Research Agency (the South Korean government entity tasked with the same spectrum regulatory functions as the U.S. Federal Communications Commission) and the World Meteorological Organization. Col. Edwards says that in terms of collaborating with the global solar weather community in monitoring the sun, the Air Force has a number of global sensors that provide geomagnetic data and other information that contribute to the research and forecasting effort. Organizations in the private sector, as well as the commercial electric power grids that serve most homes in the United States, also are taking steps to prepare for the solar max year. Mark Lauby is vice president and director of reliability assessment and performance analysis with the North American Electric Reliability Corporation (NERC) in Washington, D.C. NERC is one of several industry groups representing private-sector electrical transmission grid operators. Lauby says that the biggest challenge to the grid is voltage collapse, or the failure of electrical transformers because of spikes in electromagnetic energy from a solar storm.
Lauby also says that grid operators need better simulation software, data and information to measure the spikes in ground current that result from solar storms, and the effect those spikes have on power lines and other grid components. Another area of concern to the grid industry is the possible failure of older electrical transformers, such as the ones that failed in Quebec in 1989. “Some transformers of an older design are more vulnerable to damage than others,” Lauby explains, adding that NERC and other industry groups have embarked on a program to identify vulnerabilities to the effects of GMDs. Those plans were outlined recently during the Federal Energy Regulatory Commission’s Technical Conference on Geomagnetic Disturbances to the Bulk-Power System. With the space- and ground-based resources available, the SWPC is able to provide roughly an hour’s notice of a possible severe solar storm, Kunches says. Systems and programs now under development are expected to improve the lead time and accuracy of forecasts of future eruptions. As for preparing for the solar max season ahead, Kunches says that given the current level of scientific understanding, and with electrical grids in varying states of vulnerability to the onset of a solar storm, the best planning most people can do right now is to prepare for the possibility of being without electrical power for an extended period of time. Col. Edwards says that his advice to the Air Force chief of staff, to whom he reports, also is to be prepared for the possibility of an extended loss of electrical power at Air Force facilities in the event of an extreme geomagnetic disturbance. Lauby says NERC and other private electricity grid operators also are taking steps to minimize disruptions from solar storm-related service outages.

NOAA Space Weather Prediction Center: www.swpc.noaa.gov
U.S. Air Force Weather Observer: www.afweather.af.mil
National Space Weather Program/Unified National Space Weather Portal: www.swpc.noaa.gov/portal
North American Electric Reliability Corporation (NERC): www.nerc.com
NERC Comments to the Federal Energy Regulatory Commission Technical Conference on Geomagnetic Disturbances to the Bulk-Power System: http://bit.ly/LGtbE4
Question of Palestine

30 October 1956

LETTER DATED 29 OCTOBER 1956 FROM THE REPRESENTATIVE OF THE UNITED STATES OF AMERICA, ADDRESSED TO THE PRESIDENT OF THE SECURITY COUNCIL, CONCERNING: “THE PALESTINE QUESTION: STEPS FOR THE IMMEDIATE CESSATION OF THE MILITARY ACTION OF ISRAEL IN EGYPT”

United States of America: draft resolution

The Security Council,
Noting that the armed forces of Israel have penetrated deeply into Egyptian territory in violation of the armistice agreement between Egypt and Israel,
Expressing its grave concern at this violation of the armistice agreement,
1. Calls upon Israel immediately to withdraw its armed forces behind the established armistice lines;
2. Calls upon all Members:
(a) to refrain from the use of force or threat of force in the area in any manner inconsistent with the Purposes of the United Nations;
(b) to assist the United Nations in ensuring the integrity of the armistice agreements;
(c) to refrain from giving any military, economic or financial assistance to Israel so long as it has not complied with this resolution;
3. Requests the Secretary-General to keep the Security Council informed on compliance with this resolution and to make whatever recommendations he deems appropriate for the maintenance of international peace and security in the area by the implementation of this and prior resolutions.

The Security Council on 30 October 1956 voted on the draft resolution as follows:
In favour: China, Cuba, Iran, Peru, Union of Soviet Socialist Republics, United States of America, and Yugoslavia.
Against: France, United Kingdom of Great Britain and Northern Ireland.
Abstaining: Australia and Belgium.
In business, there are few professions more important than accounting. Accountants help managers determine how much profit they are making, how much they need to pay in taxes and how much they owe their creditors. As an academic subject, accounting is heavily grounded in mathematics and is closely related to finance. Academic accounting is much more conceptual than "real world" accounting, and the types of topics preferred for master's research in accounting reflect this. Accounting standards govern the technical work of accountants. In America, accounting standards are governed and codified by the Financial Accounting Standards Board. An example of an accounting standard is "total assets must equal total liabilities plus owners' equity." A master's thesis on accounting standards should deal with some aspect of international accounting standards, national accounting standards or accounting regulation. For example, a thesis on accounting regulation could evaluate aspects of the 2002 Sarbanes-Oxley Act and whether its instruments are powerful enough to prevent accounting fraud. In accounting, ethics is sometimes a hot topic. As much as we might like to think that corporate fraud is the result of the odd "bad apple" executive, the truth is often that less-than-honest accountants act as enablers of dishonest executives. A thesis on accounting ethics could look at any issue related to ethics in the accounting profession, such as corporate accounting, independence of auditors and the relationship between accounting and banking. As an example, a thesis on auditors could examine whether expense account auditors in state legislatures are truly independent from political influence by examining the conduct of auditors in various high-profile scandals.

Accounting and Finance
There is a close relationship between finance and accounting. Financiers are interested in the world of accountants, because the accountant's profitability ratios help financiers analyze investments.
Accountants are interested in issues of finance, because the basic subject matter of accounting -- assets and liabilities -- is financial in nature. Topics for financial accounting theses include financial ratios and accounting in corporate finance. A thesis on financial ratios, for example, could investigate the mathematical structure behind various financial ratios, such as price-earnings ratios, and whether these ratios are as sound as simple profitability ratios in accounting.

The Accounting Profession
The accounting profession is a prestigious one but is not without its issues. Complaints are often made that accountants are either too regulated or too unregulated, and complaints about gender bias in accounting are not rare. A master's thesis about accounting reform could cover educational, business or social topics. For example, a thesis on accounting education could evaluate whether accounting curricula at various schools give students an idea of what kind of work accountants do in the real world. Accounting course schedule sheets could be compared with accountant work schedules. These materials would be obtained from accountants and accounting professors.
Not since the end of the Roman Empire, almost fifteen hundred years earlier, had there been a parallel, in Europe at least, to the fall of the German nation in 1945. Industrious and inventive, home over centuries to a disproportionate number of western civilization's greatest thinkers, writers, scientists and musicians, Germany had entered the twentieth century united, prosperous, and strong, admired by almost all humanity for its remarkable achievements. During the 1930s, embittered by one lost war and then scarred by mass unemployment, Germany embraced the dark cult of National Socialism. Within less than a generation, its great cities lay in ruins and its shattered industries and its cultural heritage seemed utterly beyond saving. The Germans themselves had come to be regarded as evil monsters. After six years of warfare how were the exhausted victors to handle the end of a horror that to most people seemed without precedent? In Exorcising Hitler, Frederick Taylor tells the story of Germany's year zero and what came after. As he describes the final Allied campaign, the hunting down of the Nazi resistance, the vast displacement of peoples in central and eastern Europe, the attitudes of the conquerors, the competition between Soviet Russia and the West, the hunger and near starvation of a once proud people, the initially naive attempt at expunging Nazism from all aspects of German life and the later more pragmatic approach, we begin to understand that despite almost total destruction, a combination of conservatism, enterprise and pragmatism in relation to former Nazis enabled the economic miracle of the 1950s. And we see how it was only when the '60s generation (the children of the Nazi era) began to question their parents with increasing violence that Germany began to awake from its 'sleep cure'.
Turkey’s seas are under threat more than ever

On World Biodiversity Day on May 22, I drew attention in my article to nitrogen pollution as a major threat to the environment. Coincidentally, soon after that article, images of ‘sea snot’ in the Marmara Sea began to dominate Turkey’s environmental agenda. Experts warned in the news that this incident was a result of global warming, and the story was covered not only in Turkey but also in the international press. Turkey’s seas are not just places to enjoy a romantic sunset over a seafood dinner; they are also ecosystems of great economic and ecological importance. The sea snot outbreak shows that we do not appreciate the value of our seas. How? Sea surface temperatures are rising all over the world. United States National Oceanic and Atmospheric Administration data show that the average global sea surface temperature has increased by about 0.13 degrees Celsius in each decade of the past 100 years. Marine biodiversity is under threat from these rising temperatures, which also intensify pollution in the seas: when domestic waste reaches the sea, the abundance of phytoplankton rises as temperatures increase. Phytoplankton, single-celled organisms that perform photosynthesis, are important for marine biodiversity and the Earth, but their uncontrolled population growth threatens aquatic life.

Disruption of the nitrogen cycle
The main driver of phytoplankton increase is waste that contains nitrogen and phosphorus. Phytoplankton numbers can spiral out of control when wastes containing these nutrients are abundant in seawater. Such wastes have been polluting the Marmara Sea for a long time. The Marmara Sea, which receives the wastewater of about 20 million people, is already under threat from phytoplankton explosions. In addition, the Marmara Sea can be described as an ecosystem fed by the nutrient-rich Black Sea.
In other words, man-made wastes, combined with the inflow of seawater from the Black Sea, can cause phytoplankton in the Marmara Sea to grow out of control. When phytoplankton numbers spiral out of control, the sea becomes an oxygen-poor ecosystem, and the result is the mucus-like layer we see over the water these days. This mucus-like substance is not harmful on its own; experts say it is a combination of protein, carbohydrate and fat. However, it causes pollution that strangles marine life by attracting many microorganisms, including E. coli.

Biodiversity is irreversibly threatened
Turkish scientists working on corals have observed these habitats completely covered with sea snot. Sea snot is considered a significant threat to corals because it kills them rapidly, leaving the sea bottom barren. Sea snot also undoubtedly harms invertebrates, one of the key biological elements of the Marmara Sea. The mucus layer that has reached the coastline now threatens the breeding grounds of fish. It seems that the pressures of global warming and, most importantly, human population growth will also harm the economy; the fishing industry in particular may experience difficulties. Reducing the wastewater burden on the Marmara Sea is now a necessity, both to avoid creating a new economic burden for Turkey and, most importantly, to protect the unique biodiversity we have. The point I need to underline once again is that we should pay attention to nitrogen pollution, which amplifies the negative effects of global warming. And of course, we must not ignore the uncontrolled acceleration of human population growth.
Aristotle was born at Stageira, in the dominion of the kings of Macedonia, in 384 BC. For twenty years he studied at Athens in the Academy of Plato, on whose death in 347 he left, and, some time later, became tutor of the young Alexander the Great. When Alexander succeeded to the throne of Macedonia in 335, Aristotle returned to Athens and established his school and research institute, the Lyceum, to which his great erudition attracted a large number of scholars. After Alexander's death in 323, anti-Macedonian feeling drove Aristotle out of Athens, and he fled to Chalcis in Euboea, where he died in 322. His writings, which were of extraordinary range, profoundly affected the whole course of ancient and medieval philosophy, and they are still eagerly studied and debated by philosophers today. Very many of them have survived and among the most famous are the Ethics and the Politics.
Explores what it means to be undocumented in a legal, social, economic and historical context In this illuminating work, immigrant rights activist Aviva Chomsky shows how “illegality” and “undocumentedness” are concepts that were created to exclude and exploit. With a focus on US policy, she probes how people, especially Mexican and Central Americans, have been assigned this status—and to what ends. Blending history with human drama, Chomsky explores what it means to be undocumented in a legal, social, economic, and historical context. The result is a powerful testament to the complex, contradictory, and ever-shifting nature of status in America. Activist and Salem State University historian Chomsky (They Take Our Jobs! And 20 Other Myths About Immigration) addresses the history and practice of U.S. immigration law in this part polemical, part historical account. The fact that “there was no national immigration system or agency in the United States” until 1890 may surprise many readers; and that “[i]t’s illegal to cross the border without inspection and/or without approval from U.S. immigration authorities” sounds straightforward, but Chomsky reveals how “dizzying” and “irrational” it is in practice. She reviews the myriad pieces of legislation, such as the Immigration Acts of 1924, 1965, and 1990, as well as immigrants’ consequent entanglements and diverse experiences, ranging from the risks of getting into the U.S. to the perils of being there (including detentions, deportations, family separation, and poor work conditions). Committed to the cause of the undocumented, and focused particularly on Mexican and Guatemalan immigrants, Chomsky reminds readers that, contrary to the freedom with which American citizens travel, for many, “freedom to travel is a distant dream.” Professional in her scholarship, Chomsky has written a book that will be relevant to those who do not share her position as well as to those who do.
Disappointingly, the final chapter, "Solutions," offers more of a review of how immigration became illegal than suggested solutions.
Source: Fall 2004 CCCF Newsletter

Over the past 20 years, we have learned a great deal about how children treated for cancer learn and perform in school. Perhaps the most important thing we have learned is that there are patterns of abilities that occur after successful treatment that are different from those seen with more common learning problems that are often called “learning disabilities.” This difference is important, since the understanding of learning, and the approach to evaluating learning in children treated for cancer, needs to be different as well. For this reason, some of the standard approaches to testing children with cancer that occur in the school and other private settings may not be adequate for identifying problems and planning ways to help.

Common Approaches to Evaluation of Learning Problems
In the public schools, and in many private settings, a child’s ability to obtain special education services for learning problems is based on his or her performance on a group of tests of intellectual ability and academic achievement. Commonly, children are administered a standardized IQ test, along with tests of school-based skills, and often a measure of adaptive function. IQ tests are individually administered to the child and are made up of a number of subtests of specific abilities. A child’s score is based on his/her performance compared to other children of the same age. Three scores from an IQ test are considered:
- A Verbal IQ score, which is an index of 5 to 6 different tests of verbal or language-based skills.
- A Performance IQ score, which is an index of 5 to 6 tests of visual, motor, speed, and perceptual abilities, and
- A Full Scale IQ score, which is a combination of all the tests included in the Verbal and Performance IQ scores.
Academic Achievement Tests
The academic achievement tests are also individually administered, and may involve tests of Reading (word recognition and comprehension), Arithmetic (math calculations and applied math abilities), Spelling, and Writing. The child’s scores on these tests are based on his or her performance compared to other children of the same age or in the same grade.

Tests of Adaptive Function
Tests of adaptive function are usually administered in the form of an interview with the parent or caregiver. These tests typically provide information about a child’s communication, daily living, socialization, and motor skills compared to other children the same age, as observed by the parent or primary caregiver.

Use of Tests for Classification
Determination of a learning problem is usually based on several criteria. Children may be classified because their scores fall within a certain range, or because there are discrepancies between test scores in identified areas. Some of these classifications include:

Impaired Cognitive Abilities (Mental Retardation): Children who score below 70 on an IQ test, have academic achievement scores in this same range, and have scores on a standardized measure of adaptive function (e.g., a parent report of the child’s communication, daily living, socialization, and motor skills) that are also below 70 may meet criteria for diagnosis of impaired cognitive abilities or mental retardation.

Specific Learning Disability: Children who score above 70 on an IQ test, but have a significant discrepancy (usually more than 15 points) between their IQ score and measures of academic achievement (Reading or Math) may qualify for services for the learning disabled. Usually some indication of a processing deficit (e.g., visual-motor or auditory processing) is also necessary for this classification.

Varying Exceptionalities:
Children whose test scores indicate learning problems, but who also have other sensory deficits (e.g., hearing, vision), physical movement problems, or speech difficulties, may be classified in the category of varying exceptionalities, and may receive a range of both educational and therapeutic services.

Other Health Impaired: Children who experience problems in school that can be attributed to a chronic health condition (including cancer) may fall under the 504 regulations that permit services for children who do not otherwise meet special education categorization. However, many may also be eligible for Individual Education Plans (IEPs) and access to a full range of special education services.

The late effects of treatment of cancer in children often involve very specific disabilities, including visual perception, memory, processing speed, and sequencing. Problems in these areas are not commonly seen in children not treated for cancer. Over the past 20 years, investigators and parents have observed that the types of testing done for school problems do not include tests that identify the kinds of problems faced by children with cancer and are not helpful in determining the kinds of assistance that are needed. For these reasons, clinicians and parent advocacy groups are increasingly recommending that children treated for cancer receive a battery of tests that focus specifically on the patterns of difficulties that may be experienced. This kind of testing is commonly referred to as neuropsychological testing.

What is Neuropsychological Testing?
Neuropsychological testing involves giving a child a number of tests that provide information about how the brain works in the areas of memory, speed, language, visual processing, auditory processing, integration of information, emotional and behavioral regulation, and planning and organization. The tests are administered by a trained professional (either a licensed psychologist or someone supervised by the psychologist).
The purpose is to provide a comprehensive, detailed assessment of a person’s ability to encode, process, store and express information. Interpretation of the various test results allows strengths and weaknesses to be uncovered. From this evaluation, recommendations can be made for accommodations to assist learning and functioning. The tests that are used are developed for children, and the tests that are given are selected to match the skills expected for children of specific age ranges. The child’s scores are based on how he or she performs compared to other children of the same age. Neuropsychological tests of specific abilities may vary, depending on the age of the child at the time of diagnosis and treatment, the age at the time he or she is tested, and what kind of treatment he or she received.

What is a Neuropsychologist?
A neuropsychologist is a licensed psychologist (Ph.D. or Psy.D.) who has special and advanced training in evaluating the relationship between the brain and behavior in a number of areas. A pediatric neuropsychologist is a licensed psychologist who has training and experience in understanding the relationship between the brain and behavior in children. This requires the additional skill of understanding how this relationship changes over time as the child’s brain grows and develops. Most states do not specifically license neuropsychologists, although the American Board of Professional Psychology does provide a board certification for advanced skills in the area. There is not a separate certification for pediatric neuropsychology. For these reasons, parents and referring physicians should determine whether the psychologist has obtained training in brain-behavior relationships, neuropsychological testing, neuro-rehabilitation, or another similar area, and whether he or she has experience evaluating children with complex medical problems that affect the brain.
Knowledge about the disease process and the effects of chemotherapy and radiation may be critical to a helpful evaluation. If parents are unsure of the qualifications of a psychologist to perform this type of evaluation, they should discuss this with their child’s oncologist to be sure that a knowledgeable person is available to perform the evaluation. Most pediatric oncology programs that participate in the Children’s Oncology Group have identified a qualified professional in this area. The neuropsychologist can help parents and teachers:
- Understand how treatment has affected thinking, learning, and behavior.
- Identify what services and accommodations may be needed in the educational setting.
- Recommend interventions.
- Advise on potential or likely improvements, problems and changes.

Who Needs Neuropsychological Testing?
Neuropsychological testing is usually indicated for children with:
- Tumors of the central nervous system (CNS), especially those requiring cranial radiation, but including those who were treated with surgery only.
- Acute lymphoblastic leukemia, particularly those treated with cranial radiation and those on more recent protocols that involved triple intrathecal chemotherapy and higher dose methotrexate during consolidation.
- Diseases requiring radiation to the head, including those receiving total body irradiation prior to bone marrow or stem cell transplantation.
- Very young children who receive intensive chemotherapy and require prolonged hospitalizations because of treatment and side effects.
- Other children who are identified by the pediatric oncologists because of specific concerns about that child’s school performance, learning, or development.
Because the effects of cancer treatment appear to emerge over time, a single testing will usually not be adequate to address the child’s needs as he or she develops.
Obtaining an evaluation during the first year after diagnosis will help address immediate difficulties that may occur because of medication side effects or school absences, and will also provide a “baseline” for determining whether there are “late effects” when the child is a survivor. Re-evaluations every 18 months to 3 years (shorter intervals during the elementary years, with longer intervals in adolescence and young adulthood) are usually recommended to assess changes due to treatment and revise recommendations for intervention. Preparing the Child for Testing When the word “test” is used with a child treated for cancer, it inevitably brings up thoughts of needles, painful procedures, or confining spaces. For this reason, special care should be taken when talking with a child about neuropsychological testing. Both parents and the psychologist should be sensitive to this issue. There are several strategies that may be helpful: - Explain to the child that no needles or painful procedures are involved in this kind of testing. Neuropsychological testing is a “no stick zone.” - Explain that these are not tests that you pass or fail. They are tests that help the psychologist, parents, and child understand how the child’s brain works and how he or she best learns. The tests will hopefully identify areas where learning is difficult, but will also identify areas where the child does well. - Explain that some of the tests may be really easy, some may be very hard, and some may be in-between. No one is expected to be able to know or do everything perfectly, but each child should do his or her very best. However, if there is something that is too hard after giving it a good try, or something that the child has never heard before, then the best answer may be “I don’t know.” - Give the child a chance to ask questions, both of his/her parents before the testing session, and of the psychologist before and at appropriate points during testing. 
- Explain that the results of the test will be used to find ways to help make learning and doing well in school a bit easier and more successful. Parents should also prepare the child and themselves for the fact that neuropsychological testing requires time, a positive relationship between the child and the person administering the tests, and sensitivity on the part of the psychologist about special issues for the child with cancer (e.g., increased fatigue). In some cases, the psychologist may recommend conducting the evaluation over several sessions to be sure that the child’s performance is the best possible, and not influenced by fatigue. This kind of evaluation is not something that can be rushed, so parents and children should approach the testing session with patience. Parts of a Neuropsychological Evaluation The neuropsychological evaluation involves more than just giving a few tests. It is designed to provide a comprehensive picture of the way the child learns and develops, and therefore includes a number of important parts. Most neuropsychological evaluations include: - A detailed interview with the parents about the child’s medical, developmental, learning, behavioral, and social history, with specific attention to the type of disease, treatment involved, and results of any hearing, vision, and neuroimaging evaluations that have been done. A review of past and current medications should be included in this interview. - A brief family interview about family history of learning, attention, emotional, or behavioral problems. - Tests of IQ, academic achievement, and adaptive behavior. - Specific tests of visual and verbal memory, sequential memory, attention and concentration, visual-spatial-motor integration, processing speed, language ability (expressive and receptive), social perception, and executive function (planning, organization). 
- Other special tests may be included if there are sensory deficits (e.g., vision or hearing), motor impairments, or a history of learning problems prior to diagnosis of cancer. - Screening for social, behavioral, or emotional difficulties. Many of the tests used in neuropsychological testing are not included in the typical evaluation provided in the school system, although there may be some overlap on some tests. Interpretation of Results: The Report Often, parents and teachers view the neuropsychological evaluation as being primarily focused on what tests are given. However, the real benefit of a neuropsychological evaluation is the interpretation of the patterns of test scores, which leads to recommendations that may be helpful. The interpretation and recommendations rely on the knowledge and experience of the psychologist related to normal brain development, an understanding of the effects of disease and treatment on the developing brain, and an awareness of the multitude of complex educational, developmental, medication, and therapeutic interventions available. There are two common types of interpretations that usually come from a neuropsychological evaluation: - Current functioning. This type of interpretation provides information about how the child is performing at the time of testing compared to other children of the same age. The psychologist also examines the patterns of these scores to determine if there are any inconsistencies between abilities, or if there are obvious delays. This point is important. What has been learned about late effects in childhood cancer is that IQ alone does not tell the story. Some areas of brain function do not seem to be at as great a risk for problems as others. For this reason, the absolute scores on any given test are not the major concern. Instead, the neuropsychologist examines patterns of abilities (e.g., visual vs. verbal memory, language vs. non-verbal problem solving). 
The neuropsychological evaluation often detects patterns of clearly defined strengths, as well as weaknesses. The IQ score may seem okay to parents, but the pattern of difficulties may represent a significant set of problems that need to be addressed, and may help parents and teachers anticipate problems that may be encountered in the future. This interpretation of the pattern is a major focus of the evaluation. - Functioning over time. This is a more complex interpretation, but one that can provide major benefit. This interpretation examines how scores on the various tests have changed over time, and it can only be made if previous neuropsychological results are available for comparison. Based on knowledge about normal brain development, the type of treatment, and the age of the child at treatment, this type of interpretation may permit some limited predictions about future functioning. More importantly, it may also allow the development of interventions that can prevent or lessen new problems down the line. Parents should expect that the report and personal feedback session with the psychologist will provide them with a clear summary of the test results, an interpretation of what these results mean, and specific recommendations about how to address any problems that testing may have identified. Sometimes recommendations involve: - Additional testing by developmental specialists (e.g., speech and language, audiology). - Specific suggestions for how a child can be taught to read, do math, or write. - Suggestions for “accommodations” in the school setting (e.g., extra time to complete work, use of books on tape, use of calculators). - Consideration of evaluation for medications that may be helpful (e.g., stimulant medications for attention problems). - Referrals for counseling, therapy, or behavioral management to address behavior or emotional problems that were identified during the evaluation. 
- Suggestions about special school programs for the child with complex difficulties. If these recommendations are not provided, or if they are provided but don’t make sense, parents should always feel free to ask the psychologist to provide more information and explanation so that the information can be understood. - Neuropsychological testing can be expensive and time consuming, but it offers information that is typically not obtained in a standard school evaluation. - The results of the evaluation should be interpreted by a professional who has extensive training in interpreting complex test results. - Parents of children whose cancer or treatment involved the brain should be aware of the potential risk to their child’s learning and discuss neuropsychological testing with their child’s oncologist. Repeated evaluations (conducted every 2 to 3 years or if there are new, specific concerns) may be needed through high school, and perhaps into college. - The reason for the neuropsychological evaluation is to obtain information that can be used to help each child reach his or her maximum potential. Dr. Armstrong is Professor & Associate Chair of the Department of Pediatrics and Director of the Mailman Center for Child Development at the University of Miami School of Medicine.
Saffron bulbs should be planted in late summer or early autumn, before the first frost, and they should be placed in the top 3-4 inches of soil, measured from the soil line. If you have never planted saffron before, be careful. If you are new to the gardening world in the first place, keep reading and we’ll help you put it all together. How do I remove the buds from saffron plants? Simply trim away the green stem ends and leaves. Some varieties may not have a stem like this. Should I place saffron bulbs in a terrarium to allow them to grow naturally? Yes. There is no reason not to allow saffron bulbs to grow and thrive in the natural environment of a terrarium. Is it best to put saffron bulbs at one end of an aquarium window? Yes, this is one great tip because it is easy for your plants to move and see, thus giving them room to expand. Can I plant saffron bulbs in a room with a large window or a window ledge that opens directly from the garden, as opposed to the lower level windows or doors? If you have the space, put them in the center of an open room. They will grow, just not as big as by your regular windows or doors. Will saffron bulbs harm my plants? There is a lot of misinformation out there about saffron. In reality, though, most saffron bulbs are safe for plants. This goes for plants that have been in soil for about two weeks or more. When your plants are fully grown, it is the same as any other species of natural product. Keep in mind that the soil will not allow the saffron bulb to grow as it was before that time, so it is best to only use natural products. In the end, the best answer will be based on each and every one of your individual needs and preferences. However, in my opinion, I like to have the saffron bulbs in my garden from the start and use them almost solely indoors. 
I only grow them outdoors for the weather because they need room to grow without being covered; and because they grow with regular sunlight and regular heat, they can become a full-blown indoor plant once they reach a certain size.
Pleiades, in astronomy, a cluster of stars in the constellation Taurus. With a telescope, several hundred stars are visible in the cluster. However, only six stars can be easily seen with the unaided eye. On a very clear night, several more can sometimes be seen. Alcyone is the brightest star in the Pleiades. Bright nebulae (clouds of gas and dust) surround several of the stars in the cluster.
Now is an outstanding time of year to view what is sometimes referred to as the "Winter Circle" of dazzling stars, which includes Sirius (in Canis Major), Procyon (in Canis Minor), Menkalinan and Capella (in Auriga), and the Twins of Castor and Pollux (in Gemini). The Winter Circle was previously discussed in a post from 2011, which you can find here. Now that the moon is declining towards the New Moon of December 22, it will be less and less of a factor in the night sky (it will rise later and later in the "wee hours" of the morning, or closer and closer to dawn, and as it does so it will also grow thinner and thinner), enabling you to really observe the starry sky in all its glory -- and the glorious constellations of winter are at center stage, featuring mighty Orion and the surrounding arc of bright stars mentioned above. Below is an image from Stellarium.org showing Orion and the stars of the Winter Circle, as they appear to an observer in the northern hemisphere around thirty-five degrees north latitude: You can clearly make out the silvery band of the Milky Way, running up and to the right in the above image, almost through the center of the screen. Nearly half-way up the Milky Way band, look for the three distinctive stars of Orion's belt, in a tight line angled up and to the right. Following the line of these three stars and extending that line down and to the left you will find Sirius, which is labeled, and which is depicted as the largest star on the above chart, because it is the brightest star in our sky (other than the sun, of course). 
From Sirius, you can then trace the arc of stars named above, beginning at Sirius and moving clockwise up to Procyon (also labeled), Pollux and Castor (only Pollux is labeled but Castor is very close, up and to the right from Pollux in the screen above), then Menkalinan and Capella (only Capella is labeled, but Menkalinan is the star you come to first as you arc from Pollux and Castor towards Capella in a clockwise direction). From Capella, you can also cross the Milky Way again and find the gorgeous cluster of the Pleiades (not labeled on the above chart, but more on them in a moment). This circle of brilliant stars is sacred to the Lakota, and is part of the area of the sky known as "The Heart of Everything That Is." The circle just described was also connected to the concept of the Sacred Hoop, discussed in this previous post. The celestial component of this sacred concept is discussed at length in a book entitled Lakota Star Knowledge, written by Ronald Goodman with help from many Lakota wisdom keepers, and with appendices which quote teachings preserved by Charlotte A. Black Elk. The book is published by Sinte Gleska University, which strives to perpetuate the values associated with the four Lakota virtues of the Lakota medicine wheel and Sacred Hoop, as explained on the back cover of the book. It is a book which those interested in this subject will want to have in hardcopy. image: Wikimedia commons (link). As described in the vision of Black Elk, the Sacred Hoop consists of a sacred circle which contains the horizontal road and the vertical road (see discussion in this previous post and this previous post), a pattern which is also very reminiscent of the zodiac wheel crossed by the horizontal line between the equinoxes and the vertical line between the solstices: Ronald Goodman's book explains that the circle of stars now visible in the night sky makes this same Sacred Hoop pattern of a circle divided by two perpendicular lines. 
The two lines are envisioned as being generated by the line created by the belt of Orion (these stars are known as Tayamni by the Lakota) which can be seen as extending to Sirius in one direction and to the Pleiades in the other direction, and by the line perpendicular to that line which is created by extending the imagined line running between the two bright stars Betelgeuse (in Orion's shoulder) and Rigel (in his foot): Above, I have sketched in the outline of a rough circle which connects the circle of stars: Sirius to Procyon to Pollux and Castor to Menkalinan and Capella to the Pleiades to Rigel and then back to Sirius. Within it, I have created dashed lines which cross perpendicular to one another: one line along the line suggested by the belt stars and extending all the way to Sirius in the lower-left and to the Pleiades in the upper-right, and another running from Rigel to Betelgeuse (and which can be imagined as continuing through all the way to the other side of the hoop from there). This diagram is based on those drawn in the Ronald Goodman book in numerous places: I have just chosen to draw it on the stars as seen in the night sky using the image from Stellarium.org. It is hoped that this will help readers to go outside and actually locate this important set of stars. Perhaps the most remarkable information expressed by Ronald Goodman and the Lakota wisdom keepers he quotes in the book is the fact that this celestial Sacred Hoop has a corresponding reflection on the earth, which the Lakota have recognized since time immemorial -- from before the horse arrived -- and that they would move to specific points on the terrestrial Sacred Hoop at specific times during the year, to reflect on earth the patterns of the stars in heaven, the motion of those stars through the year, and especially the rising of the sun in the different points along its ecliptic path as the earth progresses through its own annual cycle. 
The reflection of the celestial Sacred Hoop was found on earth in the region of the Black Hills, or Paha Sapa in the language of the Lakota (I believe that this means "Black Hills"). Below is a diagram based on some of the terrestrial points in this Sacred Hoop, as explained in the book and drawn in some diagrams in the book -- I have chosen to use Google Maps with the "terrain" overlay, to show some of these points in a way that will enable us to visualize these sacred sites as we look at the map: The first point labeled on the map above, identified with the numeral "1." and a small black arrow pointing to the right (difficult to see clearly at this resolution, but it is pointing to the right) is Inyan Kaga, also called Harney Peak, a very sacred site to the Lakota and one which is central to the vision of Black Elk and to the story of his life which he relates in Black Elk Speaks. The book by Ronald Goodman seems to indicate that Harney Peak is also called Opaha Ta I. This sacred mountain corresponds to the Pleiades, or Wicincala Sakowin. The second point labeled on the map above, identified with the numeral "2." and a black rectangular outline, contains three peaks in a near-perfect line, pointing towards Harney Peak -- just as the three stars of Orion's Belt (Tayamni) point to the Pleiades (Wicincala Sakowin). Below, some "zoomed-in" maps will show this in greater detail. The third point labeled on the map above, identified with the numeral "3." and a small black arrow pointing down, corresponds to Pe Sla, the center of the Black Hills -- an area now labeled as Reynolds Valley on maps. The fourth point labeled on the map above, identified with the numeral "4." and a small black arrow pointing down, corresponds to Mato Paha, or Bear Butte. This site appears to have been considered the terrestrial reflection of the point marked by the star Capella in the celestial Sacred Hoop. The fifth and final point labeled on the map above, identified with the numeral "5." 
and a small black arrow pointing down, is Mato Tipila Paha, or Devil's Tower. This majestic geological formation was considered to be associated with the constellation of Gemini, and the summer solstice. Note that on the zodiac wheel diagram above which I believe can be seen to correspond in many ways to the Sacred Hoop, the sign of Gemini is located immediately before the point of summer solstice. Lakota Star Knowledge explains that prior to summer solstice, all the Lakota would converge on Devil's Tower, for an important gathering which included the most important Sun Dance of the year. It should be noted that the Sacred Hoop in the sky as shown in my Stellarium diagram must be rotated in order to correspond to the sacred terrain of the Black Hills: the line running from the rectangle at "2." to the Inyan Kaga (Harney Peak) at "1." corresponds to the dashed-line running up and to the right in the star chart, from Orion's belt to the Pleiades. Below is a closer "zoom" into the area containing Tayamni (Orion's belt) on the terrain: In this map, we are still "far enough out" that you can see Inyan Kaga (Harney Peak), indicated by the small black arrow to the lower-right of the larger rectangle. If you imagine three peaks within that rectangle, aligned in such a way that they create a mental line pointing to Harney Peak, then you can see that Orion's belt in this map will point "down and to the right" to get to the Pleiades (represented by Harney Peak). Below, we zoom-in on the area in the black rectangle from the map above: You should be able to plainly see the three stars of "Orion's belt" -- they are marked with the "hourglass" symbol of a "cone inverted over a cone," which Ronald Goodman explains in his book should be thought of as a vortex over a vortex: the upper vortex being the star and the reflected vortex below representing "the related earth site" (page 2 of the book). 
I have placed the double-vortex star symbols just below and slightly to the left of each mountain on the terrain map: hopefully you can make out the three peaks, pointing in a line towards Harney Peak (which is not visible in this map, but would be located off the map, down and to the right -- see map immediately above this one). It would be difficult to overstate the importance of the sacred Black Hills to the Lakota. Their movement throughout the year to the various sites was seen as participation in the renewal of the world. Appendix D of the book contains words from Charlotte A. Black Elk, in which she says that the pattern of movement through the sites in the Black Hills "traces the renewal of creation and the spiritual regeneration of the Lakota" (50). Later, she says: We say that Wakan Tanka created the Heart of Everything That Is to show us that we have a special relationship with our first and real mother, the earth, and that there are responsibilities tied to this relationship. Wakan Tanka placed the stars in a manner so what is in the heavens is on earth, what is on earth is in the heavens, in the same way. When we pray in this manner, what is done in the skies is done on earth, in the same way. Together, all of creation participates in the ceremonies each year. [. . .] So, tonight, walk outside and look up. See the Black Hills Sacred Ceremonies of Spring, and you will understand and know why this place is special and stands first among all places of Maka. And return, in the manner the Lakota have done for thousands of years, to the Heart of Everything That Is, to the heart of our home and the home of our heart (52). There is much to contemplate deeply in these things. I hope that if you are able to do so you can go outside at this time of year, and observe the stars, and as you do so you can reflect upon the Sacred Hoop and the Heart of Everything That Is.
Complete all of the items below to achieve Stage 4, showing that you have thought about the potential risks and how to stay safe for each activity - Create a portfolio of digital media. It might include artwork or a photograph that you alter using creative tools, music, animation, CAD (Computer Aided Design) or 3D sculpture. - Create a film, video, stop-motion animation or podcast and share it using a suitable media sharing tool. - Create a social network profile for your section, a band, local interest group or something similar. - Alternatively you could make a small website that can host content, like the film you made in step 2, photos, poetry or information about your local area. - Use the internet for research: - Choose a local, national, community or Scouting issue, or something from the news or current affairs. - Collect information from different sources, such as spreadsheets, databases, online news services and ‘open access’ data sources. - Put your information together in a structured way, for example grouping similar information. Make sure you know where each piece of information comes from. - Select the information you think is most appropriate and reliable. - Create a multi-page website with your information and make it public. Present your information in a variety of ways – you could use infographics, images or graphs. - Share your website with a wide audience - Explain your sources of information and why you picked out what you did. - Get some feedback on what you have done and make changes to improve your website based on that feedback.
When it comes to contemplating the state of our universe, the question likely most prevalent on people’s minds is, “Is anyone else like us out there?” The famous Drake Equation, even when worked out with fairly moderate numbers, seemingly suggests the probable number of intelligent, communicating civilizations could be quite large. But a new paper published by a scientist from the University of East Anglia suggests the odds of finding new life on other Earth-like planets are low, given the time it has taken for beings such as humans to evolve combined with the remaining life span of Earth. Professor Andrew Watson says that structurally complex and intelligent life evolved relatively late on Earth; by looking at the probability of the difficult and critical evolutionary steps occurring within the life span of Earth, he provides an improved mathematical model for the evolution of intelligent life. According to Watson, a limit to evolution is the habitability of Earth, and any other Earth-like planets, which will end as the sun brightens. Solar models predict that the brightness of the sun is increasing, while temperature models suggest that because of this the future life span of Earth will be “only” about another billion years, a short time compared to the four billion years since life first appeared on the planet. “The Earth’s biosphere is now in its old age and this has implications for our understanding of the likelihood of complex life and intelligence arising on any given planet,” said Watson. Some scientists believe the extreme age of the universe and its vast number of stars suggests that if the Earth is typical, extraterrestrial life should be common. Watson, however, believes the age of the universe is working against the odds. “At present, Earth is the only example we have of a planet with life,” he said. 
“If we learned the planet would be habitable for a set period and that we had evolved early in this period, then even with a sample of one, we’d suspect that evolution from simple to complex and intelligent life was quite likely to occur. By contrast, we now believe that we evolved late in the habitable period, and this suggests that our evolution is rather unlikely. In fact, the timing of events is consistent with it being very rare indeed.” Watson, it seems, takes the Fermi Paradox to heart in his considerations. The Fermi Paradox is the apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilizations and the lack of evidence for, or contact with, such civilizations. Watson suggests the number of evolutionary steps needed to create intelligent life, in the case of humans, is four. These include the emergence of single-celled bacteria, complex cells, specialized cells allowing complex life forms, and intelligent life with an established language. “Complex life is separated from the simplest life forms by several very unlikely steps and therefore will be much less common. Intelligence is one step further, so it is much less common still,” said Prof Watson. Watson’s model suggests an upper limit for the probability of each step occurring is 10 per cent or less, so the chances of intelligent life emerging are low: less than 0.01 per cent over four billion years. Each step is independent of the others and can only take place after the previous steps in the sequence have occurred. They tend to be evenly spaced through Earth’s history, and this is consistent with some of the major transitions identified in the evolution of life on Earth. Here is more about the Drake Equation. Here is more information about the Fermi Paradox. Original News Source: University of East Anglia Press Release
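The arithmetic behind Watson's headline figure is straightforward: four independent steps, each with at most a 10 per cent chance of occurring in the available time, multiply out to at most 0.1^4 = 0.0001, or 0.01 per cent. The short sketch below checks that product, and for contrast runs a Drake-style estimate; note that every Drake parameter value here is a deliberately modest, made-up assumption for illustration, not a figure from Watson's paper or the press release.

```python
# Watson's four-step argument: if each critical evolutionary step
# (bacteria, complex cells, specialized cells, intelligence) has at
# most a 10% chance of occurring, and the steps are sequential and
# independent, the joint probability is the product of the four.
step_probability = 0.10
n_steps = 4
p_intelligence = step_probability ** n_steps
print(f"P(intelligent life) <= {p_intelligence:.4%}")  # 0.0100%

# A Drake-style estimate with illustrative, assumed parameter values:
R_star = 1.0           # star formation rate (stars per year)
f_p, n_e = 0.5, 1.0    # fraction of stars with planets; habitable planets per system
f_l, f_i, f_c = 0.1, 0.01, 0.1  # fractions developing life, intelligence, communication
L = 10_000             # years a civilization remains detectable
N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"N = {N:.2f} communicating civilizations")  # N = 0.50
```

Even with these fairly generous inputs, pushing the life-and-intelligence fractions down to Watson's levels drives N far below one, which is the sense in which his model "works against the odds."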
Students explore all aspects of penguins. They learn that penguins are birds but that they don't fly. Students analyze where penguins are found and the life cycle of penguins. They also inquire about the physical characteristics and adaptations of penguins.
Written By: John Edwin Mason Gordon Parks, the renowned LIFE magazine photographer, and Ralph Ellison, the acclaimed novelist, shared a vision of Harlem and that vision was grim. During the decade after the end of World War II, they collaborated twice on projects intended to reveal underlying truths about the New York City neighborhood that was sometimes called the capital city of black America. Their first effort, in 1947, was never published. In the second, “A Man Becomes Invisible,” which appeared in LIFE on Aug. 25, 1952, Parks interpreted Ellison’s recently published novel, Invisible Man, through images that were by turns surreal and nightmarish. Parks and Ellison were friends as well as collaborators, and both were strangers to Harlem. Their roots, in Kansas and Oklahoma respectively, were culturally and geographically far removed from what Parks once called Harlem’s “shadowy ghetto.” While other African American artists celebrated Harlemites’ cultural achievements, Parks and Ellison both mourned the psychological damage that racism had inflicted on them. There was no room in their Harlem for a Duke Ellington or a Langston Hughes, or even for the ordinary pleasures of love and laughter. Instead Harlem was, in Ellison’s words, “the scene and symbol of the Negro’s perpetual alienation in the land of his birth.” It is likely that the idea for the visual homage to Invisible Man came from Parks. The book was, after all, the work of a close friend and had been partly written while Ellison was housesitting for the Parks family in 1950. Invisible Man had been published to nearly universal critical acclaim and was one of the most talked about books of the year. “A Man Becomes Invisible” was not LIFE’s first visualization of a book by an African American writer. For “Black Boy: A Negro Writes a Bitter Autobiography,” published in 1945, photographer George Karger recreated scenes from Richard Wright’s highly praised memoir. 
The images were dramatic but straightforward, illustrating the book rather than interpreting it. Parks, on the other hand, produced a self-consciously subjective interpretation. Both of Parks’ collaborations with Ellison were the subject of Invisible Man: Gordon Parks and Ralph Ellison in Harlem, an exhibition at the Art Institute of Chicago. Writing in the exhibition’s catalog, curator Michal Raz-Russo noted that Parks made dozens of photographs for “A Man Becomes Invisible.” Many were gritty Harlem street scenes that Parks shot in a documentary mode. In others, he staged scenes in order to capture the surreal elements of Ellison’s novel. Raz-Russo argues that the photographic record suggests that Parks envisioned a more comprehensive interpretation than was possible in the three pages that his editors gave him. (Parks’ memoirs are silent on the matter.) In print, LIFE published four of Parks’ photographs, each more surreal than documentary. The magazine’s most alert readers would have noticed that the first photograph in “A Man Becomes Invisible” did not depict a scene that appeared in Invisible Man. Instead it extended Ellison’s narrative, as Matthew S. Witkovsky, head of the photo department at the AIC, notes in his catalog essay. Occupying most of the page, it showed the novel’s unnamed narrator emerging through a manhole on a Harlem street. Below him was the sanctuary that had been his escape from the absurd and brutal forces of racism that had nearly destroyed him. Ellison had ended his story with his narrator preparing to reenter the world, but not having done so. Parks visualized the narrator’s reentry, capturing the wariness that he would have felt. In the final photograph, Parks depicted the novel’s signature scene: the narrator in his underground lair, where he fought off his sense of invisibility in the glow of 1,369 lightbulbs, drinking sloe gin and listening to Louis Armstrong records. Above him burned the lights of New York’s nighttime skyline. 
A composite of two negatives, the image was a metaphor for the psychological damage that racism had inflicted on the narrator and, by extension, on all black Americans. Two nightmarish photographs were included in the photo-essay. In the first, Parks created a hallucinatory image out of what had been a straightforward documentary photograph of a Harlem shop window filled with religious symbols and a skull. Farther down the page, he evoked a moment in which the leader of a stand-in for the Communist Party that Ellison called “the Brotherhood” attempted to intimidate the narrator, who had come to believe that the group was exploiting him, by removing his glass eye and tossing it into a glass of water. The uncredited text that accompanied Parks’ photographs flattened the novel’s plot considerably, emphasizing its anti-Communist elements and downplaying its critique of American racism. LIFE wrote that Parks captured “the loneliness, the horror and the disillusionment of a man who has lost faith in himself and his world.” As Raz-Russo wrote in her catalog essay, “A Man Becomes Invisible” “remains an important tribute to and interpretation of Ellison’s seminal novel.”
As some of you now know, finally I have something that might be considered close to a dream job: I’m now a researcher for Impossible Pictures, the company that did Walking With Dinosaurs, Primeval and a host of other things (website here). This job isn’t going to be forever, but it’s a start, and it doesn’t make me feel any less bitter about being unable to get a job in academia. It means lots of expensive commuting (I don’t live in London, where the company’s based), and it also explains the recent lack of blog posts. But I’m not complaining. So, after turtle genitals and ostrich dinosaurs, I’ve decided to come back with a bang, and blog about… sheep. Sheep are caprine bovids closely related to goats, and the two groups must have shared an ancestor highly similar to the Barbary sheep, Aoudad or Arrui Ammotragus lervia (Cassinello 1998). While caprines may have originated during the Late Miocene in the Mediterranean region (Ropiquet & Hassanin 2004), sheep might be ancestrally Asian given that the oldest sheep (Ovis shantungensis) is from the Pliocene of China. Unlike goats, sheep possess preorbital, interdigital and inguinal glands, they lack a beard, possess a short tail, and usually exhibit spiralling, supracervical horns. The true sheep belong to the genus Ovis: the bharals or blue sheep Pseudois aren’t true sheep, nor is the Barbary sheep. It’s been suggested at times that takin Budorcas and the recently extinct Balearic cave goat Myotragus balearicus are the closest relatives of true sheep. Given that, in archaeological or palaeontological samples, it is difficult to distinguish the incomplete postcranial remains of sheep from those of goats, indeterminate sheep or goat remains are often dubbed ‘ovicaprin’. As Derek Yalden wrote in The History of British Mammals, this is a pretty daft name given that there is no such animal as Ovicapra. 
Indisputable sheep are usually divided into three genetic groups: the Asian argaliforms, the mouflon-like moufloniforms, and the mostly American pachyceriforms. Argaliforms are relatively gracile sheep, suited for life in open, rolling habitats, and they have often been regarded as being represented by two Asian species: the Urial O. vignei and Argali O. ammon. It now seems, however, that these two are not close relatives, and that the Urial is a moufloniform (Bunch et al. 2006). Known from Iran in the west to Pakistan and India in the east, Urials are normally light brown, with a long, light-coloured neck ruff and - for a sheep - a rather long, thin tail. Like several of the sheep species, the species is polytypic, and experts remain undecided on which of the many named subspecies really are valid taxa. For the record, the most widely accepted forms are the Ladakh urial O. v. vignei, Transcaspian urial O. v. arkal [shown in adjacent image], Bukhara urial O. v. bocharensis, Afghan urial O. v. cycloceros and Punjab or Salt Range urial O. v. punjabensis. Most urial populations are declining and in danger of extinction, and none of the subspecies listed here has an estimated population exceeding 12,000. It used to be widely thought that urials were conspecific with mouflons (which explains why the urial is named O. orientalis in some sources), but this has been largely rejected on the basis of chromosome counts and other differences. The largest and arguably most magnificent of wild sheep is the Argali, a wide-ranging species (occurring from Afghanistan to southern Siberia) that possesses prominently ribbed, enormous spiralling horns. Some forms, such as the Tibetan argali, possess a neck ruff. The large body size and hypertrophied display organs of argali have led some to regard them as hypermorphic compared to other sheep.
A confusing array of argali subspecies (15 in total) have been named, and it has proved difficult to determine which represent valid taxa, which are hybrids, which are simply the result of phenotypic plasticity*, and even which are correctly assigned to the right species (Geist 1991). The Severtzov’s or Kyzylkum sheep O. a. severtzovi, for example, was conventionally classified as a form of urial** until chromosome counts showed that it was actually an argali (Bunch et al. 1998). Arkal sheep from Iran, Turkmenistan, Uzbekistan and Kazakhstan, conventionally regarded as the urial subspecies O. v. arkal, have recently been found to be genetically close to Kara Tau argali O. a. nigrimontana and are thus probably members of the argali species as well (Hiendleder et al. 2002). * That is, members of some populations may reach larger body sizes and grow big horns because they have a particularly good diet, or inhabit a particularly favourable environment, compared to other populations. ** Though, actually, it was first named (in 1914) as a distinct species. The seven most widely accepted argali subspecies are the Altai argali O. a. ammon, Kara Tau argali O. a. nigrimontana, Nyan or Tibetan argali O. a. hodgsonii [shown in image above], Pamir argali or Marco Polo argali O. a. polii, Tien Shan argali O. a. karelini, Gobi argali O. a. darwini and the probably extinct Northern Chinese or Shansi argali O. a. jubata. Of these, recent studies have supported the validity of at least Altai argali, Kara Tau argali, Tibetan argali, Marco Polo argali and Gobi argali and, among the additional subspecies, the Gansu argali O. a. dalailamae, Kuruktag argali O. a. adametzi, Sair argali O. a. sairensis, Severtzov’s or Kyzylkum sheep O. a. severtzovi, Kazakhstan argali O. a. collium and Littledale argali O. a. littledalei have also been regarded as valid taxa (Hiendleder et al. 2002, Wu et al. 2003) [adjacent picture shows Littledale argali. 
Unfortunately, many of the subspecies are rarely photographed except as hunters' trophies, so I'm sorry that photos of dead animals are the only ones we have available. While hunting of these animals does, arguably, add to the local economy, it is debatable whether sustained hunting is wise given the low numbers of some populations]. Perhaps the best known of these animals (viz, the one usually mentioned and/or illustrated in books) is the remarkable Marco Polo argali from the Pamir Mountains of Afghanistan, Kyrgyzstan, Tajikistan, Pakistan and China. It possesses the longest horns of any argali, with individual horns recorded as reaching nearly 2 m (measured along the spirals) in some individuals. Despite frequent comments that it is the largest argali and thus the largest sheep, it is apparently exceeded in size by record-holding Altai and Gobi argalis. The size and fantastic appearance of these sheep make them attractive targets for trophy hunters and, as an example, in Tadjikistan in 1995, hunters were able to kill argali for a fee of US$10,000-20,000. North America is home to two species of wild sheep. The Bighorn O. canadensis occurs from northern Mexico to SW Canada, occurring in mountainous environments as well as deserts. Bighorns are not, apparently, named for their big horns. Rather, they are named for the Big Horn Mountains of Wyoming and Montana, though unfortunately I forget where I read this. Whatever, the horns are indeed big in the mountain-dwelling bighorns, and can have a combined weight of almost 20 kg, which is about equal to the weight of the rest of the skeleton. Fights between males [see adjacent pic] have been recorded as lasting over 25 hours, though this is not constant fighting of course and involves lots of resting and posturing.
Bighorn do have accidents while climbing and fall to their deaths, but data indicate that such incidents are rare, with, for example, five deaths out of an annual count of 42 being due to falling (Kamler et al. 2003). For whatever reason, bighorns in some populations seem to have, err, flexible sexual preferences and it has even been claimed that some bighorn populations are 'homosexual societies' where same-sex courtship and mating is routine among males (Bagemihl 1999). Courting males use the same routine of stylized gestures and postures as do male-female pairs, including a low-stretch, a head twist, a foreleg flick, and mutual grooming, head-rubbing and genital licking, and culminating in penetration. Apparently, some female bighorns morphologically mimic males and thereby solicit extra attention. Freemartins have been recorded among bighorns (Kenny et al. 1992) [freemartinism describes the condition where the female twin of a male develops intersexual characters: she may be born with both male and female genitalia, and display male appearance and behaviour]. You've probably heard that research on 'gay' sheep has been plagued by controversy (Steve Bodio covered this subject here), and I find it a difficult area to discuss as I just don't know how far we should go in comparing this behaviour with that of our own species. It is, however, always worth reminding people (especially homophobes and right-wingers) that, despite what they say, homosexuality is not 'unnatural' given that it is pervasive in the animal kingdom. And on that note, I leave. More on sheep at another time.

Refs - -

Bagemihl, B. 1999. Biological Exuberance: Animal Homosexuality and Natural Diversity. St. Martin's Press, New York.

Bunch, T. D., Vorontsov, N. N., Lyapunova, E. A. & Hoffmann, R. S. 1998. Chromosome number of Severtzov's sheep (Ovis ammon severtzovi): G-banded karyotype comparisons within Ovis. The Journal of Heredity 89, 266-269.

Bunch, T. D., Wu, C., Zhang, Y.-P. & Wang, S. 2006. Phylogenetic analysis of Snow sheep (Ovis nivicola) and closely related taxa. Journal of Heredity 97, 21-30.

Cassinello, J. 1998. Ammotragus lervia: a review on systematics, biology, ecology and distribution. Annales Zoologici Fennici 35, 149-162.

Geist, V. 1991. On the taxonomy of giant sheep (Ovis ammon Linnaeus, 1766). Canadian Journal of Zoology 69, 706-723.

Hiendleder, S., Kaupe, B., Wassmuth, R. & Janke, A. 2002. Molecular analysis of wild and domestic sheep questions current nomenclature and provides evidence for domestication from two different subspecies. Proceedings of the Royal Society of London B 269, 893-904.

Kamler, J. F., Lee, R. M., DeVos, J. C., Ballard, W. B. & Whitlaw, H. A. 2003. Mortalities from climbing accidents of translocated bighorn sheep in Arizona. The Southwestern Naturalist 48, 145-147.

Kenny, D. E., Cambre, R. C., Frahm, M. W. & Bunch, T. D. 1992. Freemartinism in a captive herd of Rocky Mountain bighorn sheep (Ovis canadensis). Journal of Wildlife Diseases 28, 494-498.

Wu, C. H., Zhang, Y. P., Bunch, T. D., Wang, S. & Wang, W. 2003. Mitochondrial control region sequence variation within the argali wild sheep (Ovis ammon): evolution and conservation relevance. Mammalia 67, 109-118.
Leningrad Zoo is one of the oldest zoos in Russia. Having preserved its original late-19th-century outline, the Zoo is now part of the city's architecture and historical heritage. The Zoo was founded in the centre of St. Petersburg on August 1 (old calendar; August 14, new calendar), 1865. Originally it was a private menagerie, opened and funded by a Dutch Prussian citizen, Julius Gebhardt, and his wife Sophia. Over the following fifty-plus years it changed owners many times while remaining private. The Zoo became state-run in 1917 when, after the revolution, the new government passed a "Decree on the Nationalization of Private Places of Entertainment". After the nationalization of the Zoo, in the period 1918-1941, an Academic Board was formed which consisted of the leading scientists of the time. During those years, a scientific library was created, and the Zoo became an institution that carried out scientific research and sent people on scientific expeditions. In this period a Young Zoologists Club (YZC) was founded, which still exists at the Zoo today. Together with the city, the Zoo survived the most horrific page in its history: the Great Patriotic War and the Siege. Many staff members went off to the front, and a great many were evacuated to Kazan. Still, many animals and keepers remained in the city, and the Zoo workers kept taking care of the animals and showing them to visitors. The Zoo closed only once, during the terrible winter of 1941-1942, and as early as the spring of 1942 it opened again. The roughly twenty people who saved the animals during the war accomplished a real feat; many of them lived right in the Zoo to be closer to their animals. Sixteen Zoo workers were later awarded the medal "For the Defence of Leningrad". In remembrance of their feat, it was decided not to change the name of the Zoo, but to keep the old one: Leningradsky Zoopark.
To the right of the central entrance to the Zoo, there is a plaque commemorating those horrific years. There is also a museum, "The Zoo during the Siege of Leningrad", housed in the Bear House, which survived the Great Patriotic War and is the oldest building at the Zoo today. In this museum you can learn about the domestic life of the zoo workers during the war years and get a detailed account of our colleagues' feat during the Siege. During the early post-war years, the Zoo recovered rapidly: by 1951 its collection consisted of more than 150 species. The Zoo acquired the animals who later became the founders of animal dynasties. Thus on August 9, 1956, a pair of giraffes arrived: Malchik and Juliette, who went on to produce 12 offspring (a world record). Their youngest granddaughter, Sonya, is still living at the Zoo. In those years, a good tradition of thematic educational events for visitors was established at the Zoo. Beginning in 1948, the staff organised Days of Birds, with educational quizzes and competitions for the best nesting box. From 1951 on, New Year celebrations were added. Today we arrange thematic days twice a month on average. They are typically dedicated to memorable dates and to animals from our collection. On these days, we have free tours around the Zoo, games, quizzes, and feeding shows. Today Leningradsky Zoopark remains a unique museum of wildlife in St. Petersburg as well as a conservation and educational institution. It continues its unceasing development, trying to stay up-to-date, enhancing animal welfare with the best European standards as its aim, and improving the quality of its service. For more information, please see our Development page. Unfortunately, the pre-revolution and even pre-war archives have not survived to the present day: in 1940 they were burnt by someone's order from a higher authority.
Up to now the Zoo has collected some historical archives, a good photographic gallery, and a big collection of slides, but these began accumulating only about 40 years ago. However, the city archive has preserved quite a number of materials, and a lot of interesting details can be learned from newspaper and magazine publications in the library of the Academy of Science and the Public Library. In the Zoo's own library, a unique volume survived: "The History of Leningrad Zoological Garden". The Zoo welcomes everyone who has photos, printed materials, or simply memories of the pre- and post-war Zoo to help reconstruct its history. Please e-mail [email protected] or call us at +7 (812) 232 8260
To help students understand that the patterns of stars in the sky stay the same and different stars can be seen in different seasons. In this lesson, students learn more about the patterns of stars in the night sky by engaging in an interactive activity called Star Search as well as a hands-on activity where they explore the night sky. These activities help emphasize that stars appear to move during the night, but really it is the earth that is rotating and we who are moving. As the earth makes its annual journey around the sun, different star patterns are seen in the night sky. The Motivation uses a tool that shows what the night sky looks like on any given date and is used to show that, from month to month, the constellations have a different location in the sky. Then, in the Development, students should do the Star Search activity, which delves a bit deeper into specific constellations and the stories behind them. It introduces constellations in the night sky and the concept that the locations of these constellations are different depending on the season. The resource is broken down into the night sky of the four seasons (of the Northern Hemisphere), and students learn to identify the locations of four constellations in each season using the constellations' alpha stars. Finally, students make a sky map which will guide and inspire them to do their own star gazing. The hands-on activities are an introduction to star gazing. This lesson should provide the knowledge and inspiration to do some star gazing at home in the real night sky. According to research, there are no indications of misconceptions related to constellations. However, the research base shows that it is counterintuitive for students to understand that the sun is a star (and that planets orbit the sun). (Benchmarks for Science Literacy, p. 335.) While this lesson does not address our solar system, the topic of the sun may come up since stars are the focus.
Keep this misconception in mind as you pursue space-related activities with your students. Students in grades three through five will be curious and ask questions about concepts dealing with magnitude. For instance, "How many stars are in the sky? Or in our universe? How far away are the constellations?" Some of your students will find these questions appealing. While the enthusiasm should be encouraged, the ability to understand scales of size will come in later grades. (Benchmarks for Science Literacy, p. 62.) Benchmarks also points out that learning the names of the constellations is not important in itself. This is in line with the National Science Education Standards' guidance of putting less emphasis on "knowing scientific facts and information" and more emphasis on "understanding concepts and developing abilities of inquiry." (National Science Education Standards, p. 113.) Keep in mind that planets, such as Mercury, Venus, Mars, and Jupiter, can sometimes be seen in the night sky. While the stars make their own light, light from the planets is reflected sunlight. This lesson will talk about constellations as fixed patterns in space, with their movements in the night sky a result of the Earth moving around the sun. The planets other than Earth are also moving around the sun, and their movements are independent of the apparent movements of the constellations. A quick search on the Internet can give guidance as to which planets are currently prominent and how to locate them. Also, when students go home to look at the night sky, mention that they may want to look for the Milky Way. The Milky Way is our galaxy of about 200 billion stars and looks like a "river" or "band" of stars in the night sky. Late summer or winter evenings in a very dark sky are best for spotting the Milky Way. The focus of the lesson—that the patterns of stars are fixed and move across the sky gradually—is the concept for students to retain.
We recommend that you take five minutes to become familiar with the Starry Night Sky Chart and bookmark it. Have your projector set up before the lesson begins. The Starry Night Education site has a Starry Night Sky Chart that allows you to put in your zip code and see what the night sky looks like in your area. Be sure that under the options menu there are check marks next to “Show Constellations,” and “Label Stars” and “Show Horizon.” Before projecting the image, get a baseline of your students’ knowledge by asking these questions: - Do you know what a constellation is? - (Students will likely know, but if you need to provide an answer, you can say it is a group of stars that makes a pattern and has a name.) - How many of you have ever spotted the big dipper in the night sky? - (Encourage students to talk about other constellations they may have seen. If your students are familiar with the Big Dipper, proceed to the Starry Night Sky Chart.) - Have any of you ever gone star watching? - (If any of your students have done this, encourage them to share their experiences.) Show students that you are putting in your zip code and pulling up the night sky in your area. Point out the marked constellations and tell students that the grouping of stars that make a constellation form a pattern. Note what month it is and ask students if they think the constellations will appear in the same place next month. Take some answers and then point out one constellation and tell students to remember where it is and what it looks like. Then, under the Date & Time menu, go to the next month. What you will notice is that the constellation has moved. You can demonstrate this as you move forward with the next month. You may want to do several months and you may want to jump back and forth a bit to emphasize that the constellations are moving from month to month. - Are you surprised that the night sky changes? - The sky appears to be different from night to night, but what stays the same? 
- (Students should have observed that even though the constellations appear to move across the sky, the constellations themselves are still the same. If they have not understood this yet, it is ok as this will come up again in the lesson.) Tell students to use their Star Search student esheet to read instructions on how to use the Star Search interactive activity. They will click on a constellation box, find it by determining its alpha star, and then read the Greek mythological story about it. Make a point of telling students that the names of the constellations come from myths and that myths are stories or tales that are not based on fact. Students are instructed to find the four constellations under “Spring:” The Big Dipper, Leo, Hydra, and Auriga. After students have found and read about these four, ask: - Were the constellations easy to find and how did you go about doing so? - (Gauge whether or not students used the alpha star to help them. They could feasibly click on each alpha star to find the constellation. Encourage students to look at the pattern of stars to find the constellations.) - What did you learn about the constellations? - (It is in this activity that students will learn about the ancient Greeks and the stories that describe the constellations. This is an opportunity for students to talk about the characters.) Instruct students to go back to their Star Search esheet. There they should click the “Learn More” button, which gives them the background on how the constellation characters came to be. Then they should find the four constellations and read about each in the other three seasons. If all of your students are at third-grade reading level or higher, finding constellations and reading about each will not take long. If you need to adapt the lesson for time, have students only do one more season after “Spring.” When they are finished, ask: - Why do you think there are different constellations to find in each season? 
- (Students may have realized in the Motivation that the stars appear to move in the night sky. This conversation will emphasize that thought. If they haven’t realized this yet, ask, “Do you think that the same constellations are always in the same place?” If they do not know the answer, share with them that as the earth rotates each night, the stars appear to move. And as the earth revolves around the sun, different stars will be seen from month to month.) - Do you think that knowing where one constellation is in the sky can help you find others? - (Students may realize the concept of a map of the night sky. As they did the Star Search exercise, they could see where the constellations were in relation to one another. You may want to introduce the concept of a night-sky map by discussing how land maps work.) Students will now build their own evening sky map. Hand out the Making a Map of the Night Sky student sheet. Also hand out the Circular Map of the Sky and the Outer Sleeve that you printed from the Sky and Telescope site. There are instructions for students on their student sheets. They should be able to independently make their own sky maps. There is a step in which they need to put two staples into their map holders. You will want to either pass around several staplers or walk around to staple each student’s map holder. Now students should line up today’s date with 7 p.m. Assign students to do some star searching as homework (which you may change depending on the weather and day of the week). Students should find three constellations and record their observations about them. This assessment can follow the making of the sky map or it can follow the assignment to go star watching. Have students look at their evening sky maps, which should be set for a recent date and 7 pm. Have them turn to a later date at 7 pm. Then, again, have them move the wheel to a later date at 7 pm. 
For instance, they may move from December 14 to December 17 at 7 pm to December 21 at 7 pm. Ask them to describe what happens to the constellations. This is where you can ascertain if students notice that both the patterns (constellations) stay the same and that the stars appear to move in the night sky. Now send them to their student sheet where they will write their observations. You may use the discussion and their writing assignment to see if they understand what is stated as the purpose of this lesson. Extend the lesson by having students do a series of night-sky observations. Have students write down their observations for one week. They should note the date, time, and place for each observation. They can compare their observations. These Science NetLinks resources could be used to help extend the ideas in this lesson: - Lunar Cycle Calendar is a tool that includes printable calendars on which students can record their observations of the moon. It also provides illustrations of the phases of the moon. - Lunar Cycle Challenge offers an online activity in which students "drag" moons to their correct places in lunar cycles.
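The date-wheel observations above can also be quantified with simple arithmetic: because the earth travels about 1/365 of its orbit each day, the star field appears shifted westward by roughly one degree per night, which means any given star rises about four minutes earlier each evening. A minimal sketch of that arithmetic (the constant and function names here are ours, for illustration):

```python
# Each solar day, Earth moves ~1/365.25 of its orbit, so the stars
# appear shifted west by ~0.99 degrees per night; equivalently, a
# given star rises roughly 4 minutes earlier each night.

DEGREES_PER_DAY = 360.0 / 365.25      # apparent westward drift of the star field
MINUTES_PER_DAY = 24 * 60 / 365.25    # how much earlier a star rises each night

def drift_after(nights):
    """Return (degrees of sky, minutes earlier) after a number of nights."""
    return nights * DEGREES_PER_DAY, nights * MINUTES_PER_DAY

# One week of observations, as in the December 14 -> December 21 example:
deg, minutes = drift_after(7)
print(f"{deg:.1f} degrees, about {minutes:.0f} minutes earlier")
# prints: 6.9 degrees, about 28 minutes earlier
```

So over a single week the constellations drift by about 7 degrees, a shift large enough for students to notice on the map wheel, while over a full year the drift adds up to 360 degrees and the sky returns to where it started.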
Traveling to Norway may not be in the cards for you right now, but you can still see Norway's turquoise waters thanks to NASA. Currently, Norway's Hardangerfjord, the fourth largest fjord in the world and a popular tourist attraction in the country, is turquoise. No, not just a pretty blue: it has turned a turquoise so vibrant that it can be seen from space. On May 30 and June 12, NASA captured images of the fjord's turquoise waters from orbit. The water's color is due to blooms of a plankton called Emiliania huxleyi that have grown so extensively that they have changed the color of the water. NASA explains that the shells of this plankton reflect the color, and that is what has caused the shift in tone. While some species of microorganisms can damage the environment by killing fish in the water, Emiliania huxleyi is perfectly safe and only poses a problem for boaters, who will have limited visibility. If you want to see the teal waters of Hardangerfjord, you can check out NASA's Twitter. [Via House Beautiful]
Discover Language 1 Age: 9 to 24 months Language acquisition and human interaction are a basic human need. Learn how to support your child's communication skills with these simple and fun activities that can be done at home or on the go. Discover the basic tools of the Montessori language curriculum and develop your child's love of learning. Whether you have a pre-verbal child or a budding reader, these were created to support you and your child's language development. 5 videos + E-Book, plus list of reading books This bundle includes: • Video #1: Three-period language lesson with fruits • Video #2: Three-period lesson with animals • Video #3: Matching objects to similar cards • Video #4: Help your child refine language and sounds • Video #5: Reading phonetic • List of reading books • Language sequence from 3 to 6 years
Meditation is a technique which gives a unique quality of rest to mind and body. It allows stress and tiredness to be released in a natural way, resulting in greater energy, clarity and enjoyment of life. There are many definitions of meditation, but the truth is that any attempt to define meditation with words falls short of truly explaining the practice. Harold Bloomfield M.D. has a short, simple definition that is useful to start with. He writes “Meditation quiets our usually busy consciousness. Through physically and mentally stilling the mind, we achieve heightened mental clarity”. In the end, it’s only by meditating yourself that you really understand what it means to meditate. What is the purpose of meditation? Traditionally, meditation was (and still is) used for spiritual growth. Meditation can bring on increased awareness, greater ability to live in the moment, freedom from the ego, union with God or the universe, inner peace and many other spiritual benefits. More recently, meditation has become a valuable tool for finding a peaceful oasis of relaxation and stress relief in a demanding, fast-paced world. After meditating, you will feel more relaxed, calmer and have a greater sense of well-being. You will notice that your reaction to stressful events changes and that you act with greater control and in more constructive ways. Meditation has a cumulative effect; after a time you will find that the qualities of calmness, inner-peace and tranquility will integrate with all aspects of your life. Thousands of research studies indicate that meditating for as little as fifteen or twenty minutes a day promotes improved mental and physical health and well-being. 
- Improved mental health - Less stress - Increased creativity - Less anxiety and depression - Improved memory - Greater emotional stability - Increased learning ability - Better concentration and focus - A sense of fulfilment and purpose - Improved self-confidence - Increased spirituality - Greater awareness and intuition - More energy - Reduction of stress-related illnesses - Lower blood pressure - Boosted immune system - Better sleep patterns - Faster recovery - Improved co-ordination - Increased stamina - Reduced pain - Improvement in heart and artery health - Better motor skills - Increased brain function How long does it take to learn to meditate? The Ancients had no problem in saying that learning to meditate was a long and arduous process. Using traditional techniques, it can take many years of difficult practice before you gain control of your mind and attain meditative states. It is unfortunate that most modern materials concerning meditation still put forward traditional methods, but portray them as easy and quick to master. This accounts for the high proportion of failures among people trying meditation – either in terms of those who get little out of their efforts or those who abandon meditation completely. Krishnamurti said that "The greatest advance in spiritual advancement will come about with the integration of Eastern and Western knowledge". Meditation Machines represent one aspect of this integration by using Western technology to enable people to meditate quickly and easily. With a Meditation Machine, you can start to achieve the same deep mental state as a Zen monk. But instead of taking years of practice, this will happen in your first session – automatically and effortlessly. How long do you need to spend meditating? You do not need to spend a great deal of time in meditation for it to start having positive effects in your life. Just meditating for 15 or 20 minutes, three or four times a week will give you the benefits of meditation.
If you are able to devote more time to your meditation practices, then obviously, you will derive even greater benefits. Before long, you might find yourself enjoying meditation so much that you will make the effort to sit longer in meditation or to meditate more often. There is no ‘best’ amount of time to be spent in meditation. Everyone is different and you will soon find the pattern that is best for you. Some people might like to have a single 30-minute session either in the morning or the evening. Other people may find that two shorter sessions, say 15 minutes each, at the start and end of the day suit their routine and circumstances. You’re the one who knows what will fit with your schedule and what will work best for you. Whatever you decide, if you begin to meditate and stick with it, you will soon feel a reduction in stress, an improvement in your ability to relax, a general overall feeling of better health and well-being, and many other physical and psychological benefits.

What does it feel like to meditate?
There is no way to know in advance what meditation will feel like for you – only that it will be a positive experience. Meditation is as individual as you are. It feels different to different people, and sensations range across a broad spectrum of reactions and feelings. Amongst the reactions often reported by people starting out with meditation are:
- peaceful and relaxed
- quietly positive and energised
- ecstatic and tingling
- surging mental and physical energy
- gently floating
- connection to a higher source
- a feeling of oneness

One thing with which just about everyone agrees is that after meditating you will feel more relaxed and calmer, and have a greater sense of well-being.
I hadn't heard of the phrase 'legacy systems' before, and apparently it's most often used in relation to the computer world. However, according to Wikipedia, it may also be used to describe human behaviors, methods, and tools. For example, timber framing using wattle and daub is a legacy building construction method. In other words, the phrase applies to the use of old methods (or in computerese, old programming systems) that are still functioning alongside more recent ones. We tend to think that the computer world continues to upgrade at every point - certainly for us end-users that seems to be mostly the case - but for those who work in the basement, as it were, where all the programming is actually done, it seems there are a host of possibilities. Programmers can continue to use old ways of programming alongside new ones without batting an eyelid. But this has its issues, which is why some places advertise legacy system programmer jobs; these are for people who know how to program using the old systems. It's kind of like having an old tradesman on the job with you who can do things that the modern machine doesn't do nearly as well. The Wikipedia article tells us that NASA still uses a lot of 1970s technology because it's too expensive to replace. But NASA's not alone in this: many companies would find it prohibitively expensive to replace their computer systems. So they live with the consequences. I sometimes think I'm a person in the legacy systems mode: I much prefer to play the piano when there's a choice between a piano and a keyboard. For me, so far, there hasn't been a keyboard that's successfully duplicated the touch and feel and weight and much more of an actual piano. But then I'm a luddite in all sorts of things...!
Red, green and white — the colors that symbolize Christmas — are reflected each year in holiday staples such as Santa Claus, Christmas trees and snowmen, but how did they become associated with the most wonderful time of the year? The origin of the holiday’s palette supposedly comes from the colors of Victorian churches in England. The colors have also been linked to Medieval times, as people during the 15th and 16th centuries decorated their local churches with them under the influence of alchemists and astrologers.

What do these colors mean?
Many ancient churches have a wooden structure, called the “quire,” located between the church’s nave and the sanctuary. The parishioners’ taxes built these quires, which were decorated with artistic statues and marble reliefs. Many of these works of art represented local saints and combined several colors, mostly green, red, blue, and gold. Among the different symbols used, medieval churches used blue to represent water and gold to represent fire, two ancient alchemical elements. A golden tint also could signify the divinity of local saints, whose statues carried candles, rosaries or other decorative holy items. Unfortunately, many quires in East Anglia, England, now lie damaged or destroyed. During the 16th-century English Reformation, believers removed many customs considered Catholic. Protestant parishioners broke and altered the churches’ statues in clear defiance of the Catholic Church. Many figures lost their faces, had eyes removed or endured scraping. The red featured in such works held up over the centuries, but areas colored green or gold darkened through neglect. The East Anglian parishioners’ custom of sitting every week in front of wooden panels painted red and green survived until the 19th century. Hence, Victorian-era Englishmen were particularly familiar with these colors, and they drew on these traditions when rebuilding the dilapidated churches using the aforementioned color palette.
The recoloring did not stop at just the churches, as houses shared the vibrant scheme in the form of Christmas tree ornaments. “People usually want red and green color schemes to decorate their houses during Christmas. However, in recent years, silver and golden ornaments usage has skyrocketed, as people use them to add a more special, refined touch. Many customers ask for red ornaments as it alludes to Santa Claus, yet, since Christmas wreaths are green with red flowers, green is also vastly used,” said Jazmin Alarcon Moreno, an interior decorator. “I can boldly state that although the Christmas colors are green and red, different colors have appeared in recent years. However, the original palette does not go out of fashion. Santa Claus, the reindeer, Christmas Eve, and the Christmas Tree are the most important elements for a Christmas decoration,” said Alarcon Moreno. Christmas is a time of joy, and the decorations used to adorn the houses generate that holiday glee. While there are a lot of myths surrounding its color palette — such as the one about Santa Claus and Coca-Cola sharing the same red — the tales do not change the merry celebrations we all experience during this time of the year. (Translated and edited by Mario Vazquez. Edited by Carlin Becker.)
The post “Berry” Christmas: How Red, Green, and White Became the Colors of Christmas appeared first on Zenger News.
There are two main categories of reactions in cell chemistry. CATABOLIC reactions BREAK DOWN food molecules and release energy and chemicals for further use. ANABOLIC reactions BUILD UP complex molecules required by the cell. Anabolic reactions depend on catabolic reactions to provide energy and materials. METABOLISM is the working together of catabolic and anabolic reactions. In most cases the energy needed to make cells work is provided directly or indirectly by sunlight. Some bacteria are able to obtain energy from chemicals they encounter in their environment. These bacteria are an example of the pieces of the ‘jigsaw’ that are not seen in the ‘big picture’. In plants, light energy converted to chemical energy provides the energy needed for anabolic reactions. The chemical building blocks needed for construction come from mineral salts and water in the soil. Inside cells, energy is released from food molecules by chemical reactions. These take place in an ordered sequence, beginning in the liquid cytoplasm and continuing in two different parts of the mitochondria. Mitochondria are organelles suspended in the cytoplasm of the cell. All higher forms of life depend on sunlight to provide the energy that drives the anabolic reaction called photosynthesis.
This course mainly focuses on fire prevention and precaution, and how to assess the potential risks from fire within the workplace; these being the fundamental requirements set out in the Regulatory Reform (Fire Safety) Order 2005. Participants will gain a basic working knowledge of appropriate fire-fighting equipment, evacuation procedures, fire checks and inspections, and the importance of limiting false alarms in the workplace. This course will cover:
• Current legislation
• Fire risk assessments
• Fire prevention
• Fire detection
• Limiting the spread of a fire
A new computer program can generate an infinite amount of original music. A songwriter with no soul. I'm Bob Hirshon and this is Science Update. The composer of this music is only a couple of years old. It's a computer program, designed by scientists at the University of Granada in Spain. Computer scientist Miguel Molina says their artificial intelligence system can crank out infinitely long, spontaneous compositions that suit particular moods. "We have different melodic patterns, different rhythms, different instrument combinations, for every kind of emotion." He says the program could make background music cheaper for businesses like department stores by eliminating the need for licensing fees. But he also thinks the program could be used by human composers, inspiring them with new ideas when they're suffering from writer's block. I'm Bob Hirshon for AAAS, the Science Society.

Making Sense of the Research
Computers have been used in making music for decades now. Today, they're an indispensable element of hip-hop, rap, electronica, and many other kinds of popular music. They can imitate any instrument, generate drum loops, and mix and match elements like samples on top of new material. However, all these kinds of computerized music require a human programmer to tell the computer exactly what to do. In these respects, the computer is really like a conventional instrument. In contrast, Molina's computer program (called Inmamusys, short for Intelligent Multiagent Music System) actually puts music together on its own. Although human programmers loaded Inmamusys with all kinds of information about music theory, the computer doesn't need a programmer to create a new tune. It just spontaneously noodles around within the guidelines it's been given, resulting in music that no person can claim to have created. Inmamusys' compositions aren't likely to win any Grammys or score any top-40 hits.
Without a human artist behind them, they can't convey a specific point of view or deeply felt emotion. Instead, they sound like pleasant, nondescript background music that you might hear in a store, or while you're on hold on the telephone. In fact, that's the type of setting that Inmamusys was created for. You see, even though you don't really pay attention to music you hear in these situations, the company that's using the background music has to pay a royalty to the composer. For no more than an up-front fee to purchase the software, Molina's system could generate an infinite amount of background music free and clear of all royalties, potentially saving big bucks. Of course, human musicians may have reason to worry about Inmamusys, since some musicians get a surprisingly large chunk of their income from Muzak versions of their radio hits, or background music that they compose to order. On the other hand, Molina suggests that the software could be a help when a human composer gets stuck writing a new song: perhaps by providing a random hint as to where the song could go next. If the inspiration pays off, the musician could use it without fear of getting sued by his or her virtual collaborator.

Now try to answer these questions:
- What is Inmamusys? How does it differ from other kinds of musical computers?
- How is the computer program like a human composer? How is it different?
- Do you think computer programs like these could someday compete with human songwriters? Why or why not?

You may want to check out the June 19, 2009 Science Update Podcast to hear further information about this Science Update and the other programs for that week. This podcast's topics include: prairie dogs sound the alarm, turning bed bugs against themselves, bird songs vary by climate, and improving forensic voice comparison.
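The mood-driven selection scheme Molina describes can be sketched as a toy program. This is only an illustration of the general idea, not the real Inmamusys implementation: the mood names, pattern pools, and function names below are invented for the sketch.

```python
import random

# Invented pattern pools keyed by mood, loosely modeled on the article's
# description of "different melodic patterns, different rhythms, different
# instrument combinations, for every kind of emotion."
PATTERNS = {
    "calm": {
        "melodies": ["C-E-G", "A-C-E"],
        "rhythms": ["whole", "half"],
        "instruments": ["strings", "flute"],
    },
    "energetic": {
        "melodies": ["E-G-B", "D-F#-A"],
        "rhythms": ["eighth", "sixteenth"],
        "instruments": ["brass", "drums"],
    },
}

def generate_phrase(mood):
    """Randomly pick one melody, rhythm, and instrument from the mood's pool."""
    pool = PATTERNS[mood]
    return {
        "melody": random.choice(pool["melodies"]),
        "rhythm": random.choice(pool["rhythms"]),
        "instrument": random.choice(pool["instruments"]),
    }

def generate_piece(mood, length):
    """Chain phrases together; raising `length` approximates the system's
    ability to 'noodle around' indefinitely within its guidelines."""
    return [generate_phrase(mood) for _ in range(length)]

piece = generate_piece("calm", 4)
```

Because every phrase is drawn fresh from the pools, the sketch never repeats a fixed composition, which is the sense in which such a system can produce an "infinite amount" of royalty-free background music.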
In the National Geographic News article "Robot Scientist" Equal to Humans at Some Tasks, read about a "robot" that can formulate hypotheses, design experiments, and interpret results on par with the best of its human counterparts.
This volume investigates the features and challenges of medical discourse between medical professionals as well as with patients and in the media. Based on corpus-driven studies, it includes a wide variety of approaches including cognitive, corpus and diachronic linguistics. Each chapter examines a different aspect of medical communication, including the use of metaphor referring to cancer, the importance of ethics in medical documents addressed to patients and the suitability of popular science articles for medical students. The book also features linguistic, textual and discourse-focused analysis of some fundamental medical genres. By combining sociological and linguistic research applied to the medical context, it illustrates how linguists and translation specialists can build bridges between health professionals and their patients.
Pentecost In The Backwoods
Shocking, exuberant, exalted, the camp meeting answered the pioneers' demand for religion and helped shape the character of the West.
June 1959 | Volume 10, Issue 4

A holy frenzy seemed to have taken hold of the West. Throughout the frontier communities, the ecstasy of conversion overflowed into the nervous system. At Cane Ridge, and at a hundred subsequent meetings, the worshipers behaved in ways that would be unbelievable if there were not plenty of good testimony to their truth. Some got the “jerks,” a spasmodic twitching of the entire body. They were a fearful thing to behold. Some victims hopped from place to place like bouncing balls. Sometimes heads snapped from side to side so rapidly that faces became a blur, and handkerchiefs whipped off women’s heads. One preacher saw women taken with the jerks at table, so that teacups went flying from their hands to splash against log walls. Churchmen disagreed about the meaning of these symptoms. Were they signs of conversion? Or demonstrations of the Lord’s power, meant to convince doubters? Peter Cartwright, a famous evangelist of a slightly later era, believed the latter. He told of a skeptic at one of his meetings who was taken with the jerks and in a particularly vicious spasm snapped his neck. He died, a witness to the judgment of Omnipotence but gasping out to the last his “cursing and bitterness.” Besides the jerks, there were strange seizures in which those at prayer broke into uncontrollable guffaws or intoned weird and wordless melodies or barked like dogs. It was wild and shaggy, and very much a part of life in the clearings. Westerners wanted to feel religion in their bones. In their tough and violent lives intellectual exercises had no place, but howls and leaps were something that men who were “half-horse and half-alligator” understood. It was natural for the frontier to get religion with a mighty roar.
Any other way would not have seemed homelike to people who, half in fun and half in sheer defensiveness, loved their brag, bluster, and bluff. Yet there was something deeper than mere excitement underneath it all. Something fundamental was taking place, some kind of genuine religious revolution, bearing a made-in-America stamp. The East was unhappy with it. For one thing, camp-meeting wildness grated on the nerves of the educated clergy. All of this jigging and howling looked more like the work of Satan than of God. There were ugly rumors, too, about unsanctified activities at the meetings. Some candidates for salvation showed up with cigars between their teeth. Despite official condemnation, liquor flowed free and white-hot on the outskirts of the gatherings. It might be that corn did more than its share in justifying God’s ways to man. Then there were stories that would not down which told how, in the shadows around the clearing, excited men and women were carried away in the hysteria and, as the catch phrase had it, “begot more souls than were saved” at the meeting. All these tales might have had some partial truth, yet in themselves they did not prove much about frontier religion. As it happened, a part of every camp-meeting audience apparently consisted of loafers and rowdies who came for the show and who were quite capable of any sin that a Presbyterian college graduate was likely to imagine. Yet it was not the unscrubbed vigor of the meetings that really bothered conservatives in the Presbyterian Church. Their fundamental problem was in adjusting themselves and their faith to a new kind of democratic urge. Enemies of the revivals did not like the success of emotional preaching. What would happen to learning, and all that learning stood for, if a leather-lunged countryman with a gift for lurid word pictures could be a champion Salvationist? 
And what would happen—what had happened—to the doctrine of election when the revival preacher shouted “Repent!” at overwrought thousands, seeming to say that any Tom, Dick, or Harry who felt moved by the Spirit might be receiving the promise of eternal bliss? Would mob enthusiasm replace God’s careful winnowing of the flock to choose His lambs? The whole orderly scheme of life on earth, symbolized by a powerful church, an educated ministry, and a strait and narrow gate of salvation, stood in peril. Nor were the conservatives wrong. In truth, when the McGreadys and Stones struck at “deadness” and “mechanical worship” in the older churches, they were going beyond theology. They were hitting out at a view of things that gave a plain and unlettered man little chance for a say in spiritual affairs. A church run by skilled theologians was apt to set rules that puzzled simple minds. A church which held that many were called, but few chosen, was aristocratic in a sense. The congregations of the western evangelists did not care for rules, particularly rules that were not immediately plain to anyone. In their view, the Bible alone was straightforward enough. Neither would they stand for anything resembling aristocracy, whatever form it might take. They wanted cheap land and the vote, and they were getting these things. They wanted salvation as well—or at least free and easy access to it—and they were bound to have that too. If longer-established congregations and their leaders back east did not like that notion, the time for a parting of the ways was at hand. In politics, such a parting is known as a revolution; in religion, it is schism. Neither word frightened the western revivalists very much.
A detail of the Artemesium bronze believed to represent Zeus (or Poseidon), 460 BCE. (National Archaeological Museum, Athens). Image courtesy of Robert Consoli of SquinchPix, republished with permission under a content sharing agreement. Original image by Robert H. Consoli. Uploaded by Mark Cartwright, published on 25 May 2013 under the following license: Copyright. You cannot use, copy, distribute, or modify this item without explicit permission from the author.
Bird Flu Now Human Killer
Dutch Vet Dies of Fowl Plague Pneumonia
WebMD News Archive
April 23, 2003 -- It's not as scary as SARS -- yet. But world health experts are keeping a nervous eye on the Netherlands, where an out-of-control bird flu recently killed its first person. So far, 82 people -- all in the Netherlands -- have come down with the bird virus. Three of them caught it from another person. The virus causes pinkeye in most people, but some get flu symptoms. One was a 57-year-old veterinarian. He developed double pneumonia and died. Albert D.M.E. Osterhaus, PhD, DVM, studies new viruses at the Institute of Virology, Erasmus University, Rotterdam, the Netherlands. His lab was the first to prove that the SARS virus is the sole culprit in the ongoing SARS outbreaks. Even though he's continuing to study SARS, he keeps an eagle eye on bird flu. It's known to science as highly pathogenic avian influenza, or HPAI. "The chance of something awful happening from HPAI is very small," Osterhaus tells WebMD. "But even a very small chance of something very terrible is something to take seriously. If this would lead to pandemic influenza, that would have major consequences." This is what keeps world health experts up at night, says medical epidemiologist Marjorie P. Pollack, MD. Pollack is a disease-surveillance monitor for the International Society of Infectious Diseases. "The big concern for a pandemic is getting a new flu strain that nobody already has immunity against," Pollack tells WebMD. "With current human flu viruses you see shifts and drifts that let them infect some people who've had it before, but usually you see strains that a significant number of people remain protected against." The bird flu is sweeping the Netherlands and has spread into Belgium. In their effort to contain the virus, authorities have killed some 10 million chickens. It's not a new virus, but it's usually not very deadly.
In fact, a less deadly outbreak in eastern Connecticut is being contained with a mass vaccination program. A similar, relatively harmless virus likely came to the Netherlands in migrating ducks or geese. Somehow, it mutated into HPAI, or what used to be called fowl plague.
Let’s Tie the Knot: Marriage of Complement and Adaptive Immunity in Pathogen Evasion, for Better or Worse
- Department of Medical Microbiology, University Medical Center Utrecht, Utrecht, Netherlands

The complement system is typically regarded as an effector arm of innate immunity, leading to recognition and killing of microbial invaders in body fluids. Consequently, pathogens have engaged in an arms race, evolving molecules that can interfere with proper complement responses. However, complement is no longer viewed as an isolated system, and links with other immune mechanisms are continually being discovered. Complement forms an important bridge between innate and adaptive immunity. While its roles in innate immunity are well-documented, its function in adaptive immunity is less characterized. Therefore, it is no surprise that the field of pathogenic complement evasion has focused on blockade of innate effector functions, while potential inhibition of adaptive immune responses (via complement) has been overlooked to a certain extent. In this review, we highlight past and recent developments on the involvement of complement in the adaptive immune response. We discuss the mechanisms by which complement aids in lymphocyte stimulation and regulation, as well as in antigen presentation. In addition, we discuss microbial complement evasion strategies, and highlight specific examples in the context of adaptive immune responses. These emerging ties between complement and adaptive immunity provide a catalyst for future discovery, not only in the field of adaptive immune evasion but also in elucidating new roles of complement.

The complement system is an evolutionarily conserved branch of innate immunity, comprising a proteolytic cascade of numerous proteins in serum. Complement is historically regarded as a first line of defense against invading pathogens, and activation of the enzymatic cascade leads to their rapid recognition and elimination.
Activation of this cascade leads to leukocyte chemotaxis, phagocytosis, and direct cell lysis via the membrane attack complex (MAC). While complement has long been regarded as an isolated immunological pathway, mounting recent evidence ties complement to many other physiological processes. Complement is considered an important link between innate and adaptive immunity. While its roles in innate immunity have been well-characterized, we are only beginning to understand the complex “crosstalk” between complement and the adaptive immune system. Historically, complement’s only established role in adaptive immunity was its function in the humoral antibody response, in which complement opsonization reduces the B cell activation threshold in response to antigens, subsequently leading to increased antibody production. Since the initial discovery more than 40 years ago, many new roles have been described, including antigen capture by follicular dendritic cells (FDCs) and antigen presentation. In the last decade, numerous studies have elucidated direct roles of complement in T cell immunity. Moreover, new mechanisms are continually being identified, suggesting an increasingly crucial role of complement in directing adaptive immune responses. While the human immune system is highly evolved to mount an effective response to nearly any encountered threat, invading pathogens do not give up without a fight. In a struggle for survival, pathogens and their human hosts are engaged in a perpetual arms race. Indeed, many immune evasion strategies have been discovered in recent decades, and much work has focused on complement evasion by human pathogens (reviewed in Lambris et al., 2008; Stoermer and Morrison, 2011; Ricklin, 2012; Zipfel et al., 2013; Merle et al., 2015b; Garcia et al., 2016). Many of the goals of pathogenic complement evasion are clear, particularly inhibition of phagocytosis and direct killing by the MAC.
However, the role of complement in directing adaptive immune responses against pathogens is only beginning to be understood, and how pathogens circumvent these processes is less clear. Herein, we describe how pathogens modulate adaptive immunity through complement regulation. We highlight the known mechanisms by which complement drives adaptive immune responses, and summarize evasion strategies used by pathogens to direct adaptive immunity through complement engagement. Furthermore, we postulate generalized mechanisms that pathogens can employ to subvert complement-mediated B and T cell responses.

Complement activation proceeds via three pathways (classical, lectin, and alternative) that, while activated via different mechanisms, all lead to cleavage of C3, the central component of the complement cascade, and the effector functions described above (Figure 1). These mechanisms are briefly described here, but are reviewed in detail elsewhere (Merle et al., 2015a). The classical pathway (CP) is initiated by antigen-bound IgG or IgM, which allows binding of the C1 complex (C1qrs). The protease component, C1s, then cleaves C4 and C2 to form a C3 convertase (C4b2a) on a target surface. The lectin pathway (LP) is activated in a similar manner, but rather than relying on antibodies, mannose-binding lectin (MBL) recognizes carbohydrate patterns conserved among microorganisms. This binding event activates MBL-associated serine proteases (MASPs), which, similarly to C1s, cleave C4 and C2 to form C4b2a. The alternative pathway (AP) is spontaneously activated by hydrolysis of C3 to form C3(H2O), which can form a complex with factor B (FB). Upon cleavage by factor D (FD), a C3 convertase is formed [C3(H2O)Bb], which, like C4b2a, can mediate cleavage of C3 into C3a and C3b. The newly formed C3b molecule can also form an AP C3 convertase enzyme (C3bBb) similarly to C3(H2O). Upon cleavage by C3 convertase enzymes, C3 undergoes a large conformational change, whereby its intrinsic thioester becomes exposed and can quickly react with molecules on cell surfaces.
This reaction mediates covalent cell-surface attachment of the activated cleavage product C3b. Since C3b comprises the C3 convertase of the AP, C3b deposition initiates an amplification loop, which leads to rapid opsonization of the target cell surface. Opsonization by C3b leads to several functional outcomes. High densities of C3b on cell surfaces can lead to formation of C5 convertase enzymes, which cleave C5 into C5a and C5b. This event initiates formation of the MAC, which can directly mediate cell lysis. In addition, the anaphylatoxins (ATs) C3a and C5a are involved in chemoattraction of leukocytes during infection, and have been shown to play emerging roles in regulating T cell immunity (discussed in detail below).

FIGURE 1. Complement activation, regulation, and inhibition. The three activation pathways are shown near the corresponding yellow-shaded boxes (classical, lectin, and alternative pathways). These pathways lead to C3b and C4b deposition, convertase formation, and MAC formation on the target cell surface. Breakdown mediated by complement regulators leads to iC3b and C3dg formation. Complement receptors and regulators are depicted on the host cell surface. Interactions are indicated by dashed black arrows, and enzymatic cleavage events are indicated by solid black arrows. Proteins listed in red boxes are immune evasion molecules that inhibit the complement interactions indicated by the solid red arrow and red “X”. It is unknown whether MP60 (in black font) inhibits the C3d-CR2 interaction, although it binds to C3d. A red inhibitory line indicates inhibition of protein expression. Proteins listed in green boxes with dashed green arrows bind to the indicated complement protein, and those with solid green arrows facilitate the indicated cleavage event.

Complement also has numerous regulatory mechanisms to control complement activation, particularly on host cells where complement activation is undesirable.
Indeed, many host cells express CD55 (decay-accelerating factor, DAF), which dissociates C3 convertases and prevents further amplification. In addition, CD46 (membrane cofactor protein, MCP) and CR1 (CD35) bind to C3b and serve as cofactors for factor I (FI) to cleave C3b to iC3b, which is no longer able to form active convertase enzymes. Thus, complement activation and amplification are slowed. These regulatory events are also mediated by factor H (FH), a soluble complement regulator that binds to anionic markers on host cell surfaces. Likewise, C4b-binding protein (C4BP), a soluble regulator of the CP and LP, facilitates FI-mediated cleavage of C4b. Breakdown of C3b plays an important role in pathogen clearance and the downstream adaptive immune response. C3b is recognized by CR1, which is involved in clearance of target cells and immune complexes from the bloodstream. Concomitantly, CR1 facilitates cleavage of C3b to iC3b [and eventually to C3d(g)], which can also interact with several complement receptors (CRs) on cell surfaces (including CR2, CR3, and CR4), participating in a variety of functions including phagocytosis, antigen presentation, and lymphocyte stimulation.

Complement Regulation of Adaptive Immunity

C3d-CR2 Interaction and B Cell Immunity
While C3b opsonization is necessary to amplify complement activation via the AP, C3b also plays a critical role in pathogen clearance. Erythrocytes bearing CR1 recognize and bind C3b-opsonized pathogens, and mediate transport of these complexes to the liver and spleen for clearance (Merle et al., 2015b). Transport may also be mediated by glycoprotein Ib (GPIb) on platelets, which captures C3b-opsonized pathogens and enhances adaptive immune responses via splenic dendritic cells (sDCs) (Broadley et al., 2016). In the liver, the opsonized pathogens are carried to specialized Kupffer cells bearing CRIg, which can also bind C3b and mediate phagocytosis (Helmy et al., 2006). In the spleen, pathogens are transported to lymphoid follicles, which are protected by resident macrophages and DCs in the subcapsular sinus.
Meanwhile, erythrocyte CR1, in the presence of FI, mediates cleavage of C3b to iC3b and/or C3d, which is recognized by CR2 and CR3. The subcapsular sinus macrophages and DCs can bind and internalize complement-tagged antigens and intact target cells, respectively, via CR3, and translocate them to naïve B cells or FDCs expressing CR2 in the lymphoid follicle (Merle et al., 2015b) (Figure 2A).

FIGURE 2. Crosstalk between complement and adaptive immunity. (A) Depicts the entry of C3d-coated immune complexes into the lymph nodes via the afferent venule. These complexes associate with CR3 on subcapsular macrophages and move from the apical side to the basolateral side, where the immune complex is transferred to CR2 on the surface of naïve B cells within the germinal center (GC), and handed off to FDCs. In the final stages, this complex is then presented to affinity-matured GC B cells for T cell-dependent responses. (B) Highlights the molecular adjuvant effect of C3d. In low antigen conditions, antigen engagement of the B cell receptor (BCR) alone is not sufficient to promote associated Igα/β signaling and B cell activation. Antigen-bound C3d binds CR2, part of the B cell co-receptor complex (together with CD19 and CD81), and coligation of the BCR and the B cell co-receptor complex amplifies the molecular signal that leads to downstream B cell activation. (C) Illustrates the role of locally and intracellularly synthesized complement components in T cell homeostasis and priming. Intracellular C3 is cleaved by cathepsin L (CTSL) into biologically active C3a and C3b. Intracellular C3a/C3aR association is important for T cell survival. Extracellular C3a/C3aR and C5a/C5aR association influences Th1 lineage commitment, and in their absence can lead to iTreg induction. The extracellular association of C3b with its receptor CD46 on T cells influences the metabolic state of the T cell, shifting it from a metabolically quiescent state to an active state (through increased GLUT1, LAT1, and LAMTOR expression).
Costimulation of CD46 upon TCR activation influences the expression of NLRP3 and results in cleavage of intracellular C5. Extracellular C5aR2 activation provides signals that inhibit activation of the NLRP3 inflammasome, leading to increased IL-10 production and Th1 contraction. Paradoxically, intracellular C5aR1, upon activation, leads to activation of the inflammasome, resulting in Th1 induction. (D) Highlights the role of complement in activation of APCs and their crosstalk with T cells. Activation of extracellular C3a/C3aR and C5a/C5aR results in activation of the APC. APC-generated C3a and C5a can bind to the AT receptors on T cells to induce a Th1 response. Absence of C5aR on DCs leads to the induction of Th17 and Tregs. Extracellular stimulation of APCs, primarily DCs, with the soluble complement regulators C4BP and FH leads to a tolerogenic state of the DCs and results in the induction of Tregs. Crosstalk between C5aR and TLR2/4 or CR3 and TLR2/4 on the APC surface results in diminished IL-12 production and downstream Th1 response. Finally, C1q and its receptors are crucial for the silent removal of apoptotic cells (i.e., they provide pro-efferocytic signals), and this can influence the T cell response.

Within the lymphoid follicle, C3d mediates several functions critical to B cell immunity, which have been reviewed previously in great detail (Carroll, 2004; Rickert, 2005; Bergmann-Leitner et al., 2006; Toapanta and Ross, 2006; Carroll and Isenman, 2012) (Figures 2A,B). Indeed, the roles of complement in the humoral antibody response have long been established. The first role involves acting as a molecular adjuvant, effectively lowering the threshold for B cell activation (Carter et al., 1988) (Figure 2B). Crucial to C3d's function in the B cell-mediated antibody response is its association with CR2. CR2 is part of the B cell co-receptor complex, together with CD19, CD81, and Leu13 (Matsumoto et al., 1991).
Upon simultaneous engagement of both the BCR and the B cell co-receptor complex, the antigen threshold for B cell activation is reduced by up to four orders of magnitude (Carter and Fearon, 1992; Dempsey et al., 1996). In addition, BCR and B cell co-receptor crosslinking also enhances antigen processing and presentation to T cells (Cherukuri et al., 2001). Thus, C3d is especially critical for B cell stimulation in conditions where antigen concentration is low, as is typically the case during pathogen clearance (Carroll and Isenman, 2012). In addition to C3d's direct role on B cells, it has been found that C3d-tagged immune complexes interact with FDCs to promote a more potent humoral response. Expression of CR1 and CR2 on FDCs promotes antigen reservoirs through binding and sequestering of C3d-coated immune complexes in lymphoid follicles (Fischer, 1998). These immune complexes are cycled through non-degradative endosomal compartments within FDCs, which preserve the antigen for long periods of time (Heesters et al., 2013). These events promote the development of GCs, which is critical for maintenance of memory B cell repertoires (Fischer, 1998). Finally, C3d opsonization extends the half-life of antigens in blood, further facilitating generation of adaptive immune responses (Bergmann-Leitner et al., 2006). Another mechanism of B cell regulation is mediated by C4BP. This regulator attaches to negatively charged moieties, including phospholipids of host membranes and other structures on bacterial surfaces (Blom et al., 2004). Beyond its complement inhibitory function, C4BP has been identified to specifically bind CD40, which is crucial for B cell activation and function. CD40, upon engagement with its cognate ligand CD40L, induces B cell proliferation, rescue of GC B cells from apoptosis, and CSR. Brodeur et al. (2003) found that C4BP, by binding to CD40, could drive B cell activation, similar to what is seen with the canonical CD40-CD40L interaction.
A more recent study suggested that C4BP can also form a stable complex with soluble CD40L, and this complex, upon association with CD40, provides signals that promote cell survival without influencing proliferation (Williams et al., 2007). However, the effect of this newly identified C4BP/CD40L complex was only observed for epithelial cell survival. Thus, future examination of these associations is required to fully elucidate the effects that C4BP has on lymphocyte biology and the adaptive immune response.

Intrinsic T Cell Regulation by Complement

Complement has long been regarded as a serum-restricted system whose more than 30 proteins could be synthesized only by the liver; however, this view is rapidly changing. It is now recognized that many disparate locations and tissues in the body have local sources of complement (for a more detailed discussion, see Kolev et al., 2014). Moreover, almost all cell types in the human body can produce complement proteins, and many of them even contain intracellular complement stores (Liszewski et al., 2013). Interestingly, both locally synthesized and intracellular complement have proven to be important for CD4+ T cell survival, proliferation, and differentiation (Heeger et al., 2005; Strainic et al., 2008; Liszewski et al., 2013; Kolev et al., 2015; Arbore et al., 2016). It was first identified by Heeger et al. (2005) that both naïve CD4+ T cells and their cognate APC partners can locally synthesize the AP complement components C3, FB, and FD, and it was later shown that C5 is also synthesized locally (Strainic et al., 2008). Local synthesis of these proteins was associated with a decrease in CD55, further enhancing AP activation (Heeger et al., 2005; Strainic et al., 2008). Interestingly, the AT receptors C3aR and C5aR were found to be upregulated on both the T cell and the APC during costimulation.
Additionally, C3aR/C5aR activation on T cells provided signals to both induce IL-12 receptor expression and produce the cytokines IL-12 and IFN-γ (Strainic et al., 2008) (Figure 2C). Given that IFN-γ supports the lineage commitment of naïve T cells to T helper 1 (Th1) cells, this suggests that autocrine AT receptor signaling is important in this process. In line with this notion, it was found that the absence of autocrine C3aR/C5aR signaling caused CD4+ T cells to commit to a Foxp3+ inducible regulatory T cell (iTreg) lineage (Strainic et al., 2012) (Figure 2C). Excitingly, a new study has linked autocrine AT receptor signaling in T cells to another innate immune pathway. This study highlighted the role of C5aR in NLRP3 inflammasome activation, and found that these associations were tied to Th1 induction (Arbore et al., 2016) (Figure 2C). Upon TCR ligation and CD46 costimulation, intracellular C5 was cleaved and C5aR1 provided signals to drive NLRP3 inflammasome activation, which resulted in Th1 induction (Arbore et al., 2016). Surface-restricted C5aR2 was found to be a negative regulator of this process, promoting a regulatory T cell phenotype (Figure 2C). Thus, this study further strengthens the notion that autocrine C3aR/C5aR signaling is vital in skewing T cell differentiation and lineage commitment. Many subsequent studies have further strengthened the role of complement in T cell survival, proliferation, and differentiation. One study highlighted the potential importance of the CD46 CYT-1 isoform in T cell regulation (Figure 2C). Coengagement of both TCR and CD46 in the presence of increasing IL-2 concentrations induces IL-10-producing regulatory T cells (Cardone et al., 2010). Another recent study showed that intracellular C3 is cleaved into biologically active C3a and C3b by the intracellularly expressed protease CTSL (Liszewski et al., 2013) (Figure 2C).
Together, these studies highlight the crucial role that local and intracellular complement plays in T cell biology, and they open the possibility that local complement is regulated differently from serum complement. Recently, studies have begun to unravel more unexpected roles for complement in driving fundamental T cell processes. Before activation, naïve CD4+ T cells are metabolically quiescent. To expand and differentiate, these naïve cells must undergo major metabolic reprogramming, increasing their nutrient uptake and engagement of metabolic pathways. Interestingly, autocrine C3b stimulation of CD46, and specifically its intracellular domain CYT-1, was important for this metabolic change and for production of the proinflammatory cytokines needed for Th1 differentiation (Kolev et al., 2015) (Figure 2C). This study speaks to the fundamental role that local complement plays in T cell biology. Remarkably, and in line with this notion, the Notch family member Jagged1 was identified as the third physiologically relevant ligand for CD46 (Le Friec et al., 2012). The authors proposed that CD46 could sequester Jagged1 away from Notch to induce IFN-γ-secreting Th1 cells. Altogether, these studies indicate that CRs (C3aR/C5aR, CD46) and their ligands are crucial regulatory components of T cell lineage commitment and survival.

APC-Mediated Regulation (Paracrine)

Complement also plays an important role at the immunological synapse between APCs and T cells (Figure 2D). APCs (i.e., DCs) have been shown to express many of the same complement components as T cells, which also function in a similar fashion (Peng et al., 2008; Li et al., 2012).
Extracellularly generated C3a/C5a was linked to increased expression of C3aR/C5aR on DCs, which increased both their activation [evidenced by increased expression of the costimulatory molecules CD86 and MHCII] and their costimulatory capacity (increased Th1 cytokine production) (Strainic et al., 2008; Li et al., 2012). Interestingly, blockade of the C3aR/C5aR signaling axis on DCs, such that extracellularly generated C3a/C5a could no longer bind, resulted in diminished costimulatory capacity, evidenced by a decrease in IFN-γ-producing cells (Strainic et al., 2008). Independently, it was observed that C5aR1-deficient, but not C5L2-deficient, DCs supported the induction of both Treg and Th17 lineages (Figure 2D). As in T cells, the two C5a receptors, and potentially C3aR, seem to have independent roles in DCs (Weaver et al., 2010). These studies suggest that local complement generation by DCs provides important paracrine effects for T cell stimulation. Beyond the AT receptor signaling axis, other complement regulators have also been shown to be important for DC differentiation/activation and the downstream T cell response. Complement activation is under tight control by both membrane-bound (CD46, CD55) and soluble (C4BP, FH) regulators. Two independent studies found that both FH and C4BP [the β-chain-lacking isoform (β−)] influence the early stages of monocyte-to-DC differentiation, and promote a tolerogenic/immature and anti-inflammatory DC phenotype (Olivar et al., 2013, 2016). Moreover, these DCs exhibited decreased production of proinflammatory Th1 cytokines and increased production of IL-10 and TGF-β, indicative of an anti-inflammatory response. Unsurprisingly, this resulted in impaired T cell proliferation and Th1 polarization, and instead induced a Foxp3+ Treg response (Olivar et al., 2016) (Figure 2D).
Together, these studies suggest a non-redundant role of complement in DC differentiation and the downstream T cell response, and indicate that the AP may not be the only complement pathway governing T cell/DC differentiation and APC-T cell costimulation. To add to this notion, C1q has been found to have a unique function in antigen presentation and T cell priming. It is well-established that phagocytes not only internalize pathogens, but also ingest apoptotic host cells, a process termed efferocytosis; both processes influence the cytokine response. Many studies have identified that C1q directly associates with both apoptotic DCs and macrophages (referred to as C1q-polarized APCs) (Figure 2D). This association leads to increased efferocytosis and suppression of proinflammatory cytokine production (Korb and Ahearn, 1997; Baruah et al., 2009; Teh et al., 2011; Clarke et al., 2015). These dynamics led to suppressed Th1 and Th17 cell proliferation. These studies suggest a unique function of the CP component C1q in APC and T cell costimulation, further supporting the notion that complement is truly a bridge between the adaptive and innate immune responses.

C1q Receptors gC1qR and cC1qR Mediate APC and T Cell Stimulation

C1q is central to the activation of the CP of complement and, as aforementioned, has a unique function in modulating APC-driven cytokine production and downstream T cell priming. Structurally, individual C1q molecules are composed of a globular head situated on the carboxy termini of the collagen stalk domain, and these molecules subsequently multimerize to form the prototypical hexameric “bouquet” superstructure (Sontheimer et al., 2005). Surprisingly, the functional relevance of the two co-expressed non-transmembrane cell surface-associated receptors, gC1qR (which interacts with the globular head of C1q) and cC1qR (which interacts with the collagen stalk of C1q), is still ambiguous (Frachet et al., 2015).
However, while their exact function remains controversial, studies suggest they may play a vital role in the adaptive immune response. We will now highlight our current understanding of these receptors and the potential role they play in driving the adaptive immune response. C1q, as previously mentioned, is a crucial driver of pro-efferocytic signals and downstream T cell priming, although controversy still surrounds which of the C1qRs is important for this function (Nayak et al., 2012; Frachet et al., 2015). The prevailing hypothesis is that a tripartite interaction among C1q, gC1qR on apoptotic cells, and cC1qR on the bystander phagocyte induces efferocytosis (Hosszu et al., 2012; Frachet et al., 2015) (Figure 2D). Recently it was shown that gC1qR, which is present on both immature DCs and blood precursor DCs, interacts with DC-SIGN (Hosszu et al., 2012). Because both C1qRs lack transmembrane domains and cannot signal on their own, they must recruit signaling partners; DC-SIGN is now postulated to be a transmembrane signaling partner of gC1qR. Interestingly, the authors suggested that DC-SIGN also directly interacts with C1q and, given C1q's role in DC differentiation, postulated that this trimolecular complex between C1q, gC1qR, and DC-SIGN might also influence DC differentiation (Hosszu et al., 2012). Additionally, gC1qR can also directly influence the cytokine response. Crosstalk between gC1qR and TLR4 resulted in a dampened IL-12 response, a cytokine important for IFN-γ production and subsequently Th1 cell proliferation and differentiation (Waggoner et al., 2005). Furthermore, C1q can directly bind to T cells via C1qRs and induce anti-proliferative effects (Chen et al., 1994). It has been suggested that gC1qR is pivotal in the regulation of the antiviral T cell response. Another study found that both C1qRs have distinct functions in the differentiation of monocytes to DCs.
Immature monocytes were found to have high levels of gC1qR and low levels of cC1qR, a phenotype that was reversed as the cells began to differentiate toward a DC lineage. The authors theorized that the C1q/C1qR system is crucial for regulating the transition from an innate monocyte state to that of a professional APC (Hosszu et al., 2010). Thus, we are still working to understand the role that gC1qR plays in driving acquired immunity.

Crosstalk between Complement and Toll-Like Receptors

Toll-like receptors and CRs are both critical to the innate immune response and are co-expressed on leukocytes. Thus, it is not surprising that crosstalk between these classes of receptors has been demonstrated. Recent evidence supports a role for CR-TLR crosstalk in instructing the adaptive immune response (Hajishengallis and Lambris, 2010, 2016). Mechanistically, this crosstalk has largely been observed at the MAPK level, specifically through the ERK1/ERK2 pathway (Hajishengallis and Lambris, 2010). It is becoming increasingly clear that the AT receptors have a multifaceted role in guiding the host immune response, one that reaches beyond their canonical function in complement. As mentioned above, these receptors are crucial in antigen presentation and T cell survival/response. Interestingly, the AT receptors have also been found to influence TLR-driven cytokine production and downstream T cell responses. In particular, C5aR, depending on the immune cell type, has been shown to have paradoxical effects on the TLR cytokine response. On DCs, it was identified that C5aR-TLR2 crosstalk could synergistically enhance TLR-driven IL-12 production, which resulted in an increased Th1 response (Weaver et al., 2010). Conversely, other studies have provided evidence that C5aR-TLR2 and C5aR-TLR4 crosstalk on macrophages inhibits the IL-12 family of cytokines, specifically IL-12p70, thereby diminishing the Th1 response (Figure 2D).
Thus, the immunological consequence of AT receptor-TLR crosstalk seems largely dependent on the type of APC. Crosstalk between CR3 and TLRs has also been identified, which likewise influences the TLR cytokine response and the quality of the adaptive immune response. Much like what was seen with C5aR, CR3, when co-activated with either TLR2 or TLR4, leads to suppressed IL-12 (IL-12p70) expression and accordingly a diminished Th1 response (Figure 2D). However, unlike C5aR, TLRs are able to activate CR3 through a cellular process termed “inside-out” signaling. Although slightly different pathways are used, this has been verified for both TLR2-CR3 and TLR4-CR3 activation: CR3 activation by TLR2 seems to be dependent on PI3K, whereas TLR4-driven CR3 activation is driven by MyD88. Engagement of CR3 alone by iC3b-coated cells has also been shown to downregulate IL-12 production (Marth and Kelsall, 1997). Additionally, gC1qR-TLR crosstalk has also been demonstrated, with effects on IL-12 expression redundant with those of C5aR- and CR3-TLR2/4 crosstalk. Taken together, these findings suggest yet another strong tie between complement and adaptive immunity.

Complement Evasion by Pathogens

While complement offers a potent immunological barrier, pathogenic organisms have developed counterstrategies to elude harmful responses. Pathogenic complement evasion has been extensively studied, and countless mechanisms and examples are described in the literature (reviewed in Lambris et al., 2008; Stoermer and Morrison, 2011; Ricklin, 2012; Zipfel et al., 2013; Merle et al., 2015b; Garcia et al., 2016). These mechanisms include secretion of complement inhibitory molecules, recruitment of host complement regulators, and proteolytic cleavage of complement effector molecules.
Most examples of complement evasion have been studied in the context of innate immune blockade; however, given the close ties between complement and adaptive immunity, it is likely that these mechanisms also modulate B and T cell responses. In the remainder of this review, we highlight specific examples of pathogenic evasion of complement-mediated adaptive immunity (Table 1; Figure 1). Based on these examples, we also postulate possible new roles for known evasion molecules.

Blocking B Cell Immunity

C3d is the smallest of the C3 fragments that opsonize antigens and target cells, and it plays a number of critical roles in directing adaptive immunity, as described above. Interestingly, C3d is a “molecular hub” of sorts, binding not only host complement regulators and receptors, but also a multitude of immune evasion molecules. While many of these molecules have been studied extensively, most work focuses on their role in inhibiting C3 and C5 cleavage, and the resulting direct complement inhibitory functions. However, some studies show that blockade of C3d can lead to impaired adaptive immune responses. Staphylococcus aureus is a Gram-positive bacterium that colonizes a large proportion of the human population. However, this bacterium can also become pathogenic, and is responsible for severe infections. S. aureus is considered a master of immune evasion. Indeed, dozens of virulence factors have been identified, which influence innate immunity through inhibition of neutrophil chemotaxis and intracellular killing mechanisms, phagocytosis, complement, and TLR signaling, and also by directly killing host cells. S. aureus also modulates adaptive immune responses directly, using superantigens that crosslink TCRs and MHCII molecules and hyperactivate T cell responses (Thammavongsa et al., 2015; Koymans et al., 2016). In particular, complement evasion is arguably best understood in the case of S.
aureus, in which numerous evasion molecules exert control over several points in the complement cascade. Despite their well-characterized roles in complement evasion, it remains unclear whether most of these molecules influence complement-mediated adaptive immunity. Notably, two homologous virulence factors from S. aureus, Efb-C and Ecb (also known as Ehp), influence B cell immunity. Initially, these molecules were described to bind nearly all thioester-containing C3 proteins and inhibit the complement AP (Lee et al., 2004; Hammel et al., 2007a,b; Jongerius et al., 2007). A follow-up study demonstrated that both proteins can also inhibit the interaction between C3d and CR2, and prevent B cell stimulation, which may act as another evasion strategy for S. aureus (Ricklin et al., 2008). Additionally, there is a case for the S. aureus protein Sbi having a similar role, as it also competitively inhibits the C3d-CR2 interaction (Burman et al., 2008; Isenman et al., 2010). Other pathogens produce C3d-binding proteins as well. The CSP from Plasmodium berghei can bind to C3d, preventing production of antibodies against the antigen and reducing protective immunity during the associated malaria infection (Bergmann-Leitner et al., 2005). The mechanism of antibody modulation was not elucidated, but it is presumed that CSP also blocks the C3d-CR2 interaction, interfering with stimulation of B cells in the lymphoid follicle. Candida albicans expresses MP60 and mannoproteins that bind C3d and mediate binding to host cells (López-Ribot et al., 1995; Stringaro et al., 1998); however, the functional consequences for adaptive immunity remain unclear. Some pathogens interfere with complement-mediated B cell immunity by modulating events in the complement cascade upstream of C3 deposition. HSV-1 expresses a glycoprotein (gC-1) that binds to C3b and blocks binding sites for C5 and properdin (Kostavasili et al., 1997).
In addition, gC-1 can mediate decay of C3 convertase enzymes, limiting the amount of complement opsonization on the viral surface (Fries et al., 1986). While this protein is described to inhibit complement-mediated neutralization of HSV-1 (Lubinski et al., 2002), another study with C3- and CR1/2-deficient mice showed a reduced IgG response to the virus, suggesting that complement is critical in the development of the adaptive immune response against HSV-1 (IgG and GCs) (Da Costa et al., 1999). Thus, gC provides an important evasion strategy for HSV-1 to escape complement-driven adaptive immunity. HCV influences complement activation via distinct mechanisms. The HCV core protein and NS5A target USF-1 and IRF-1, leading to inhibited C4 mRNA transcription in infected hepatocytes and diminished C4 levels in mice (Banerjee et al., 2011). A follow-up study showed that NS5A also inhibits C3 mRNA transcription. Indeed, human patients with HCV showed low C3 levels in serum, and biopsies showed lower C3 mRNA levels in the liver (Mazumdar et al., 2012). Another recent study showed that the HCV core protein induced soluble CD55 expression, which limits complement activation in infected hepatocytes (Kwon et al., 2016). Each of these mechanisms can inhibit viral opsonization and in turn may inhibit complement-mediated B and T cell responses. Finally, Wang et al. (2016) demonstrated that HCV binds to B cells via interactions between C3 fragments and either CR1 or CR2. Thus, by inhibiting opsonization, HCV could escape recognition by B cells. Another possible mechanism was proposed for the S. aureus protein SCIN, which mediates dimerization of C3b molecules and AP C3 convertases on the bacterial surface. In doing so, SCIN effectively masks the binding sites for CR1 and CRIg, preventing phagocytic responses and cleavage of C3b to iC3b (Jongerius et al., 2010). In turn, SCIN likely inhibits CR-mediated immune adherence and trafficking, B cell stimulation, and antigen presentation.
In contrast to C3d-CR2 blockade, viruses can also exploit this interaction to promote their survival and pathogenesis. HIV, for example, carefully balances complement activation and regulation in order to capitalize on the links between complement and B cell immunity. While HIV virions indeed activate complement (Sullivan et al., 1998), the virus incorporates membrane-bound complement regulators from the infected host cell, including CD46, CD55, and CD59 (Montefiori et al., 1994). Additionally, the HIV glycoproteins gp41 and gp120 recruit FH to the viral surface (Stoiber et al., 1995). These regulators limit the amount of C3 deposition on the virus and prevent direct lysis by the MAC. During infection, HIV disseminates through the bloodstream, becomes opsonized with C3b, and encounters erythrocytes bearing CR1 (Horakova et al., 2004). These erythrocytes then bind and transport complement-opsonized virions to secondary lymphoid organs (Schifferli et al., 1988). Meanwhile, CR1 can also act as a cofactor for FI-mediated cleavage of C3b to iC3b and eventually C3d, which facilitates transfer of HIV from erythrocytes to B cells and FDCs bearing CR2. HIV leverages these interactions to promote maintenance of extracellular viral reservoirs in the lymphoid follicle, where virions are captured on CR2-expressing FDCs (Bergmann-Leitner et al., 2006). Several lines of evidence demonstrate that intact HIV virions can be maintained for over 6 months in GCs (Cavert et al., 1997). Although these virions are “trapped” and opsonized, they still retain infectivity once released (Banki et al., 2005). Most importantly, the high concentration of HIV in the lymphoid follicle promotes infection of CD4+ T cells, which is critical for the pathogenesis of the virus (Stoiber et al., 2008).

Inhibiting T Cell Immunity through CR Engagement

CRs play a critical role in T cell proliferation and differentiation, as described in detail above.
Thus, many pathogens have also evolved strategies to engage these receptors in order to modulate T cell immunity in their favor. As described above, HCV produces multiple proteins that modulate levels of complement proteins in infected hepatocytes. In particular, the HCV core protein inhibits C3 mRNA transcription while upregulating soluble CD55 production. Another important role of the HCV core protein is inhibition of T cell proliferation through its interaction with gC1qR on DCs, B cells, and T cells (Kittlesen et al., 2000). Binding of HCV to gC1qR on activated T cells decreases IL-2 and IFN-γ production (Yao et al., 2001a). Engagement of gC1qR on DCs inhibited IL-12 production and promoted a shift from a Th1 to a Th2 response (Waggoner et al., 2007). A later study showed that patients with chronic HCV infection had persistently elevated levels of gC1qR+ CD4+ T cells, which increased susceptibility to viral evasion via the HCV core protein (Cummings et al., 2009). Other viruses, including HIV, EBV, and HSV, are also known to engage gC1qR, and may induce similar effects on T cell immunity (Yao et al., 2001b; Fausther-Bovendo et al., 2010). Like HCV, HIV can also engage gC1qR. HIV expresses glycoprotein gp41 on its surface, which, during fusion with the membrane of infected CD4+ T cells, can interact with gC1qR on uninfected T cells. This interaction induces expression of NKp44L, a ligand for the cytotoxicity receptor NKp44 on NK cells. This mechanism causes NK cells to selectively deplete uninfected CD4+ T cells during HIV pathogenesis (Fausther-Bovendo et al., 2010). Numerous other pathogens produce molecules that bind gC1qR, which may enable them to subvert undesired T cell responses. For instance, S. aureus produces SpA, a protein known to bind many ligands and to contribute to the virulence of numerous strains.
Perhaps its most well-known function is the binding of IgG molecules via their Fc region, in order to avoid bacterial opsonization and subsequent phagocytosis. SpA was also reported to bind gC1qR on platelets, and while the exact physiological role of this interaction is unclear, it may be involved in adherence of S. aureus to sites of vascular injury (Nguyen et al., 2000). Based on the aforementioned evasion mechanisms, it is plausible that S. aureus uses SpA to bind gC1qR on T cells, in order to downregulate IL-12 and prevent a Th1 response. In addition, Listeria monocytogenes, another infectious bacterium, expresses a gC1qR-binding protein called InlB. The bacterium utilizes InlB to enter gC1qR-expressing mammalian cells, facilitating intracellular uptake and survival of the bacterium. InlB may also allow specific entry of L. monocytogenes into CD4+ T cells expressing gC1qR, simultaneously escaping an aggressive Th1 response (Braun et al., 2000). Aside from gC1qR, CD46 is another CR present on T cells that plays an important role in modulating the T cell response. Group A Streptococcus (Streptococcus pyogenes), a Gram-positive bacterial pathogen, possesses numerous immune evasion mechanisms, similar to S. aureus. S. pyogenes strains are characterized by the expression of distinct M proteins that protrude from and mask the bacterial surface. Different M serotypes bind a wide variety of host proteins and confer bacterial resistance to immune responses. A recent study showed that several M serotypes can bind directly to CD46 on human CD4+ T cells, resulting in the induction of IL-10-secreting regulatory T cells upon costimulation with an anti-CD3 antibody. These data suggest that M protein, by exploiting the immunomodulatory function of CD46, could delay the effector T cell response and allow S. pyogenes to further establish infection (Price et al., 2005).
Several additional bacteria and viruses, including Measles virus, HHV-6, Escherichia coli, and Neisseria, bind CD46 and influence the cytokine profiles of APCs and T cells, but it remains unclear whether these pathogens engage and modulate T cells directly (Cattaneo, 2004; Kemper and Atkinson, 2009). Measles virus hemagglutinin is known to bind CD46, which results in IL-12 downregulation in primary monocytes and DCs (Karp et al., 1996; Marie et al., 2001). In a later study, transgenic mice expressing either of the two human CD46 molecules (with differing cytoplasmic tails) were injected with vesicular stomatitis virus expressing measles virus hemagglutinin. Mice expressing CD46-1 showed enhanced IL-10 production and an immunosuppressive response, while those expressing CD46-2 exhibited an enhanced inflammatory response (Marie et al., 2002). These results are further complicated by the observation that Measles virus downregulates CD46 on T cells in vivo and preferentially engages an alternate receptor (Yanagi et al., 2006). Thus, the role of CD46 in Measles virus pathogenesis remains somewhat unclear. Mycobacterium leprae, the pathogen responsible for leprosy, engages CD46 using a slightly different mechanism. It has long been known that M. leprae negatively affects DC maturation and the downstream T cell response, but how this occurs remained elusive due to a lack of appropriate models. However, using non-virulent M. bovis BCG, it was identified that PGL-1, which had been suspected to influence the pathogenesis of M. leprae, was the culprit. In fact, PGL-1 was able to exploit CR3 to allow entry of BCG into DCs, which subsequently led to dampened DC maturation (Tabouret et al., 2010). A parallel study found that PGL-1 was exposed on the surface of infected DCs and in lipid rafts of T cells, where it was able to bind C3. Interestingly, CD46 on T cells was able to recognize this C3-PGL-1 complex on M.
leprae-infected human DCs, resulting in the differentiation of IL-10-producing Tregs (Callegaro-Filho et al., 2010). Together, these studies provide evidence that mycobacteria, by associating with various complement components, can directly affect the outcome of the adaptive immune response. In addition, a new study found that FH plays a defining role in the internalization process of M. bovis BCG in macrophages. The authors found that FH, by directly binding to M. bovis BCG, could affect the uptake of M. bovis BCG and thereby alter the cytokine response. While it remains unclear how FH binding might affect the outcome of the adaptive immune response, it is clear that mycobacteria affect multiple parts of the complement cascade, leading to altered T cell response (Abdul-Aziz et al., 2016). In addition to the mechanisms outlined, CD46-mediated T cell regulation could be a general evasion mechanism for pathogens. Upon complement activation, pathogens become opsonized by C3b, which can bind CD46 on T cells. In combination with CD3 costimulation, this event could help drive IL-10 induction and regulatory T cell response against these microbes. Furthermore, complement-opsonized pathogens that are internalized by T cells could achieve a similar response by engaging intracellular CD46.

Indirectly Modulating T Cell Immunity via APCs

In the last decade, numerous links between complement activation and TLR signaling have been uncovered, which can drive adaptive immune responses (Hajishengallis and Lambris, 2010). Likewise, pathogens have developed strategies to evade these immune mechanisms (Hajishengallis and Lambris, 2011). Porphyromonas gingivalis is a Gram-negative bacterium that is known to cause dysbiosis within the periodontal microbiome. This bacterium is a prime example of how exploitation of CR-TLR crosstalk can direct adaptive immunity. P. gingivalis produces two enzymes, HRgpA and RgpB, which directly cleave C5 into C5a and C5b.
The resulting C5a can then engage C5aR1 while the bacterium simultaneously binds TLR2 (Wang et al., 2010), and the C5aR1-TLR2 crosstalk can influence cytokine profiles through different signaling cascades. Interestingly, the functional role of these interactions is entirely different depending on the cell type involved. In macrophages, C5aR1-TLR2 crosstalk can selectively inhibit IL-12 and IFN-γ production through induction of PI3K and ERK1/2 signaling. This evasion mechanism allows the pathogen to induce a customized adaptive immune response, preventing Th1 induction (Liang et al., 2011). At the same time, this signaling increases cAMP production and downregulates antimicrobial nitric oxide production. However, in neutrophils, C5aR1-TLR2 signaling mediates proteasomal degradation of MyD88 and activation of an alternate signaling pathway, which inhibits phagocytosis but maintains inflammatory responses to propagate periodontal dysbiosis (Maekawa et al., 2014). In contrast, C5aR1 signaling mediates bacterial killing in DCs. This evidence indicates that the pathogen has tailored leukocyte evasion to its environment, where it primarily encounters neutrophils and macrophages, but not DCs (Hajishengallis and Lambris, 2016). The mechanisms underlying the variable signaling are still unclear. Porphyromonas gingivalis also utilizes an additional mechanism to influence IL-12 production via CR-TLR crosstalk. The fimbrial proteins, expressed on the bacterial surface, engage CR3 and mediate internalization and intracellular survival within macrophages, as well as reduced IL-12 production. This mechanism was shown to be dependent on TLR2, which also binds fimbrial proteins and induces inside-out signaling that activates CR3 (Wang et al., 2007).
Experiments in mice showed that blockade of CR3 restored IL-12-mediated pathogen clearance, thus demonstrating that this mechanism promotes pathogen survival and escape from the adaptive immune response (Hajishengallis et al., 2007). In addition, numerous other pathogens can bind both CRs and TLRs to infect host cells, inhibit IL-12 production, and direct T cell immunity (Hajishengallis and Lambris, 2011, 2016). Bacillus anthracis spores infect professional phagocytes in their host to promote their growth and survival. The outer layer of B. anthracis spores contains a glycoprotein, BclA, which mediates cell internalization via CR3 (Oliva et al., 2008). Since CR3 must be activated in order to facilitate internalization, it was unclear how BclA leveraged this receptor to invade host cells. Interestingly, a subsequent study found that BclA also binds CD14, which induces TLR2-mediated inside-out signaling to activate CR3 and internalize B. anthracis spores (Oliva et al., 2009). Francisella tularensis strain Schu S4 also subverts the immune response via CR3 and TLR2. C3-opsonized F. tularensis is internalized by macrophages via CR3 and, through outside-in signaling mechanisms, blocks TLR2-mediated proinflammatory cytokine production (Dai et al., 2013). Bordetella pertussis FHA inhibits IL-12 production and binds CR3, but it remains unclear whether these events are connected (Ishibashi et al., 1994; McGuirk and Mills, 2000). Another interesting CR-TLR evasion mechanism was recently uncovered for HIV, in which the complement-opsonized virus engages CR3 on the surface of DCs and TLR8 within them. The CR3-TLR8 signaling crosstalk led to reduced antiviral and inflammatory cascades, while promoting viral transcription and replication (Ellegård et al., 2014). Whether or not these CR-TLR crosstalk events direct T cell responses requires further investigation.
It is possible that coengagement of CRs and TLRs represents a general strategy of immune modulation, since most opsonized microbes are likely to engage both simultaneously. The question remains as to whether these events trigger pathways involved in clearance or killing, or whether they promote pathogen survival. In the case of P. gingivalis, for example, C5aR1-TLR2 coengagement can induce many different immunological outcomes depending on cell type. Thus, further investigation of these signaling mechanisms may be critical for understanding how pathogens exploit CR-TLR crosstalk. One related evasion strategy was recently proposed for S. aureus. As described above, activation of both TLR2 and C5aR1 drives Th1 response in splenic DCs. S. aureus expresses both TLR2 inhibitors (e.g., SSL3) and C5a inhibitors (e.g., CHIPS), which can inhibit TLR2 and C5aR1 signaling in splenic DCs and thereby shift T cell response toward Th17 (Weaver et al., 2010). It has also been established that microbes use CR3 to promote immunologically silent entry into host cells (Marth and Kelsall, 1997). Many pathogens cleave C3b to iC3b on their surface, which promotes binding to CR3. This is achieved through recruitment of host complement regulators, host proteases, or secretion of endogenously expressed proteases (Potempa and Potempa, 2012). Some strains of the eukaryotic parasite Leishmania produce a glycoprotein (gp63) on the surface of the parasite that can cleave C3b into the inactive form iC3b, resulting in the inhibition of convertase formation and terminal complement activation (Brittingham et al., 1995). Association of iC3b with CR3 on APCs inhibits IL-12 production (Marth and Kelsall, 1997) and, although not directly shown, likely results in an altered T cell response (Da Silva et al., 1989). As described above, HIV also exploits CR3 to infect immune cells while avoiding harmful inflammatory cascades.
To improve the chances of CR3 recognition, the HIV glycoproteins gp120 and gp41 can recruit FH and mediate C3b breakdown to iC3b, and gp41 may even engage CR3 directly (Stoiber et al., 1995).

Other Mechanisms of Complement Inhibition

There are several examples in which pathogens inhibit complement activation and influence T cell immunity through mechanisms that are not fully understood. Poxviruses are a large family of viruses (69 members), including smallpox virus and vaccinia virus, which employ novel strategies to evade complement-mediated recognition and clearance. These viruses express complement regulator mimics, which are composed of up to four CCP domains. These proteins bind C3b and C4b in a similar manner to host complement regulatory proteins, promoting convertase decay and opsonin cleavage (Ojha et al., 2014). VCP from vaccinia virus is one example that, in addition to its direct complement regulatory role, can inhibit CD4+ and CD8+ T cell responses. In an intradermal infection model, mice infected with a VCP-knockout strain of vaccinia virus had increased numbers of CD4+ and CD8+ T cells at the site of infection and exhibited an increased T-dependent antibody response, compared to infection with the wild-type virus. Interestingly, no difference in T cell responses was observed in C3-/- mice, indicating that the role of VCP in inhibition of T cell responses is complement dependent. The authors of this study suggest that functionally homologous molecules from other poxviruses may play a similar role in viral evasion of T cell immunity (Girgis et al., 2011). Poxviruses are not the only family of viruses that evades complement-mediated adaptive responses; flaviviruses do as well. Flaviviruses are positive-stranded RNA viruses and include viruses that remain deadly today, such as dengue virus. One strong example of the mechanisms used by flaviviruses to evade the host immune defense comes from WNV.
Adaptive immune response against WNV is dependent on all three complement pathways. In particular, the AP drives CD4+ and CD8+ T cell responses directly, without affecting antibody titers, as demonstrated in FB-/- mice (Mehlhop and Diamond, 2006). NS1 from WNV, like NS1 from other flaviviruses, binds FH and facilitates FI-mediated cleavage of C3b (Chung et al., 2006). Subsequent studies showed that NS1 can form a complex with both C1s and C4, causing rapid fluid-phase consumption of C4 and preventing C4b deposition on the viral surface (Avirutnan et al., 2010). Finally, NS1 can also bind C4BP and promote cleavage of C4b, blocking C3 activation via the CP and LP (Avirutnan et al., 2011). Additionally, mice lacking C3 or CR1/CR2 genes exhibited suppressed humoral immunity against WNV (Mehlhop et al., 2005). This multifaceted blockade of the complement cascade not only prevents direct complement-mediated viral clearance, but would likely also inhibit T cell immunity critical for viral clearance, since these responses are dependent on C3 and the AP (Suthar et al., 2013). Although direct evidence has only been shown for WNV, it is possible that NS1 proteins from other flaviviruses use a similar mechanism to evade complement-mediated adaptive immune responses. Thus, future studies should focus on the role of NS1 from other deadly flaviviruses in directing the host immune defense. Other viruses secrete molecules that directly block complement activation, which may dampen adaptive immune responses. For example, the M1 protein from influenza virus blocks the interaction between IgG and C1q, thereby inhibiting CP activation of complement (Zhang et al., 2009). Independent studies showed that blockade of C5aR in an influenza infection model abrogated CD8+ T cell response (Kim et al., 2004). Furthermore, mice deficient in C3 exhibited diminished migration of CD4+ and CD8+ T cells during influenza infection, resulting in delayed viral clearance and increased viral titers.
CR1/CR2 deficiency had no effect on this phenomenon, indicating that it is not driven by complement-mediated B cell stimulation (Kopf et al., 2002). Thus, M1, by inhibiting CP complement activation, can prevent both complement-mediated and T cell-mediated clearance of influenza virus. A recent groundbreaking study demonstrated an intracellular role of C3, independent of complement activation. When pathogens tagged extracellularly with C3 (via the canonical complement activation cascades) are internalized, the C3 products are detected intracellularly, triggering NF-κB, IRF-3/5/7, and AP-1 signaling cascades that lead to the release of proinflammatory cytokines and proteasomal degradation of the infecting viruses (Tam et al., 2014). Certain viruses (e.g., rhinovirus and poliovirus) encode proteases that can degrade opsonizing C3 fragments, promoting their intracellular survival. Overall, it is likely that complement inhibition or modulation on any level will influence adaptive immune responses against pathogenic organisms. We have described numerous mechanisms by which regulation of opsonization, promotion of complement cleavage, interference with CR signaling, and direct engagement of CRs can alter B and T cell responses. Further studies are required to understand the roles of the diverse array of immune evasion molecules in the adaptive fitness of their respective organisms. Although complement was once regarded as a self-contained effector arm of innate immunity, its collaboration with other immunological phenomena is continually emerging. Indeed, complement has long been known to bridge innate and adaptive immunity through the C3d-CR2 interaction, mediating B cell activation, antigen presentation, and generation of immunological memory. More recently, complement has been implicated in driving T cell immunity, both through modulating cytokine profiles in APCs and through direct engagement with receptors on and inside T cells.
While our view of complement has expanded dramatically, we have likely just scratched the surface. As crosstalk between the different branches of immunity continues to emerge, many additional roles of complement are waiting to be discovered. Complement evasion is well-documented among microorganisms. Indeed, many immune evasion molecules have been discovered, and their mechanisms of complement modulation are now clearly understood. In light of the numerous links between complement and adaptive immunity, the question arises as to whether complement evasion molecules modulate B and T cell responses. In general, there is a relative lack of studies describing how pathogens exploit complement-mediated adaptive immunity. This is due, in large part, to the infancy of the field: many of these phenomena were discovered within the last 5–10 years. Additionally, the complexity of these immune processes makes them difficult to study. Although some complement evasion molecules are linked to altered T cell responses, determining the underlying molecular mechanisms is challenging. Recent advancements in sequencing, mouse, and cytometric technologies afford the opportunity to address these more complex and fundamental questions regarding complement and T cell biology. However, the species specificity of many immune evasion molecules makes it difficult to study their effects in mouse models. Finally, there is the notion that bacterial immunity is primarily handled by innate effector functions. Indeed, while viral clearance is primarily T cell mediated, bacteria are recognized in the extracellular milieu by pattern recognition molecules and receptors, which mount a rapid and sometimes aggressive innate immune response against the invading pathogen. These responses often also promote adaptive immune responses and the generation of immunological memory, but have historically been considered secondary to innate immunity.
However, antibodies are often essential for efficient complement activation on bacteria. Accordingly, while bacteria have huge arsenals of virulence factors targeting innate immune components (including complement), direct evasion of adaptive immunity seems less prevalent, though it does exist (e.g., the superantigens of S. aureus). With the continuing discovery of diverse roles of complement in directing adaptive immunity, and new tools available to study interactions of complement and leukocytes, it is likely that many bacterial complement inhibitors also block B and T cell responses. Despite its important role in controlling microbial infections, complement is also implicated in numerous autoimmune and inflammatory conditions. Excessive and uncontrolled complement activation plays a role in many diseases, either directly (e.g., aHUS, PNH) or indirectly (e.g., RA, SLE, organ transplantation) (Ricklin and Lambris, 2013; Morgan and Harris, 2015). This activation can also drive adverse adaptive immune responses. In SLE, for example, complement activation on damaged or apoptotic cells may lead to generation of autoreactive antibodies promoted by C3d-CR2 B cell stimulation (Holers, 2014). Additionally, recent evidence has shown that complement drives both B and T cell responses during transplantation (Sacks and Zhou, 2012). Thus, microbial evasion molecules may hold promise for directing complement-mediated adaptive immune responses for treatment of a new array of diseases that were previously considered not amenable to complement modulatory strategies. Conversely, the power of complement can be harnessed to promote favorable adaptive immune responses. Since the discovery of its adjuvant potential, C3d has been exploited for development of more potent vaccines (Toapanta and Ross, 2006). Furthermore, there are possibilities to leverage known mechanisms of complement-mediated T cell immunity for treatment of infectious diseases and cancer.
The physiological role of complement has greatly expanded in recent years. It is now accepted that complement is a crucial mediator of adaptive immune responses. In addition to its long-known role in regulating B cell immunity via C3d-CR2, more recent work has established a multifaceted approach by which complement drives T cell responses. These mechanisms include direct engagement of complement activation products with CRs on T cells, indirect regulation through APC engagement, and modulation of cytokine profiles through CR-TLR crosstalk. It is only natural that pathogens, in their struggle for survival, have developed strategies to overcome these immune pathways. While numerous evasion mechanisms of complement-mediated adaptive immunity have been characterized, it is likely that many other evasion strategies remain undiscovered, including those mediated by known immune evasion molecules. A better understanding of pathogenic modulation of complement-mediated adaptive immunity is a prerequisite to capitalizing on the therapeutic potential of immune evasion molecules. KB, SR, and RG conceived the concept for this review article. KB and RG wrote the manuscript. KB, SR, and RG read, edited, and reviewed the manuscript.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. This work was financially supported by a European Research Council Starting Grant (no. 639209) to SR, and a Marie Sklodowska-Curie research fellowship (no. 659633) to RG. aHUS, atypical hemolytic uremic syndrome; AP, alternative pathway; APC, antigen presenting cell; AT, anaphylatoxin; B. anthracis, Bacillus anthracis; B.
pertussis, Bordetella pertussis; BclA, Bacillus collagen-like protein of anthracis; BCR, B cell receptor; C4BP, C4b-binding protein; cC1qR, collagen C1q receptor; CD46, membrane cofactor protein; CD55, decay accelerating factor; CCP, complement control protein; CP, classical pathway; CR, complement receptor; CRIg, complement receptor of immunoglobulin superfamily; CSP, circumsporozoite protein; CSR, class switch recombination; CTSL, cathepsin L; DC, dendritic cell; EBV, Epstein-Barr virus; Ecb, extracellular-complement binding protein; Efb-C, extracellular fibrinogen-binding protein; ERK, extracellular signal-regulated kinase; F. tularensis, Francisella tularensis; FB, factor B; FD, factor D; FH, factor H; FHA, filamentous hemagglutinin; FI, factor I; gC1qR, globular C1q receptor; g, gp, or GP, glycoprotein; GC, germinal center; HCV, hepatitis C virus; HHV-6, human herpes virus 6; HRgpA and RgpB, arginine-specific gingipain; HSV, herpes simplex virus; InlB, internalin B; IRF-1, interferon regulatory factor 1; L. monocytogenes, Listeria monocytogenes; LP, lectin pathway; M. bovis BCG, Mycobacterium bovis Bacillus Calmette-Guerin; M. leprae, Mycobacterium leprae; MAC, membrane attack complex; MAPK, mitogen-activated protein kinase; MASPs, MBL-associated serine proteases; MBL, mannose-binding lectin; NS, non-structural protein; P. gingivalis, Porphyromonas gingivalis; PGL-1, phenolic glycolipid 1; PI3K, phosphoinositide 3-kinase; PNH, paroxysmal nocturnal hemoglobinuria; RA, rheumatoid arthritis; S. aureus, Staphylococcus aureus; S. pyogenes, Streptococcus pyogenes; Sbi, Staphylococcus binder of IgG; SCIN, staphylococcal complement inhibitor; SLE, systemic lupus erythematosus; SpA, staphylococcal protein A; TCR, T cell receptor; Th, T helper cell; TLR, Toll-like receptor; Treg, regulatory T cell; USF-1, upstream stimulating factor 1; VCP, vaccinia control protein; WNV, West Nile virus. Abdul-Aziz, M., Tsolaki, A. G., Kouser, L., Carroll, M. V., Al-Ahdal, M. N., Sim, R.
B., et al. (2016). Complement factor H interferes with Mycobacterium bovis BCG entry into macrophages and modulates the pro-inflammatory cytokine response. Immunobiology 221, 944–952. doi: 10.1016/j.imbio.2016.05.011 Arbore, G., West, E. E., Spolski, R., Robertson, A. A. B., Klos, A., Rheinheimer, C., et al. (2016). T helper 1 immunity requires complement-driven NLRP3 inflammasome activity in CD4+ T cells. Science 352, aad1210. doi: 10.1126/science.aad1210 Avirutnan, P., Fuchs, A., Hauhart, R. E., Somnuke, P., Youn, S., Diamond, M. S., et al. (2010). Antagonism of the complement component C4 by flavivirus nonstructural protein NS1. J. Exp. Med. 207, 793–806. doi: 10.1084/jem.20092545 Avirutnan, P., Hauhart, R. E., Somnuke, P., Blom, A. M., Diamond, M. S., and Atkinson, J. P. (2011). Binding of flavivirus nonstructural protein NS1 to C4b binding protein modulates complement activation. J. Immunol. 187, 424–433. doi: 10.4049/jimmunol.1100750 Banerjee, A., Mazumdar, B., Meyer, K., Di Bisceglie, A. M., Ray, R. B., and Ray, R. (2011). Transcriptional repression of C4 complement by hepatitis C virus proteins. J. Virol. 85, 4157–4166. doi: 10.1128/JVI.02449-10 Banki, Z., Kacani, L., Rusert, P., Pruenster, M., Wilflingseder, D., Falkensammer, B., et al. (2005). Complement dependent trapping of infectious HIV in human lymphoid tissues. AIDS 19, 481–486. doi: 10.1097/01.aids.0000162336.20439.8d Baruah, P., Dumitriu, I. E., Malik, T. H., Cook, H. T., Dyson, J., Scott, D., et al. (2009). C1q enhances IFN-γ production by antigen-specific T cells via the CD40 costimulatory pathway on dendritic cells. Blood 113, 3485–3493. doi: 10.1182/blood-2008-06-164392 Bergmann-Leitner, E. S., Leitner, W. W., and Tsokos, G. C. (2006). Complement 3d: from molecular adjuvant to target of immune escape mechanisms. Clin. Immunol. 121, 177–185. doi: 10.1016/j.clim.2006.07.001 Bergmann-Leitner, E. S., Scheiblhofer, S., Weiss, R., Duncan, E. H., Leitner, W. W., Chen, D., et al. (2005).
C3d binding to the circumsporozoite protein carboxy-terminus deviates immunity against malaria. Int. Immunol. 17, 245–255. doi: 10.1093/intimm/dxh205 Blom, A. M., Villoutreix, B. O., and Dahlback, B. (2004). Complement inhibitor C4b-binding protein—friend or foe in the innate immune system? Mol. Immunol. 40, 1333–1346. doi: 10.1016/j.molimm.2003.12.002 Braun, L., Ghebrehiwet, B., and Cossart, P. (2000). gC1q-R/p32, a C1q-binding protein, is a receptor for the InlB invasion protein of Listeria monocytogenes. EMBO J. 19, 1458–1466. doi: 10.1093/emboj/19.7.1458 Brittingham, A., Morrison, C. J., McMaster, W. R., McGwire, B. S., Chang, K. P., and Mosser, D. M. (1995). Role of the Leishmania surface protease gp63 in complement fixation, cell adhesion, and resistance to complement-mediated lysis. J. Immunol. 155, 3102–3111. Broadley, S. P., Plaumann, A., Coletti, R., Lehmann, C., Wanisch, A., Seidlmeier, A., et al. (2016). Dual-track clearance of circulating bacteria balances rapid restoration of blood sterility with induction of adaptive immunity. Cell Host Microbe 20, 36–48. doi: 10.1016/j.chom.2016.05.023 Brodeur, S. R., Angelini, F., Bacharier, L. B., Blom, A. M., Mizoguchi, E., Fujiwara, H., et al. (2003). C4b-binding protein (C4BP) activates B cells through the CD40 receptor. Immunity 18, 837–848. doi: 10.1016/S1074-7613(03)00149-3 Burman, J. D., Leung, E., Atkins, K. L., O’Seaghdha, M. N., Lango, L., Bernado, P., et al. (2008). Interaction of human complement with Sbi, a staphylococcal immunoglobulin-binding protein – Indications of a novel mechanism of complement evasion by Staphylococcus aureus. J. Biol. Chem. 283, 17579–17593. doi: 10.1074/jbc.M800265200 Cardone, J., Le Friec, G., Vantourout, P., Roberts, A., Fuchs, A., Jackson, I., et al. (2010). Complement regulator CD46 temporally regulates cytokine production by conventional and unconventional T cells. Nat. Immunol. 11, 862–871. doi: 10.1038/ni.1917 Carter, R. H., Spycher, M. O., Ng, Y. 
C., Hoffman, R., and Fearon, D. T. (1988). Synergistic interaction between complement receptor type 2 and membrane IgM on B lymphocytes. J. Immunol. 141, 457–463. Cavert, W., Notermans, D. W., Staskus, K., Wietgrefe, S. W., Zupancic, M., Gebhard, K., et al. (1997). Kinetics of response in lymphoid tissues to antiretroviral therapy of HIV-1 infection. Science 276, 960–964. doi: 10.1126/science.276.5314.960 Chen, A., Gaddipati, S., Hong, Y., Volkman, D. J., Peerschke, E. I., and Ghebrehiwet, B. (1994). Human T cells express specific binding sites for C1q. Role in T cell activation and proliferation. J. Immunol. 153, 1430–1440. Cherukuri, A., Cheng, P. C., and Pierce, S. K. (2001). The role of the CD19/CD21 complex in B cell processing and presentation of complement-tagged antigens. J. Immunol. 167, 163–172. doi: 10.4049/jimmunol.167.1.163 Chung, K. M., Liszewski, M. K., Nybakken, G., Davis, A. E., Townsend, R. R., Fremont, D. H., et al. (2006). West Nile virus nonstructural protein NS1 inhibits complement activation by binding the regulatory protein factor H. Proc. Natl. Acad. Sci. U.S.A. 103, 19111–19116. doi: 10.1073/pnas.0605668103 Clarke, E. V., Weist, B. M., Walsh, C. M., and Tenner, A. J. (2015). Complement protein C1q bound to apoptotic cells suppresses human macrophage and dendritic cell-mediated Th17 and Th1 T cell subset proliferation. J. Leukoc. Biol. 97, 147–160. doi: 10.1189/jlb.3A0614-278R Cummings, K. L., Rosen, H. R., and Hahn, Y. S. (2009). Frequency of gC1qR+CD4+ T cells increases during acute hepatitis C virus infection and remains elevated in patients with chronic infection. Clin. Immunol. 132, 401–411. doi: 10.1016/j.clim.2009.05.002 Da Costa, X. J., Brockman, M. A., Alicot, E., Ma, M., Fischer, M. B., Zhou, X., et al. (1999). Humoral response to herpes simplex virus is complement-dependent. Proc. Natl. Acad. Sci. U.S.A. 96, 12708–12712. doi: 10.1073/pnas.96.22.12708 Da Silva, R. P., Hall, B. F., Joiner, K. A., and Sacks, D. L. (1989). 
CR1, the C3b receptor, mediates binding of infective Leishmania major metacyclic promastigotes to human macrophages. J. Immunol. 143, 617–622. Dai, S., Rajaram, M. V. S., Curry, H. M., Leander, R., and Schlesinger, L. S. (2013). Fine tuning inflammation at the front door: macrophage complement receptor 3-mediates phagocytosis and immune suppression for Francisella tularensis. PLoS Pathog. 9:e1003114. doi: 10.1371/journal.ppat.1003114 Dempsey, P. W., Allison, M. E., Akkaraju, S., Goodnow, C. C., and Fearon, D. T. (1996). C3d of complement as a molecular adjuvant: bridging innate and acquired immunity. Science 271, 348–350. doi: 10.1126/science.271.5247.348 Ellegård, R., Crisci, E., Burgener, A., Sjöwall, C., Birse, K., Westmacott, G., et al. (2014). Complement opsonization of HIV-1 results in decreased antiviral and inflammatory responses in immature dendritic cells via CR3. J. Immunol. 193, 4590–4601. doi: 10.4049/jimmunol.1401781 Fausther-Bovendo, H., Vieillard, V., Sagan, S., Bismuth, G., and Debré, P. (2010). HIV gp41 engages gC1qR on CD4+ T cells to induce the expression of an NK ligand through the PIP3/H2O2 pathway. PLoS Pathog. 6:e1000975. doi: 10.1371/journal.ppat.1000975 Frachet, P., Tacnet-Delorme, P., Gaboriaud, C., and Thielens, N. M. (2015). “Role of C1q in efferocytosis and self-tolerance — links with autoimmunity,” in Autoimmunity – Pathogenesis, Clinical Aspects and Therapy of Specific Autoimmune Diseases, ed. K. Chatzidionysiou (Rijeka: In Tech), doi: 10.5772/60519 Fries, L. F., Friedman, H. M., Cohen, G. H., Eisenberg, R. J., Hammer, C. H., and Frank, M. M. (1986). Glycoprotein C of herpes simplex virus 1 is an inhibitor of the complement cascade. J. Immunol. 137, 1636–1641. Garcia, B. L., Zwarthoff, S. A., Rooijakkers, S. H. M., and Geisbrecht, B. V. (2016). Novel evasion mechanisms of the classical complement pathway. J. Immunol. 197, 2051–2060. doi: 10.4049/jimmunol.1600863 Girgis, N. M., DeHaven, B. C., Xiao, Y., Alexander, E., Viner, K. 
The Cincinnati Cooperative School of Technology, as Cincinnati State was known at its inception, was to be a 2-year post-secondary school operating exclusively on the cooperative education plan. Pictured: Clifford House - First President of Cincinnati State Technical and Community College, c. 1967 The school, then part of Cincinnati Public Schools, opened in 1966 with 100 students in four programs: Business Data Processing; Sales Marketing; Graphic Communications Management; Engineering Drafting. There were 38 cooperative employers. Pictured: The cover of The Technician, 1967 Within 3 years the school had expanded to 550 students and added 6 departments: automotive service management; electronics technology; business management; clinical technology; civil engineering technology; executive secretarial training. Undated photo shows two Civil Engineering students. 30,000 people were unemployed in the Cincinnati area in January 1971, and the jobless rate was 5.1%. Yet about 90% of Cincinnati Technical Institute co-op students were employed by the end of their first co-op term that same year. Undated photo shows CTC students promoting co-op on Fountain Square. “The College That Works”: In 1977, 99% of students participated in co-op, 84% worked on jobs directly related to the technology being studied, and 60% remained with and were promoted by their co-op employers. Pictured: Irvin Kuehn - High School Liaison Officer at a recruiting event, undated. Also in 1977, the average earnings for 5 co-op terms were $4,000, more than enough at that time to pay for tuition, fees, and books. Many co-ops were able to purchase their own car with their co-op earnings. Pictured: Marianne Meier, CTC Engineering Co-op Student at Cincinnati Milacron, 1978 “Co-opportunity Month” was in its ninth year in 1978 when this photo of 3 students in the Medical Assisting program was taken. Every March high school juniors from Hamilton Co. 
and Northern KY were invited to the college to learn about technical careers. Economic conditions impacted co-op registrations into the 1980s, with a steady drop in the number of area co-op jobs. President Frederick Schlimm declared 1983-1984 “The Year of the Co-op” as a way to re-emphasize the importance of co-op to the mission of the college. Pictured: President Schlimm in 1982. In 1988, CTC had the highest employment rate for graduates among all the state’s autonomous technical colleges. Also, students earned close to $5.5 million in co-op salaries that year. Pictured: Cincinnati State representative at a career fair in 1988. Cincinnati Milacron (then Cincinnati Milling Machine) was one of the first cooperative employers back in 1966, and remains a valued partner in co-op education today. Pictured: An unidentified man with Pres. Jim Long and Larry Morris, c. 1990 The college made national headlines, appearing on the front page of The Wall Street Journal on March 19, 1993. The article discussed the success of the co-op education plan in Cincinnati, where Cincinnati Technical College had a placement rate of 98% for co-op grads. On February 26, 1996, Cincinnati State received a visit from Vice President Al Gore, who remarked that the cooperative education program could be a model for the rest of the country. School officials arranged for Vice President Gore to work on a CAD/CAM computer. Over 800 employers in the region value Cincinnati State’s commitment to cooperative education today, and every year Cincinnati State honors its most outstanding co-ops. Pictured: Co-op student Suju Shrestha receiving a co-op student of the year award in 2005. In 2009, Peggy Harrier, Dean of the Business Technologies Division at Cincinnati State, was named “Educator of the Year” (formerly the Dean Herman Schneider Award) by the Cooperative Education and Internship Association. Pictured: Peggy Harrier with members of the 25-year class. 
Cincinnati State had almost 3,000 co-op placements in 2007, with more than $7 million in earnings by Cincinnati State co-op students. However, just to arrange the first co-op job, the college had to make 50 phone calls! Pictured: Jerry Froehlich, co-op coordinator for CET, undated.
The test for Tay-Sachs disease measures the amount of an enzyme called hexosaminidase A (hex A) in the blood. Hex A breaks down fatty substances in the brain and nerves. Tay-Sachs is an inherited disease in which the body can't break down fatty substances as it should, so the fatty substances collect in the nerve cells of the brain and damage them. Tay-Sachs can occur when parents pass on a changed gene to their child.
- If the baby gets the changed gene from both parents, he or she will get the disease.
- If the baby gets the changed gene from only one parent, he or she will be a carrier. This means that the child will have one gene that produces hex A and one that doesn't. The child's body makes enough hex A so that he or she doesn't get the disease. But the child can pass the changed gene on to his or her children.
A Tay-Sachs test may also measure the amount of another enzyme, called hexosaminidase B. People who cannot make either hex A or hex B have a condition called Sandhoff's disease. The Tay-Sachs enzyme test is usually done on blood taken from a vein or from the umbilical cord right after birth.
Why It Is Done
A test to measure hexosaminidase A is done to:
- See whether a baby has Tay-Sachs disease.
- Find Tay-Sachs carriers. People of Ashkenazi Jewish, French-Canadian, or Cajun descent who have a family history of Tay-Sachs disease, or who live in a community or population with a high rate of Tay-Sachs disease, may want to be tested.
- See whether an unborn baby (fetus) has Tay-Sachs disease. This is done early in pregnancy using amniocentesis or chorionic villus sampling.
How To Prepare
You do not need to do anything before having this test. If you are having this test to see whether you are a Tay-Sachs carrier, you should tell your doctor if you have had a blood transfusion in the past 3 months. Talk to your doctor about any concerns you have about the need for the test, its risks, how it will be done, or what the results may mean. 
To help you understand the importance of this test, fill out the medical test information form.
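The carrier arithmetic described above, where a child of two carrier parents inherits one HEXA allele from each, can be sketched as a simple Punnett-square enumeration. This is an illustrative sketch only, not a clinical tool; the allele labels are invented for the example.

```python
from itertools import product
from collections import Counter

# Each carrier parent has one working HEXA allele ("A") and one
# changed allele ("a"); a child inherits one allele from each parent.
carrier = ("A", "a")

# Enumerate the four equally likely allele combinations (the Punnett square),
# normalizing each pair so "aA" and "Aa" count as the same genotype.
outcomes = Counter("".join(sorted(child)) for child in product(carrier, carrier))
total = sum(outcomes.values())

labels = {
    "AA": "unaffected, not a carrier",
    "Aa": "carrier (still makes enough hex A)",
    "aa": "affected (Tay-Sachs disease)",
}
for genotype in ("AA", "Aa", "aa"):
    print(f"{genotype}: {outcomes[genotype]}/{total} ({labels[genotype]})")
```

So when both parents are carriers, each pregnancy has a 1 in 4 chance of the disease, a 2 in 4 chance of producing a carrier, and a 1 in 4 chance of neither.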
Abstract
In masonry construction, the buttress serves to resist the lateral thrust of the main wall. This architectural element has been used in residential and non-residential buildings in various ways since ancient times. Archaeological evidence is the only valid source from which we can learn about the history of buttresses: where and how the buttress was first used is not known, it is nowhere explicitly documented, and even archaeologists mention it only briefly and in passing in their excavation reports. Based on the limited reports available, however, the widespread application of this element can be dated to the post-Neolithic era, when builders became aware of its structural performance and, drawing on accumulated experience, gradually employed it in new ways with better structural results. This research examines buttress types in terms of position, form and shape, materials, execution technique, ornament, and technical function. A study of buttresses built during the prehistoric and historical periods shows that there was no scientific structural framework based on findings about the strength of materials or the rules of mechanics. Little by little, through experience and close attention to the nature and behavior of masonry buildings, architects came to understand the structural performance of buttresses, attending not only to the balance of forces but also to appearance and aesthetics. The lack of comprehensive research on this subject motivated the present study, which addresses the issue through library research and a descriptive-analytical method.
(L-R) Dr Owen Siggs, Prof Stuart Tangye, Prof Chris Goodnow, Karla & Barbara Media Release: 21 May 2020 A four-year-old girl with a severely painful, rare immune disorder received life-changing targeted therapy, following whole genome sequencing at the Garvan Institute of Medical Research. Karla, now six years old, has a rare gene variant that affects an immune ‘checkpoint’ protein, resulting in an imbalance in her immune system that led to childhood onset hepatitis and arthritis. By supplementing her immune system with fortnightly injections of a functioning replacement protein, Karla’s condition has since vastly improved. The researchers recently published the case in the journal Frontiers in Immunology. Karla was one year old when she started walking, but a few months later she couldn’t hold herself upright. At only 20 months, she was diagnosed with childhood onset autoimmune hepatitis, which severely affected her liver. Less than two years later, Karla was diagnosed with painful inflammatory arthritis affecting multiple joints. Karla was initially treated with a series of immunosuppressant drugs including the steroid prednisolone, however these had significant side effects, were often traumatic, and did not relieve her severe joint pain. “For two to three years, we went through continuous blood tests to try and optimise her medication, but the trial and error was very hard – when Karla’s liver got better, her arthritis would get worse. Often treatments were traumatic, and she didn’t want to go to the doctor,” says Barbara, Karla’s mother. Karla was referred to the Clinical Immunogenomics Research Consortium Australasia (CIRCA), a multidisciplinary team of researchers and clinicians investigating the causes of rare immune diseases. In 2017, Karla’s genome was sequenced at the Garvan Institute. 
“Through whole genome sequencing, we discovered a gene variant in Karla’s CTLA4 gene, which provided a genetic diagnosis for the immune symptoms she was experiencing,” says Garvan’s Executive Director and senior author Professor Chris Goodnow. CTLA4 is an immune ‘checkpoint’ protein, located on the surface of some immune cells, which controls the extent of the body’s immune responses. The research team discovered that the gene variant (Y139C) prevented Karla’s CTLA4 protein from ‘docking’ to its target, causing her immune system to go unchecked and attack her joints and liver. “Karla was unlucky to inherit this rare genetic change, but also fortunate that it pointed directly to a treatment. This is what personalised medicine is all about, and there are many others like Karla who stand to benefit from genetic testing,” explains lead author Dr Owen Siggs, who made Karla’s genetic diagnosis at the Garvan Institute. Based on her genetic diagnosis, Karla’s treatment was altered by adding abatacept, a synthetic form of the CTLA4 protein that is already used clinically for the treatment of adult rheumatoid arthritis. “Without the genetic analysis, Karla might not have received this life-changing treatment. Since abatacept does not yield such an exceptional response in most children with arthritis, it is not available under the Pharmaceutical Benefits Scheme (PBS),” says Professor Goodnow. “The challenge we are now pursuing is to use genomics to identify all children or adults who will respond exceptionally well, like Karla, to this targeted treatment.” “Using genomics, we are already personalising treatments to individuals. We hope that in future our findings will allow abatacept to be listed under the PBS for those who can benefit from it,” he adds. 
“The CIRCA team connected us with physicians caring for other children like Karla around the world, providing advice on tailored dosing that proved critical to getting Karla’s arthritis under control,” explains paediatric rheumatologist Dr Davinder Singh-Grewal, who treated Karla’s arthritis at The Children’s Hospital at Westmead. Two years after her diagnosis, Karla is doing well and her arthritis has been in remission for over 12 months. “I see a huge difference,” says Barbara. “Before, Karla used to limp a lot because her joints were sore – that’s what she was used to, the pain, and it was constant. Now she doesn’t have the pain, so when it’s about time for her next dosage, or if she hits her arm by accident, she tells me because she knows what the difference is.” This project was funded by the National Health and Medical Research Council and the John Brown Cook Foundation. Professor Goodnow holds The Bill and Patricia Ritchie Foundation Chair. Notes to the editor: CIRCA is a national, Garvan-led collaboration of medical and scientific professionals that investigates the genetic causes of rare immune diseases. CIRCA’s mission is to understand the genetic causes of immunological diseases in individual patients and to use these findings to help improve outcomes for the patient, their families and other individuals with similar immune diseases. This story was covered by the ABC.
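For readers curious about the notation: "Y139C" is shorthand for the amino acid tyrosine (Y) at protein position 139 being replaced by cysteine (C). A minimal sketch of a parser for this one-letter missense notation follows; `parse_protein_variant` is a hypothetical helper written for illustration, not part of any real genomics library.

```python
import re

# The 20 standard one-letter amino acid codes.
AMINO_ACIDS = set("ACDEFGHIKLMNPQRSTVWY")

def parse_protein_variant(variant: str) -> dict:
    """Parse a simple missense variant like 'Y139C' into its parts.

    Hypothetical example helper: reference residue, 1-based position,
    and alternate residue, all in one-letter amino acid code.
    """
    m = re.fullmatch(r"([A-Z])(\d+)([A-Z])", variant)
    if not m or m.group(1) not in AMINO_ACIDS or m.group(3) not in AMINO_ACIDS:
        raise ValueError(f"not a simple missense variant: {variant!r}")
    return {"ref": m.group(1), "position": int(m.group(2)), "alt": m.group(3)}

print(parse_protein_variant("Y139C"))
```

Real variant reports use the richer HGVS nomenclature (e.g. "p.Tyr139Cys"); this sketch handles only the compact one-letter form used in the article.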
December 6, 2020 monarch chrysalis black spots POLL: How many decorative pillows on your bed? I guess it was fine! Restoration Hardware - Cloud Modular Sectional review? Those gold spots encircling the abdomen of the monarch chrysalis: do you know what they are? Prevention Tips: we learned the hard way with the lid up... my toddler accidentally bumped the deodorant and it got wedged so beautifully perfect into the toilet, that the whole toilet had to be replaced :/ LID DOWN ! It has rich orange coloration with black veins, and white spots on the black wing borders and on the body. Monarch forewings also have a few orange spots near their tips. He felt that the spots were involved in the distribution or formation wing scale coloration. Yesterday it was very hot and in the evening I noticed that they had black splotches on three of the four of them. I'm wondering if it was a typo? Three monarch butterfly chrysalises on a single milkweed stem, seen under visible light. Yesterday it was very hot and in the evening I noticed that they had black splotches on three of the four of them. oh and something else to be wary of when keeping chrysalids inside is keep them away from direct sunlight. Those gold spots encircling the abdomen of the monarch chrysalis: do you know what they are? This morning I went out to check if my last two caterpillars had gone into their J stage only to find that two of my chrysalis’ were gone. Their green hemolymph (blood) dries to black scabs. A chrysalis makes a yummy winter treat to a mouse, bird, or other critter. In our school garden we found a dead chrysalis, and when we dissected it we found a surprisingly gross surprise. I have four in a chrysalis. Habitat: The monarch can be found in a wide range of habitats, such as fields, meadows, prairie remnants, urban and suburban parks, gardens, trees, and roadsides. Males have small, black scent glands on vein in the center of hind wings. 
In fact, those gold spots look like real gold and they have served as a model for jewelry. While a healthy chrysalis does turn dark just before the adult butterfly is ready to emerge, an unhealthy one turns solid black… That question is one of the first ones asked by people who see a monarch chrysalis. None of my other chrysalis are this color or have any other diseases. I did drop a lot of things down those stairs, though -- at least 2 winter squashes -- what a bang they made when the door was closed! Make those decisions and the rest should be easy if you follow your own function and taste needs! Monarchs host on milkweed because it has just enough toxicity to deter predators. Hello, I was raising monarch caterpillars this year. Put a soft towel underneath in case it falls. Hello Jackie, Monarch caterpillars find some tight spots to create their chrysalis. So far I have got 12 chrysalis’ – well I did. Chrysalis with Black Death, and remember, the stench is awful! EDIT: it has been on my porch 3-4 days now, so I don’t think the move caused it but who knows! Just pulled the trigger on the Jackson Plush in Dove from here: https://www.furniturepick.com/posh-modular-sectional-dove-jackson-furniture-jf-4445-59-31-1729-26.html My daughter and I spent two full days searching and cannot find a five-piece for $1500. or injury. Sometimes there is an injury to the chrysalis (like when another caterpillar walks over it) and it can be dark as well. There are three orange patches on the tip of the fore wing. We ended up stringing some Christmas lights along the side of it, a long double string that wrapped around almost the entire length of the space and came back up. This one chose a stem which drooped so I very carefully attached the stem to a strong stick on my porch, out of the weather. If there's not a vertical line as seen on the right, it'll be a male monarch. 
What is the function of the gold dots on a monarch chrysalis? Monarch butterflies have a fantastic life cycle, and the monarch chrysalis is remarkably interesting to study and observe. As the butterfly develops, the chrysalis envelope gets thinner and transparent, so that we can actually see the butterfly forming: the orange and black wings, the black body with its white spots, and the abdominal segments at the top.

The monarch chrysalis on the left has an indentation that stops level with the black dots. I couldn't find anything specific about black spots on the chrysalis. The pupa may be lighter in color, shriveled, or the rings at the top may turn black. Often the dark color is melanin produced in response to a pathogen (bacteria, virus, etc.) or injury.

At the upper corner of the top set of wings are orange spots.

I didn't worry too much, because it was late season (September), and if these butterflies were infected, I figured they wouldn't make it to Mexico, or very far in their migration, before they expired. But we're talking about milkweed that has been treated with pesticides. Ugh, I just had to put down another: very dark, curled up on the soil, a blob of black in the center of it, maybe oozing.
Poisoned milkweed.

The underside of the monarch is similar to the upper side, but the hind wing is much paler orange. The monarch chrysalis, where the caterpillar undergoes metamorphosis into the winged adult butterfly, is seafoam green with tiny yellow spots along its edge.

All of the butterflies eclosed normally and had no problem hanging and flying away. Is this OE? I am very new to monarchs, and to butterflies in general! A monarch chrysalis that is dark and clear means the butterfly is ready to emerge.

Monarch caterpillars are normally bright yellow, black, and clean white. Look what I found today! Everything was going fine. The pupae formed and quickly hardened into beautiful green chrysalises adorned with metallic gold and black spots. The gold spots on the monarch chrysalis are a mystery that has inspired much curiosity.

The absence of these signs does not mean they didn't have a mild OE infection.
Although there haven't been a whole lot of studies on monarch parasitoids, three families of wasps have been documented to parasitize monarchs, among them the Braconidae. I admittedly couldn't find much information on which exact species parasitize monarchs. The visual signs of tachinid parasitization are many. Parasitic wasps and ants can also cause a chrysalis fatality in the fall and spring if they are a problem in your area.

The virus behind Black Death can also affect chrysalides: the entire monarch chrysalis turns black. Normal darkening, by contrast, occurs 2-3 days before the butterfly emerges, around the time that the pigments which color the butterfly's scales are laid down. The black spots of disease are patchy, not symmetrical.

A butterfly emerging on 28 February may come from an egg laid before 5 February, but is very unlikely to be from an egg laid after that date. During mid-summer, the process goes from egg to adult in about 23 days.

The telltale gold spots on the outside of a chrysalis are ports of entry for oxygen.

The following photograph (Figure 5) shows three chrysalises.
Chrysalis discoloration is another thing to look out for. Dark spots or blotches on the pupa are replicating spores; they mostly form on the abdomen, but they can also form on the eyes, antennae, and wing veins. OE is one of the few pathogens where the dark spots actually are the spores. One sign that monarch larvae could be infected with a pathogen is that they stop eating and hang from the host plant (or the side of a container) by their prolegs, with the anterior and posterior ends drooping downwards.

Most monarch butterflies live for five weeks, except for the generation born at the end of summer. Aside from the monarch chrysalis itself, the awe-inspiring southward migration of these butterflies from southern Canada and the northern United States to Mexico and Florida is a well-documented and observed annual phenomenon.

My mom brought me some milkweed plants (which are in the ground outside) a few weeks ago, and two caterpillars managed to survive the awful wasps I've been catching and went on to pupate.
Other gold spots occur on the thorax, the wing bases, and the eyes. There are 12 metallic gold spots on the remaining parts of the chrysalis, all of which are necessary for the normal development of the butterfly.

If the spots are under the cuticle of the chrysalis, then it is very possible it is OE. The white spots can range in size.

Caterpillar with NPV/Black Death.

Can you post another picture? The chrysalis is a week old, and the black definitely showed up in the past 24 hours. Last year I had some late-season chrysalises that formed black spots very similar to yours.
If the caterpillars are in darker areas, their white bands are narrower and their black bands are wider. Dead larvae and pupae often turn dark brown or black within a few hours of death; this can be a sign of bacterial decay. Some or nearly all of the monarch caterpillars slowly turn black and die. Once the butterfly emerges (if it emerges), the spores are then on the outside of the insect, on the scales.

Monarch butterflies go through four stages: the egg, the larva (caterpillar), the pupa (chrysalis), and the imago (adult butterfly). The two strong gold dots are positioned directly over the future butterfly's eyes. The undersides of the wings are an orange-brown color. I once read that it was an eleven-year-old boy who discovered that distinguishing feature!

I got them as eggs. They formed chrysalises on or around August 16th (ten days ago). The one posted is blurry, so it is hard to see the spots.

Overwintering a chrysalis outdoors.
I raised one last year and managed to get the following pics of the monarch butterfly chrysalis (jade with a gold-like line and dots). I had never realized it had all those gold markings on it. We purchased a small microscope, so I can find out for myself if it is OE. It's fun to watch the various stages.

The monarch's wingspan ranges from 8.9 to 10.2 centimetres (3.5–4.0 in). The wings are orange-brown with black borders and blurred black veins. The female doesn't have a scent spot as the male does. The monarch chrysalis is blue-green with a band of black and gold on the abdomen.

A healthy chrysalis is not dry or wrinkled like one with Black Death, nor does it have any dark spots; dark blotches on the pupa are maturing OE spores. Chrysalises overwintered outdoors have no problem surviving a normal winter.

Parasitic wasps can prevent your monarch from ever making a chrysalis. If you have the info, we'll try to ID the wasp; since we know the host plant and the other insects around it, we can narrow the possibilities down by quite a bit.

The gold spots on monarchs were already being studied in the 1970s.