proba (float64, 0.5 to 1) | text (string, lengths 16 to 174k)
---|---
0.999998 | The New Dr. Who? The BBC is reporting that Former Doctor Who Tom… - When pigs fly.
The BBC is reporting that Former Doctor Who Tom Baker says that Eddie Izzard is to be the next doctor for the TV show that starts in 2005. Eddie is known for being a cross-dressing comedian in Britain.
My weekend began with some frustrating affairs. My plan for the trip to MLW (Mates Leather Weekend) involved NOT driving to P-town (to avoid the traffic getting there and back). I was to take Boston Harbour Cruiseline's fast ferry (the same kind they are planning to add on the Rochester-Toronto run next year, except this one is passenger-only). After driving into Boston to the harbour and waiting almost forty-five minutes at the gate (the ferry was scheduled to leave at 2:00pm), I went to the main office for BHC and inquired where the boat was. The stunned receptionist told me that there was no 2 o'clock trip; it had been canceled.
What?!? I pulled out the ticket I had purchased on their website and showed it to her. She took the ticket and got her supervisor. I was then told that the boat had broken down and was in dry dock. All persons that had tickets were called, except for those (like me) who purchased tickets through their website... Apparently they don't have access to the customer data from their own web site, so they had no telephone number to call! Well, I was fit to be tied. I did not want to drive there; that was the whole point! They were nice in that they 1) refunded my $65, 2) gave me a free round-trip ticket for two on the fast ferry good for the next seven years, and 3) were going to pay for a bus trip to P-town, and provide return passage on the ferry as planned. The last part of this plan went south when we discovered, as it's off-season for visiting P-town, that there was only one bus, and it had left South Station at 1:00pm. I was left with no choice but to drive.
I fought the Friday afternoon traffic out of Boston down I-93, heading for Rt. 3, which takes you to Cape Cod, and then to Rt. 6, which ends in Provincetown. I turned on the radio to listen to the traffic report to discover why I-93 was so packed, and heard that Rt. 3 had been closed due to an overturned cement truck and that 3-A (the alternative to Rt. 3) was also backed up and hardly moving. My weekend was not off to a good start. To make a long story short, I took many, many back roads to get beyond the blockage, picked up Rt. 3, and continued to P-town unabated. The trip, which normally takes about two hours, took four and a half. I arrived at 6:30pm and checked into the bed & breakfast.
The rest of the weekend went well! Although chilly, the weather was nice. There were a lot of leathermen and bears from all over the east, including some friends I knew from New York City and Cleveland! Both evenings, I ate well! On Friday, I ate at The Mews and had a wonderful pâté, filet mignon and all the trimmings! It was delicious! Saturday, I went back to The Lobster Pot, where I had eaten before. As always, it was wonderful. The "First Mate Contest" was held outdoors under a huge tent, but it was so crowded that you could neither see nor hear what was going on; so I just hung out by the fireplace in the bar with hundreds of other guys. Next year, they should consider using closed-circuit TV to broadcast the contest on all the screens in the bar! |
0.921835 | The total number of days between Sunday, June 10th, 1951 and Friday, January 16th, 1976 is 8,986 days.
This is equal to 24 years, 7 months, and 6 days.
This does not include the end date, so it's accurate if you're measuring your age in days, or the total days between the start and end date. But if you want the duration of an event that includes both the starting date and the ending date, then it would actually be 8,987 days.
If you're counting workdays or weekends, there are 6,419 weekdays and 2,567 weekend days.
If you include the end date of Jan 16, 1976 which is a Friday, then there would be 6,420 weekdays and 2,567 weekend days including both the starting Sunday and the ending Friday.
8,986 days is equal to 1,283 weeks and 5 days.
The total time span from 1951-06-10 to 1976-01-16 is 215,664 hours.
This is equivalent to 12,939,840 minutes.
You can also convert 8,986 days to 776,390,400 seconds.
June 10th, 1951 is a Sunday. It is the 161st day of the year, and in the 23rd week of the year (assuming each week starts on a Monday), or the 2nd quarter of the year. There are 30 days in this month. 1951 is not a leap year, so there are 365 days in this year. The short form for this date used in the United States is 6/10/1951, and almost everywhere else in the world it's 10/6/1951.
January 16th, 1976 is a Friday. It is the 16th day of the year, and in the 3rd week of the year (assuming each week starts on a Monday), or the 1st quarter of the year. There are 31 days in this month. 1976 is a leap year, so there are 366 days in this year. The short form for this date used in the United States is 1/16/1976, and almost everywhere else in the world it's 16/1/1976.
|
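For readers who want to check these figures themselves, here is a short Python sketch (not part of the original calculator page) that reproduces the counts above with the standard library:

```python
# Verify the day counts between 1951-06-10 and 1976-01-16 (end date excluded).
from datetime import date, timedelta

start, end = date(1951, 6, 10), date(1976, 1, 16)
days = (end - start).days
print(days)                                      # 8986
print(divmod(days, 7))                           # (1283, 5) -> 1283 weeks, 5 days
print(days * 24, days * 24 * 60, days * 86400)   # hours, minutes, seconds

# Weekday/weekend split over the same span; weekday() >= 5 means Sat or Sun.
weekend = sum((start + timedelta(d)).weekday() >= 5 for d in range(days))
print(days - weekend, weekend)                   # 6419 weekdays, 2567 weekend days
```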
0.939089 | What is the Travel Training program?
Objective: To provide the knowledge and practical skills that are needed to travel independently on public transit.
Who is Eligible: The Travel Training program is in a pilot phase. Its starting point is to assist people experiencing disability who want to learn to ride city transit. In the future it may expand to include seniors, newcomers, and others.
There are many benefits that come from learning to ride city transit, including increased freedom, confidence, independence, and access to services. It also creates a greater sense of community and reduces costs!
*If you are a paratransit customer and learn how to use conventional transit, you will be able to continue to use paratransit when needed. |
0.939023 | Usage: Clean the face first, then apply this lotion 2-3 times a day.
Shelf life: 24 months; the specific date printed on the actual product should be considered final. Please follow the instructions on the product. |
0.999828 | Response to the article: “A Teacher’s View on Tenure”.
1. It was written by a teacher.
2. She is a department chairperson.
3. A teacher working in the Los Angeles USD (LAUSD).
4. It is a “different” discussion concerning teacher tenure.
I greatly appreciate that this article is written by a teacher, and that she does not rely on some of the old, tired, and false clichés of "protecting academic freedom". I won't speak for all superintendents, but in my experience (and in my own experience as a principal), I did not see principals rushing to get rid of teachers who were effective in the classroom and who thus aided the principal's own "professional survival"! One of the great myths in public education is the idea that principals are driven to get rid of good (effective) teachers. For sure, I have seen some principals who were ineffective as school building instructional leaders, or even not very good at being building/staff/people managers. But these few people don't make the case for the need for teacher tenure in the K-12 world. It would be much more productive, educationally (for the sake of the children), to professionally develop, or (if that fails) remove, these ineffective principals. The response to a "bad" principal is not a bad regulation that would allow ineffective teachers to essentially keep a job for life, regardless of the quality of the professional product. Further, a lengthy, expensive, and distracting process of removal for reasons of incompetence, and in many cases criminality, is also not in the best interest of a resource-strapped public school system. It is important to note that the writer is a department chairperson. This fact goes a long way in understanding her thoughtful take on tenure as a teacher. As a department chairperson (for those who may not know), she has no doubt gained the respect of her peers and administration for her effective classroom practice. Why is this important? The discussion of tenure among educational professionals is almost always related to a "personal situation" (our stand on the topic is often based on where we stand professionally!). As a principal (and throughout my entire professional life) I actually never thought about tenure; my focus was on the pursuit of professional excellence; "how well were the students doing?" was my primary question. Now, my life perhaps would have taken a very different professional and monetary turn if I had been more of a "careerist"; but I make no apologies, and have no regrets, as to my singular focus on students. Further, the best (most effective) teachers I have supervised cared less about tenure and more about their students succeeding academically. And the "best of the best" actually wanted to be challenged to improve their practice. Also important, effective teachers were not "impressed" with colleagues who were "poor performers" or "slackers"! The most effective educators I have encountered over the years have, in a sense, been driven by a set of core ethics; these ethical values would produce a great deal of discomfort on the part of a teacher if students were failing and they were just "collecting a check". At the same time, these individuals wanted to work in an environment where leadership practices and evaluations were driven by school building leaders with knowledge of teaching and learning standards, and who also evaluated them through a code of professional ethics. No professional should be judged by arbitrary or unprincipled standards.
I actually believe that the two principles of "Do no harm to children" and the good professional and ethical treatment of teachers can be balanced; but it would take a little more intellectual and political courage than has been displayed so far in this national conversation on teacher tenure. One of my primary critiques/concerns of the entire "teacher tenure" conversation is its limitations. This is due in large part to the fact that the conversation is being led, to a large extent (on both sides), by people who have little to no experience in the actual practice of school-based teaching and leadership! Or, on the other hand, by people and organizations who don't have the educational interest of the children as a primary (not accidental) objective of the entire public education effort. This misguided conversation is also taking place in an atmosphere where the "solutions" to pedagogical problems are "oversimplified", "sloganized", and "dumbed down"; or, in the case of some printed journalistic efforts, insanely and provocatively sensationalized. And it is this "one act solves all educational problems" approach that is responsible for so much harm and misdirected behavior presently at work in the profession. This confusion further contributes to a primary reason that public education continues to fail, and in particular, fail children and communities of color. Unfortunately, these same theoretically deficient "magic-bullet solutionneers" just take turns, ascending and descending into and out of power; and so we never get at the fundamental problems that cause so many children to not have a positive public educational experience.
Some very well-meaning folks have invested time, effort, and money in legal challenges to state teacher tenure laws (thus the LAUSD point). I agree with the fundamental idea that they put forward: that "tenure laws" are inconsistent with creating learning institutions that place the child's right to be safe and to effectively learn above the right of an individual adult to be employed in that particular job category. Now, this is not such a strange or radical idea as people make it out to be. Every day our "rights" are evaluated in relation to the greater "community good" and the rights of others in the community. We have a right to own a car, but not a right to drive without a license or insurance, or to drive recklessly or drunk; "free speech" (as we are often forced to explain to students) does not mean any speech, in any place, at any volume, and at any time. "Rights" don't exist in isolation; they are not absolute; there are "standards", and a "hierarchy of rights", that must be taken into consideration. Society has even gone as far (and rightfully so) as to say that your right to be a parent is not greater than your child's right not to be abused, not to be put in physical or psychological danger, or not to be denied an education. And so I agree with the fundamental premise of the "no tenure" folks. My concern is that too much is being promised with the removal of teacher tenure; promises that in reality can't be realized by this single act. Even if a school district were able to remove 15% of its lowest-performing teachers (assuming we created a sound standard- and rubric-based system to identify such people), we would still need to effectively and efficiently improve the practice of the remaining 85%. We would need to make sure that the 85% of "good practitioners" had the materials and supplies they need to do their work effectively. We would need to come up with a plan to retain these effective teachers, and also a plan to recruit replacements for the 15% who were removed (at the same time that a percentage of the 85% are also retiring each year). And particularly in middle and high schools, we would need to make sure that these 85% of effective teachers could work in a safe, "teaching-efficient" and productive school learning environment. This would mean adding all of the much-needed out-of-classroom student support systems and resources (i.e., counseling, informal education programs, academic support, etc.) to complement all of this "good teaching". Finally, to fully support the 85% we would need to develop a strong crop of effective school building leaders (SBLs). These leaders would need to have a deep knowledge of how to build a strong, positive, safe, and productive school culture; knowledge of pedagogical theories; strong literacy in content standards and teaching methodology; and fluency in theories of personnel development. These SBLs would need to function as the "Chief Instructional Coach" and primary professional developers in the school building. My hypothesis (based on experience) is that a school can survive a lot of things, but it is almost impossible for a school to survive, and thrive, under poor and ineffective leadership. A bad SBL could even undermine the best work of the best teachers!
Further, the terribly misguided and destructive approach of many of the "anti-tenure" crowd, to utilize standardized assessments to "weed out" bad teachers, is just plain bad pedagogy. The role of standardized assessments is to serve as a diagnostic tool to better serve the needs of children, while at the same time improving and sharpening teaching methodology. This means that the teacher evaluation process should be focused on discovering and developing talent, not "catching folks". Therefore the "evaluation process" should be in three parts: standardized assessments; the formal and informal observation of teachers (guess what) teaching; and the assessment of student work (product). Utilizing standardized assessments and classroom observations as punitive tools is to try to fix a political problem (the absence of the will on the part of politicians to declare children a "protected class") with tools designed for student diagnostic/development purposes and for teacher professional development. In education, when we use good tools for the wrong reasons, bad things happen to children and adults.
The action needed must be different, as the writer points out; but I would go a step further and say the plan must be "upsetting" and "disturbing" to the present state of affairs. "Nibbling" on the edges of the problem won't get it done. Things are not going well; and if so many children are not successful, then how can the professionals who serve them call themselves successful, or, in our language, proficient? The "teacher tenure" question, in my view, is a professional ethics question (and that is how it should be solved). As professionals, we should declare that public schools are "High Reliability Organizations" (HROs), or, as I also like to call them, High Risk Organizations, an idea from the book Managing the Unexpected by Sutcliffe and Weick. Although very diverse in their missions and structures, HROs all seem to have some fundamental organizing principles in common. In short, HROs are those organizations where a "failure" can lead to serious injury, great societal harm, or death; examples of these types of organizations are aircraft carriers, nuclear power plants, hospitals, and fire departments. In these types of organizations, incompetence could equal death, either for the professionals involved or for the public in general. Therefore, they necessarily recruit based on the criterion of "high competence", and they also have an extremely low tolerance for incompetence. It is an important first step that these organizations recruit the best and most skilled for the job description. Sincere outside stakeholders who are unpracticed, untrained, and lacking in technical professional knowledge don't make the personnel decisions for HROs. Second, there is some type of extensive internship program where, after earning certification, the practitioner must further train under the watchful eye of an effective "master" veteran in the field (they never have their practitioners leave "basic training" and then go directly into solo practice). Third, there is a great deal of constant attention to the upgrading (and, yes, evaluation) and professional development of skills and competency. Fourth, since the emphasis is on competency, tenure (or right to a position) can't be used to compromise the mission, the successful operation, the safety of the organization, the success and safety of the team, or the safety and well-being of the public it seeks to serve. Finally, these HROs all seem to have very powerful systems of review, monitoring, and "operational redundancy" (someone is always checking on the "checker", so that nothing is missed), and have strong self-evaluative, self-correcting procedures in place (i.e., they honestly deal with the "What went wrong?" question). Now, some of us believe that public schools fit the definition of HROs. And if proof is needed, all one need do is visit any state prison (and imprisoning a lot of people is actually something we do pretty efficiently as a nation); there you will encounter the casualties (prisoners) of a public education system that failed to capture the curiosity, talent, and natural inclination to learn that these prisoners had at a young age. There is a terrible societal price we choose to pay for so many poor educational outcomes. In an interesting and tragic way, we have transformed "school failure" into some very robust, financially rich and vibrant criminal justice and social service systems. However, I think that there is a more humane, positive, and safer path to economic development than depending on school failure.
And that is why the answer to the "tenure" question will ultimately reflect our thoughts on the ethics of professionalism, and our responsibility to the nation's future. If public schools were declared and designated HROs, the tenure debate would be brought into a self-determining/defining focus. A hierarchy of "rights" would be established that in all situations favors the safety and educational well-being of children. The school system would then shift from a primary mission as a "business" and an employer of adults, to a "prime directive" to do everything within its power to do no harm to children; which means to educate them as if we all saw every school child as our own child; as if we needed every child to cure, care for, and comfort us as we pass on the responsibility for the planet to them. As if our future well-being as a nation depended on them; and you know something, it really does! |
0.99999 | How do you keep your dog feeling good?
Regular exercise helps prevent obesity and health risks associated with aging. It also expends energy which can help your dog cope with separation anxiety and misbehavior in your absence (if he’s tired, he’ll nap instead of eating your couch cushions). |
0.943119 | Saint Peter (Latin: Petrus, Greek: Πέτρος Petros, Syriac/Aramaic: ܫܸܡܥܘܿܢ ܟܹ݁ܐܦ݂ܵܐ, Shemayon Keppa, Hebrew: שמעון בר יונה Shim'on Bar Yona; died c. 64 AD), also known as Simon Peter, Simeon, or Simōn, according to the New Testament, was one of the Twelve Apostles of Jesus Christ, leaders of the early Christian Church. The Roman Catholic Church considers him to be the first Pope, ordained by Jesus in the "Rock of My Church" dialogue in Matthew 16:18. The ancient Christian churches all venerate Peter as a major saint and associate him with founding the Church of Antioch and later the Church in Rome, but differ about the authority of his various successors in present-day Christianity.
The New Testament indicates that Peter was the son of John (or Jonah or Jona) and was from the village of Bethsaida in the province of Galilee or Gaulanitis. His brother Andrew was also an apostle. According to New Testament accounts, Peter was one of twelve apostles chosen by Jesus from his first disciples. Originally a fisherman, he played a leadership role and was with Jesus during events witnessed by only a few apostles, such as the Transfiguration. According to the gospels, Peter confessed Jesus as the Messiah, was part of Jesus's inner circle, thrice denied Jesus, and preached on the day of Pentecost.
According to Christian tradition, Peter was crucified in Rome under Emperor Nero Augustus Caesar. It is traditionally held that he was crucified upside down at his own request, since he saw himself unworthy to be crucified in the same way as Jesus. Tradition holds that he was crucified at the site of the Clementine Chapel. His mortal remains are said to be those contained in the underground Confessio of St. Peter's Basilica, where Pope Paul VI announced in 1968 the excavated discovery of a first-century Roman cemetery. Every June 29 since 1736, a statue of Saint Peter in St. Peter's Basilica is adorned with papal tiara, ring of the fisherman, and papal vestments, as part of the celebration of the Feast of Saints Peter and Paul. According to Catholic doctrine, the direct papal successor to Saint Peter is Pope Francis.
Two general epistles in the New Testament are ascribed to Peter; however, some biblical scholars reject the Petrine authorship of both. The Gospel of Mark was traditionally thought to show the influence of Peter's preaching and eyewitness memories. Several other books bearing his name – the Acts of Peter, Gospel of Peter, Preaching of Peter, Apocalypse of Peter, and Judgment of Peter – are considered by Christian churches as apocryphal.
Patronage: guardian of the Catholic Church; Kiev; guardian of Vatican City; protector of the Jewish people; police officers, the military, grocers, mariners, paratroopers; invoked against sickness. |
0.9976 | This is not purely an Italian translation issue; after investigation and discussion with Italian native speakers, it looks like a question more related to how professions were recorded in Calabria (or in the whole of Italy?).
The question concerns a record from the 19th century located in Calabria. I do not know if the registers are in Italian or in Calabrese (note that Italian unification happened during that period, so each state had its own language/dialect).
I have noticed that several registers include Italian adjectives as professions, more concretely the professions "civile" and "legale" (which translate to civil and legal in English).
What do "civile" and "legale" mean exactly in English as professions?
How are professions recorded in Italian (Calabrian) records?
In 19th-century Italian documents, I've seen avvocato for lawyer, as well as legista. While legale translates as legal, like you I've been unable to find a conclusive answer. However, if not a lawyer as we understand the term today, it indicates legal knowledge, perhaps one who gives legal counsel and, thus, a person in a higher position in society, as does civile. Civile translates as civilized: a middle-class citizen, someone who was well off (and perhaps a landowner, although other terms were used for that, such as possidente, one who owns, and proprietario, literally owner). I lack insight into Calabrian records, as my research has been confined to Campania.
In the archives of the Mezzogiorno we find civile used in different manners: as noted here already, at times it is used more liberally to include the middle class, but quite often it in fact denotes a more elevated class.
I have examined records where nobles were listed as "civile" and even more often as "legale", which at times denoted an actual legal class: thanks to a royal dispaccio of Ferdinand IV, families that held that class for three generations, did not enter a more base-line profession, and did not marry into a lower-class family could potentially meet the requirements to apply to be recognized as nobles (the third class of nobility, the first being ancient feudal families, nobiltà generosa; the second being families that gained nobility from service to the crown, nobiltà di privilegio; and the third being the families that are often listed as "legale").
1.3.) FIRST ORDER "NOBLE FAMILIES" - THIRD CATEGORY "LEGAL OR CIVIL NOBILITY" (NOBILTA' LEGALE O CIVILE): comprises those who can show that they themselves (*), as well as their father and grandfather, have lived in a crown or royal city (baronial towns excluded) always in a civil manner, with decorum and comfort, without exercising any low or common office or employment, and have always been reputed by the public to be honourable and respectable men. The third category is equivalent to the second, and also includes merchants of exchange (Negozianti di Cambio), whose father and grandfather exercised the same trade and no other of inferior condition. Alongside the sons of subaltern officers, the sons of provincial auditors and of royal governors are also admitted: the former at the age of 16, the latter at 18. And finally, the sons of wool and silk merchants whose father and grandfather carried on the same trade may be granted the favour of serving as cadets only at the age of 18.
For some further clarification of possidente and proprietario: in many cases we find possidente to mean someone not working and living off a pension from some business or an inheritance, whereas proprietario often suggests that the owner of the land/business is more directly involved in the management/trading of the operation. A possidente may have had someone to do this for him. Granted, both are used interchangeably, and alone they don't qualify any family for any sort of class.
|
0.999169 | This lecture presents some examples of Hypothesis testing, focusing on tests of hypothesis about the variance, that is, on using a sample to perform tests of hypothesis about the variance of an unknown distribution.
In this example we make the same assumptions we made in the example of set estimation of the variance entitled Normal IID samples - Known mean. The reader is strongly advised to read that example before reading this one.
The sample is made of $n$ independent draws from a normal distribution having known mean $\mu$ and unknown variance $\sigma^2$. Specifically, we observe the realizations $x_1$, ..., $x_n$ of $n$ independent random variables $X_1$, ..., $X_n$, all having a normal distribution with known mean $\mu$ and unknown variance $\sigma^2$. The sample is the $n$-dimensional vector $x = (x_1, \ldots, x_n)$, which is a realization of the random vector $X = (X_1, \ldots, X_n)$.
The test statistic is $$\chi^2_n = \frac{n \widehat{\sigma}^2}{\sigma_0^2}, \qquad \widehat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2,$$ where $\sigma_0^2$ is the value of the variance under the null hypothesis $H_0: \sigma^2 = \sigma_0^2$. This test statistic is often called the Chi-square statistic (also written as $\chi^2$-statistic), and a test of hypothesis based on this statistic is called a Chi-square test (also written as $\chi^2$-test).
Let $z_1$ and $z_2$ be two constants with $0 < z_1 < z_2$. We reject the null hypothesis if $\chi^2_n < z_1$ or if $\chi^2_n > z_2$. In other words, the critical region is $$C = [0, z_1) \cup (z_2, \infty).$$ Thus, the critical values of the test are $z_1$ and $z_2$.
The power function of the test is $$\pi(\sigma^2) = 1 - P\left(\frac{\sigma_0^2}{\sigma^2} z_1 \le \kappa_n \le \frac{\sigma_0^2}{\sigma^2} z_2 \,;\, \sigma^2\right),$$ where $\kappa_n$ is a Chi-square random variable with $n$ degrees of freedom and the notation $P(\cdot\,; \sigma^2)$ is used to indicate the fact that the probability of rejecting the null hypothesis is computed under the hypothesis that the true variance is equal to $\sigma^2$.
The power function can be written as $$\pi(\sigma^2) = 1 - P\left(z_1 \le \chi^2_n \le z_2 ; \sigma^2\right) = 1 - P\left(\frac{\sigma_0^2}{\sigma^2} z_1 \le \kappa_n \le \frac{\sigma_0^2}{\sigma^2} z_2\right),$$ where we have defined $$\kappa_n = \frac{n \widehat{\sigma}^2}{\sigma^2}.$$ As demonstrated in the lecture entitled Point estimation of the variance, the estimator $\widehat{\sigma}^2$ has a Gamma distribution with parameters $n$ and $\sigma^2$, given the assumptions on the sample we made above. Multiplying a Gamma random variable with parameters $n$ and $\sigma^2$ by $n/\sigma^2$, one obtains a Chi-square random variable with $n$ degrees of freedom. Therefore, the variable $\kappa_n$ has a Chi-square distribution with $n$ degrees of freedom.
When evaluated at the point $\sigma^2 = \sigma_0^2$, the power function is equal to the probability of committing a Type I error, i.e., the probability of rejecting the null hypothesis when the null hypothesis is true. This probability is called the size of the test and it is equal to $$\pi(\sigma_0^2) = 1 - P\left(z_1 \le \kappa_n \le z_2\right),$$ where $\kappa_n$ is a Chi-square random variable with $n$ degrees of freedom (this is trivially obtained by substituting $\sigma^2$ with $\sigma_0^2$ in the formula for the power function found above).
This example is similar to the previous one. The only difference is that we now relax the assumption that the mean of the distribution is known. The test statistic becomes $$\chi^2_n = \frac{n S^2_n}{\sigma_0^2}, \qquad S^2_n = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{x}_n)^2,$$ where $S^2_n$ is the unadjusted sample variance, and the critical region is again defined by two critical values $z_1 < z_2$.
The power function of the test is $$\pi(\sigma^2) = 1 - P\left(\frac{\sigma_0^2}{\sigma^2} z_1 \le \kappa_{n-1} \le \frac{\sigma_0^2}{\sigma^2} z_2 \,;\, \sigma^2\right),$$ where the notation $P(\cdot\,; \sigma^2)$ is used to indicate the fact that the probability of rejecting the null hypothesis is computed under the hypothesis that the true variance is equal to $\sigma^2$, and $\kappa_{n-1}$ has a Chi-square distribution with $n-1$ degrees of freedom.
The power function can be written as $$\pi(\sigma^2) = 1 - P\left(z_1 \le \chi^2_n \le z_2 ; \sigma^2\right) = 1 - P\left(\frac{\sigma_0^2}{\sigma^2} z_1 \le \kappa_{n-1} \le \frac{\sigma_0^2}{\sigma^2} z_2\right),$$ where we have defined $$\kappa_{n-1} = \frac{n S^2_n}{\sigma^2}.$$ Given the assumptions on the sample we made above, the unadjusted sample variance $S^2_n$ has a Gamma distribution with parameters $n-1$ and $\frac{n-1}{n}\sigma^2$ (see Point estimation of the variance), so that the random variable $\kappa_{n-1}$ has a Chi-square distribution with $n-1$ degrees of freedom.
The size of the test is equal to $$\pi(\sigma_0^2) = 1 - P\left(z_1 \le \kappa_{n-1} \le z_2\right),$$ where $\kappa_{n-1}$ has a Chi-square distribution with $n-1$ degrees of freedom (this is trivially obtained by substituting $\sigma^2$ with $\sigma_0^2$ in the formula for the power function found above).
Denote by $F(x)$ the distribution function of a Chi-square random variable with $n-1$ degrees of freedom. Suppose you observe $n$ independent realizations of a normal random variable. What is the probability, expressed in terms of $F$, that you will commit a Type I error if you run a Chi-square test of the null hypothesis that the variance is equal to $\sigma_0^2$, based on the $n$ observed realizations, and choosing $z_1$ and $z_2$ as the critical values?
Make the same assumptions of the previous exercise and denote by $F^{-1}(x)$ the inverse of $F(x)$. Change the critical value $z_2$ in such a way that the size of the test becomes exactly equal to a desired level $\alpha$.
Make the same assumptions of Exercise 1 above. If the unadjusted sample variance is equal to 0.9, is the null hypothesis rejected?
In order to carry out the test, we need to compute the test statistic $$\chi^2_n = \frac{n S^2_n}{\sigma_0^2},$$ where $n$ is the sample size, $\sigma_0^2$ is the value of the variance under the null hypothesis, and $S^2_n$ is the unadjusted sample variance.
Thus, the value of the test statistic is $$\chi^2_n = \frac{0.9\, n}{\sigma_0^2}.$$ Since $z_1 \le \chi^2_n$ and $\chi^2_n \le z_2$, we have that $\chi^2_n \in [z_1, z_2]$. In other words, the test statistic does not exceed the critical values of the test. As a consequence, the null hypothesis is not rejected. |
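To make the test mechanics concrete, the following Python sketch runs the unknown-mean Chi-square test of Exercise 3; the sample size n = 10, the null value sigma0^2 = 1, and the 5% size are illustrative assumptions, not values from the lecture:

```python
# A minimal sketch of the Chi-square variance test (unknown-mean case),
# with assumed values: n = 10 observations, H0: sigma^2 = 1, size 5%.
from scipy.stats import chi2

n, sigma0_sq, alpha = 10, 1.0, 0.05
df = n - 1                                # degrees of freedom

# Equal-tailed critical values z1 < z2.
z1 = chi2.ppf(alpha / 2, df)
z2 = chi2.ppf(1 - alpha / 2, df)

def power(true_var):
    # pi(sigma^2) = 1 - P(z1*s0/s <= kappa <= z2*s0/s), kappa ~ Chi2(n-1).
    scale = sigma0_sq / true_var
    return 1 - (chi2.cdf(z2 * scale, df) - chi2.cdf(z1 * scale, df))

stat = n * 0.9 / sigma0_sq                # unadjusted sample variance = 0.9
print(f"statistic {stat:.2f}, critical values [{z1:.2f}, {z2:.2f}]")
print("reject H0" if stat < z1 or stat > z2 else "do not reject H0")
print(f"size = power at sigma0^2: {power(sigma0_sq):.4f}")   # equals alpha
```

With these assumed numbers the statistic is 9.00 against critical values of roughly [2.70, 19.02], so the null hypothesis is not rejected, in line with the solution above.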
0.955841 | F.J. Blaauw, R. Overbeek, T. Albers, J. Vlek, M. Maessen, J. Gooijer, E. Lazovik, F. Arbab, A. Lazovik.
Modern data analysis platforms all too often rely on the assumption that the application and underlying data flow are static. That is, such platforms generally do not implement the capabilities to update individual components of running pipelines without restarting the pipeline, and they rely on data sources to remain unchanged while they are being used. However, in reality these assumptions do not hold: data scientists come up with new methods to analyze data all the time, and data sources are almost by definition dynamic. Companies performing data science analyses either need to accept that their pipeline goes down during an update, or run a duplicate setup of their often costly infrastructure to keep the pipeline operating.
In this research we present the Evolutionary Changes in Data Analysis (ECiDA) platform, with which we show how evolution and data science can go hand in hand. ECiDA aims to bridge the gap that is present between engineers that build large scale computation platforms on the one hand, and data scientists that perform analyses on large quantities of data on the other, while making change a first-class citizen. ECiDA allows data scientists to build their data science pipelines on scalable infrastructures, and make changes to them while they remain up and running. Such changes can range from parameter changes in individual pipeline components to general changes in network topology. Changes may also be initiated by an ECiDA pipeline itself as part of a diagnostic response: for instance, it may dynamically replace a data source that has become unavailable with one that is available. To make sure the platform remains in a consistent state while performing these updates, ECiDA uses a set of automatic formal verification methods, such as constraint programming and AI planning, to transparently check the validity of updates and prevent undesired behavior.
In earlier work, we showed that an initial implementation of ECiDA on top of the Apache Spark ecosystem performed well and introduced an acceptable amount of overhead to the data pipeline [@Lazovik2016; @Albers2018]. The platform is built in collaboration with a large Dutch water company and is developed with their use cases in mind. ECiDA will, for example, be used to (i) improve water distribution monitoring and automation, (ii) enable the prediction of water quality, and (iii) determine the structural reliability of pipes in order to perform predictive maintenance. These use cases emphasize different aspects and a variety of issues that might arise in a practical setting, and ensure ECiDA is built as a generic data science solution, applicable to any data science project. |
0.993965 | Failure to change often leads to failure in the process. Even though the tried and true offers peace of mind and avoids problems, when it comes to CRM, it is crucial that switches or updates be made from time to time. There are various reasons this may be considered important, among them the need to keep up with the firm's growth or to improve the present CRM system.
Following are some suggestions to help make the transition between CRM systems a bit easier.
1. Ask questions to clarify the current situation and forecast the new one. Take the time to evaluate the current situation and see how it is working. Think about what aspects may need to be changed, and how they will help to improve the system afterward. Set goals that the new system will have to meet, and try to figure out how much it will cost and what the various implications may be. You may find that it is better to discuss the matter with your current service provider, as it may be more feasible to retain the same one and make amendments to your current system than to start from scratch.
2. Study the gaps and establish what you need. Make sure to evaluate carefully the gaps between what your current system provides and what the new one should incorporate. This will help you set out the new objectives and features. Take the time to consider the people or departments in your firm that such changes will impact. Set up meetings to address these issues with them.
3. Work on user acceptance. Change may be regarded as difficult and frightening. You need to ensure that system users will be helped to accept the transition and see the benefits that will arise from it. Resistance to change is common, but you need to learn to tackle it properly. Find some people who are open to change, and use their enthusiasm to affect others positively. They could also help you source potential CRM providers, and they will play a critical role in the endorsement of the new system's potential.
4. Offer step-by-step instruction. Users will have gotten used to the present CRM system, and they will have developed their own habits while using it. Keep this in mind when customizing the new system to make the transition process a bit easier for them to get used to.
Recent technologies help employees get used to new systems by breaking tasks down into short, step-by-step actions. These help employees avoid errors through even the most complex processes. This removes the barriers to entry for employees used to other CRM systems by easing employees' agony over changes in their daily routine, and decreases the learning curve.
5. Test and focus. For a successful CRM system transition, communication is imperative. Besides testing the new system, you should try to focus on the actual changes, and how they have been introduced within the new system.
At face value, transitioning to a new or updated CRM system may seem like a technical and highly complicated process. But if you focus on the people and are prepared to face their reactions and expectations properly, you will be able to make the transition process an absolute success.
Omri Erel is the marketing director at WalkMe, an interactive Web site guidance system. He is also the lead author and editor of the blog Saas Addict.
|
0.992479 | If you’re looking for a cheap TV 42 inches or larger, there are a lot of advantages to buying a plasma instead of an LCD.
The biggest is picture quality: an LCD is typically not able to reproduce the black levels and contrast of an equivalently priced plasma, and plasma always trounces LCD for viewing angle and uniformity. Entry-level 720p plasmas are also more energy-efficient than more expensive 1080p plasmas, and while they use a lot more power than LCDs, they still only cost about $20 to $30 per year to run. |
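The running-cost claim is simple arithmetic: annual cost = watts x hours per day x 365 / 1000 x electricity rate. A quick sketch with assumed (not measured) numbers:

```python
# Illustrative running-cost arithmetic; the wattage, viewing hours, and
# electricity rate below are assumptions, not figures from this article.
watts, hours_per_day, usd_per_kwh = 160, 4, 0.11
kwh_per_year = watts * hours_per_day * 365 / 1000    # ~234 kWh
print(f"~${kwh_per_year * usd_per_kwh:.0f} per year")  # ~$26, in the quoted range
```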
0.943377 | The Wiener index of a molecular graph, which is defined as the sum of distances between all pairs of vertices of the graph, is a distance-based graph invariant used as a structure descriptor for predicting physicochemical properties of organic compounds. The Wiener index of the polyhex nanotorus was computed by Yousefi and Ashrafi (An exact expression for the Wiener index of a polyhex nanotorus, MATCH Commun. Math. Comput. Chem. 56, 169 (2006)). In this paper we introduce a new method, based on a mathematical model given by Cotfas (An alternate mathematical model for single-wall carbon nanotubes, J. Geom. Phys. 55, 123 (2005)), to compute the Wiener index of the polyhex nanotorus. |
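Since the Wiener index is just the sum of shortest-path distances over all unordered vertex pairs, it can be computed for any small molecular graph by breadth-first search from every vertex. The sketch below is illustrative only and is not the method introduced in the paper:

```python
# Wiener index W(G) = sum of d(u, v) over all unordered vertex pairs,
# computed by BFS from every vertex of an unweighted graph.
from collections import deque

def wiener_index(adj):
    total = 0
    for src in range(len(adj)):
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # standard BFS from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2                     # each pair was counted twice

# A single hexagon (one "hex" of a polyhex, i.e., the C6 ring): W = 27.
hexagon = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]
print(wiener_index(hexagon))  # 27
```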
0.99978 | The Math Section consists of arithmetic, algebra, geometry, and data analysis. To attain a good score on the GRE math section, you need to be familiar with the types of questions and math concepts that appear on the exam. There are four types of math questions: quantitative comparison, single-answer multiple choice, multiple choice with one or more answers, and numeric entry.
Arithmetic: integers, arithmetic operations, exponents & roots; & concepts such as estimation, percent, ratio, rate, absolute value, the number line, decimal representation & sequences of numbers.
Algebra: Operations with exponents; factoring & simplifying algebraic expressions; relations, functions, equations and inequalities; solving linear and quadratic equations & inequalities; solving simultaneous equations & inequalities; & coordinate geometry.
Geometry: Parallel & perpendicular lines, circles, triangles, quadrilaterals, other polygons, congruent & similar figures, three-dimensional figures, area, perimeter, volume, the Pythagorean Theorem & angle measurement in degrees.
Data Analysis: basic descriptive statistics, such as mean, median, mode, range, standard deviation, interquartile range, quartiles & percentiles; interpretation of data in tables and graphs; elementary probability; permutations & Venn diagrams. |
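As a quick illustration of the descriptive statistics listed above, Python's standard library covers most of them directly (the sample data here is made up):

```python
import statistics as st

data = [4, 8, 6, 5, 3, 8, 9, 7, 8, 6]                 # made-up sample
print(st.mean(data), st.median(data), st.mode(data))  # 6.4 6.5 8
print(max(data) - min(data))                          # range: 6
print(round(st.pstdev(data), 3))                      # population std deviation
q1, _, q3 = st.quantiles(data, n=4)                   # quartiles
print(q3 - q1)                                        # interquartile range
```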
0.963179 | Will you try to open that mysterious door, or would you rather see how far you can throw a speaker set out the window?
Human: Fall Flat is most certainly a puzzle platformer. There are lots of puzzles, and lots of platforms to jump between, over, and shimmy around. It doesn't, however, control like any puzzle platform game I've ever encountered. Forget the precision of the recently released Super Mario Odyssey; this takes you in the opposite direction, fighting the controls for minimal gains. What seemed at first to be a clumsy, stumbling, drunken buffoon of a character slowly develops a skill set that requires practice, patience, and most certainly persistence. You see, the awkward movement of your character is one of the challenges to overcome. It deliberately holds you back from seemingly easy everyday tasks so that you feel truly accomplished at completing a difficult section of a level, and revel in the hilarity that often occurs at simple lapses of concentration.
Controlling Bob is not easy. The Left Analogue Stick moves him around the 3D environment in a shambling, half-drunken zombie gait. Slow and steady is certainly Bob's thing. The Right Analogue Stick is used for camera control, and vertical orientation can be inverted. The camera is responsive and never gets stuck on scenery, but has a tendency to zoom through your character when in enclosed spaces or near a wall, making it hard to know what's going on, or what you're doing. Due to the requirement of two Analogue Sticks, when playing in 2-player mode the option of single Joy-Con use is not available; full controller set-ups must be used. Bob jumps with a press of A (a rather short and low jump, though) and plays dead, collapsing on the spot, with a press of X; not useful, but always hilarious at the right/wrong time.
His true manipulation of the environment comes from the use of his hands. Pressing and holding ZL extends Bob's left arm, with ZR doing the same for his right. The end of each arm houses a sticky hand. Touch anything and Bob grabs hold of it until you release the ZR or ZL button. By doing this and moving the camera (known as Bob's head) around, you'll see that control of the height and stretch of his arms is possible. Look up and Bob raises his arms; look down and he bends over and stretches to retrieve something from the floor. One of the biggest challenges to overcome, therefore, is distance. Judging the correct distance between you, your hard-to-control appendages, and the object you need to interact with is fiddly at best.
As you progress through the game a lot of well-timed jumping is required, and you may ask how this is possible with a deliberately awkward set-up meant to confound the player. Well, to jump to anything just above head height, first raise both your arms by holding the camera to look up and pressing ZL and ZR, then jump into the platform with a short run and a press of A. The sticky hands will catch hold of the ledge; don't let go now. Next, tilt your head to look down, thereby moving your hands down and pulling your body up. At just the right time, when the 'weight' of your body is over the lip of the platform, let go with your hands (ZL and ZR) and push forward (Left Analogue Stick) to walk up to safety. Seems complicated for a little jump? Well, yes, and that's the point of the whole game. The first time you perform the move, with so many button presses, holding this, letting go there, it all seems confusing, and you'll mess up and drop. What Human: Fall Flat does well is give you plenty of practice to master that skill before moving you on to the next. The jumping mechanic is tutorialised by making your way up a mountain. One mistake will probably mean falling all the way to the bottom and restarting another attempt at the climb. Perseverance is an absolute requisite for the player of Human: Fall Flat, and with it such actions become second nature to pull off. Just as you've become familiar with this moveset, Human: Fall Flat moves on to teaching you another skill. And there are many, from climbing, to shimmying, to lever pulling, button pressing, rope swinging, hooking/unhooking, death sliding, throwing, momentum flinging, floating, rowing, barrel rolling, catapult loading and firing, unlocking various door styles, building objects that will suffice to allow progress, etc. And the environment of each level is such that, with clever thinking or just dumb luck, there's always more than one way to skin a cat.
The levels within Human: Fall Flat are large open areas allowing you to explore at your leisure what is required of you to make progress to the next section. Each level presents itself as a playground for the player to focus on progress, or as an experimentation laboratory. I could move the wrecking ball to knock down a wall allowing access to the next interlinked area of the level, or I could jump into a large refuse bin and get my friend to push it off the side of the tall, half-demolished, five-storey building. And that's where Human: Fall Flat excels: when two players are enjoying their time together. Playing alone can feel much like playing a regular puzzle platform game. Add the second player, and the ability to help each other out, scupper the other person's best-laid plans, or just piss around and laugh at being catapulted across a river for no other reason than it's fun, is where Human: Fall Flat's longevity lies. There were also plenty of instances where I felt I had missed a chunk of a level because I'd found my own way around, in a way that was not intended. And this lends the game an air of replayability, to check out areas that you may somehow have skipped through. Places such as mansions, mountains, demolition sites, castles, and waterways can all be traversed at your leisure or with more of a goal-based focus.
Human: Fall Flat has a solid feel to its low-poly visuals. Textures are flat colours, but the use of shadow gives everything the depth of solidity required to feel part of the landscape. Draw distance is as far as the eye can see, for the most part, only fogging in the details of the most far-removed structures. Yes, visually simplistic, but Human: Fall Flat excels with its physics. Everything behaves as you would imagine it to in the real world. Place a cylinder on the floor, with a long plank over it, and you have a seesaw. Get player 2 to drop an anvil onto the other end, and the propelling hilarity never ceases to amuse. Ropes can be used to cross crevasses by swinging in a coordinated fashion, individual bricks of a wall fall down after being smashed by a huge wrecking ball, and larger, heavier objects are more difficult to move than their lighter, smaller counterparts. The only thing that seems wonky in this world is your interface of interaction with it: Bob.
That's not to say everything is perfect, far from it. Human: Fall Flat suffers from performance issues. Whether docked or undocked, one or two players, the performance issues were all too present. The most vivid were the freeze frames: the game would freeze for over a second at a time before resuming its course. The game also has a dropped frame rate that causes stuttering in places where the areas tend to be larger. It's far from game-breaking, but it is very evident and jarring, and occurs regularly enough to be off-putting in some of the trickier sections. However, the general gameplay is so slow that the stuttering frame rate doesn't really get in the way. It resolves itself quickly, within a few seconds, and normal play is resumed, until the next stuttering fit or freeze.
The music is lovely. Orchestral violin and clarinet concertos reinforce the dream-like visual scope of Bob's dreams. Sound effects are mostly the thudding, bumping, clanking noises of Bob's misuse of the environmental objects at hand, and add to the level of immersion and hilarity, slap-stick style. Nothing like turning around with an oar and 'accidentally' clouting player 2 in the back of the head, with the appropriate comedy thud, to propel them over the mountain that they'll have to climb back up again. Which reminds me of the save states: if only one of the two players makes it to a checkpoint, then upon death both players can spawn at this new location. It makes the 2-player game a little easier to navigate than a solo attempt. Why build a shipping container staircase when you can just get player 2 to jump onto the dock-crane and hoist them over, then jump into the salty brine and spawn at the newly created marker?
Players can customise their character from the Main Menu with skin colour changes and different items of clothes for the three main body parts. All clothes can be colour-changed to suit your dress sense, and outfits from police uniforms and casuals to construction workers and soldiers can be chosen. Or, to hell with that, just walk around naked.
The game had me fluctuating between, at first, thinking that I'd hate it because of the broken mechanics, to loving it for the very same reason, to screaming in frustration as an oar gets caught on some scenery and I'm pushed back into the water to drown, for the 15th time. Replaying difficult sections is a requirement in this game.
So Human: Fall Flat excels in its attempt at a physics-based puzzle platformer, and this is especially true when playing in local co-op mode with another player of a like mind. The game struggles to maintain the player's interest in a solo playthrough once the main progression/story is completed. Some of the puzzles are truly great; some can be approached and solved from a variety of angles; others still require a linear solution that can be tediously repetitive or, at worst, downright frustrating. A great game held slightly back by frustrating elements and a stuttering framerate no matter how you play it.
A great puzzle platformer with two players present to test the physics engine to breaking point in hilarious hi-jinks; it struggles to maintain interest beyond the main progression in one-player mode, and is plagued by non-game-breaking frame freezes and stuttering throughout. |
0.957307 | Artificial intelligence cyberattacks are coming -- but what does that mean?
This doesn’t mean robots will be marching down Main Street. Rather, artificial intelligence will make existing cyberattack efforts -- things like identity theft, denial-of-service attacks and password cracking -- more powerful and more efficient. This is dangerous enough -- this type of hacking can steal money, cause emotional harm and even injure or kill people. Larger attacks can cut power to hundreds of thousands of people, shut down hospitals and even affect national security.
As a scholar who has studied AI decision-making, I can tell you that interpreting human actions is still difficult for AI and that humans don’t really trust AI systems to make major decisions. So, unlike in the movies, the capabilities AI could bring to cyberattacks -- and cyberdefense -- are not likely to immediately involve computers choosing targets and attacking them on their own. People will still have to create attack AI systems and launch them at particular targets. But nevertheless, adding AI to today’s cybercrime and cybersecurity world will escalate what is already a rapidly changing arms race between attackers and defenders.
Beyond computers’ lack of need for food and sleep -- needs that limit human hackers’ efforts, even when they work in teams -- automation can make complex attacks much faster and more effective.
AI, however, could help human cybercriminals customize attacks. Spearphishing attacks, for instance, require attackers to have personal information about prospective targets, details like where they bank or what medical insurance company they use. AI systems can help gather, organize and process large databases to connect identifying information, making this type of attack easier and faster to carry out. That reduced workload may drive thieves to launch lots of smaller attacks that go unnoticed for a long period of time -- if detected at all -- due to their more limited impact.
AI-enabled attackers will also be much faster to react when they encounter resistance, or when cybersecurity experts fix weaknesses that had previously allowed entry by unauthorized users. The AI may be able to exploit another vulnerability or start scanning for new ways into the system – without waiting for human instructions.
This could mean that human responders and defenders find themselves unable to keep up with the speed of incoming attacks. It may result in a programming and technological arms race, with defenders developing AI assistants to identify and protect against attacks -- or perhaps even AI with retaliatory attack capabilities.
Operating autonomously could lead AI systems to attack a system they shouldn't, or cause unexpected damage. For example, software started by an attacker intending only to steal money might decide to target a hospital computer in a way that causes human injury or death. The potential for unmanned aerial vehicles to operate autonomously has raised similar questions about the need for humans to make the decisions about targets.
Jeremy Straub is an assistant professor of computer science at North Dakota State University and associate director of the NDSU Institute for Cyber Security Education and Research. |
0.999627 | After the chapter of the Yom Kippur/Ramadan War was closed by the cease-fire on the Golan front in April 1974, there followed a lengthy lull in the seemingly constant war between Israel and Syria. That lasted until 1979-1980, when a new series of skirmishes developed as the SyAAF tried to interfere with frequent Israeli reconnaissance and bombing missions against Palestine Liberation Organisation (PLO) positions in Lebanon. The SyAAF was relatively slow to introduce the MiG-23MS during the fighting over Lebanon, choosing instead to dispatch MiG-21s. However, this changed as their losses started to mount, and soon the Syrian Ground Controlled Interception (GCI) service was searching for a suitable target against which it could re-introduce the MiG-23MS to combat.
The first such event occurred on the afternoon of April 26, 1981, when an Israeli formation bombed PLO positions in the southern Lebanese city of Sidon. Two MiG-23MSs on a low orbit over northern Lebanon were vectored to intercept, and successfully shot down two Douglas A-4 Skyhawks. As a result of this and several other clashes, the situation over Lebanon became particularly tense, but for the time being the Israelis were busy preparing their operation against the Iraqi nuclear reactor at Tuweitha, which was flown in June 1981. The next opportunity for the Syrian MiG-23MS to engage Israeli fighters came after the Israelis invaded southern Lebanon with Operation PEACE FOR GALILEE, initiated on June 6, 1982.
Initially, the Israelis tried to avoid engaging the Syrians, the IDF/AF concentrating on supporting ground troops on their drive towards Beirut. However, the SyAAF was clearly not going to sit still and let Israeli armoured formations threaten to outflank Syrian positions in the Bekaa Valley, or allow IDF/AF reconnaissance operations to get a clear picture of the Syrian SAM positions. Very soon after the Israeli operations began, the first Syrian interceptors appeared above Lebanon. While MiG-23MFs had successfully shot down an Israeli BQM-34 recce drone and evaded a section of four Israeli McDonnell Douglas F-15 Eagles which fired numerous Sparrow air-to-air missiles at them on June 6, and also claimed to have shot down a General Dynamics F-16 Fighting Falcon on the following day, the MiG-23MS interceptors were kept back and did not initially take part in any fighting.
Three days into the Israeli invasion of Lebanon, the situation changed completely, as a clash with Syrian troops deployed in the Bekaa Valley and around Beirut became unavoidable. In order to establish air superiority over the battlefield, on the afternoon of June 9, 1982, starting at 14:14, the IDF/AF executed the well-known operation against SAM sites in eastern Lebanon, deploying 26 F-4Es to attack Syrian radars with AGM-78 Standard ARM/Purple Fist and AGM-45 Shrike anti-radar missiles. Nineteen radar sites were claimed as destroyed or neutralised in the first wave.
In the following battles, caused by the appearance of the second Israeli wave, comprising 92 A-4 Skyhawks, F-4E Phantoms, and IAI Kfirs, escorted by F-15s and F-16s, both the Syrian SAM stations and the no fewer than 54 Syrian MiG-21 and MiG-23 interceptors sent to stop them were left 'blind'. Without their radars inside Lebanon, the Syrians were compelled to guide their fighters using long-range systems positioned inside Syria, but hindered by the mountain ridges in between. Even these were jammed by the Israelis, just like the communications between Syrian pilots and their GCI stations, while, guided by Grumman E-2C Hawkeyes, Israeli interceptors waited in ambush at low level between the Lebanese hills. In the ensuing battle, several Syrian MiG-21 squadrons were mauled. Syrian MiG-23MS pilots played only a secondary role, claiming just one Israeli F-4E Phantom shot down by R-3S missiles fired by two Floggers, while two of the Floggers were themselves shot down, with the loss of one pilot, Lt Sofi. In contrast, Syrian MiG-23MF pilots claimed three kills for three losses, with all pilots ejecting safely. "...THE AIR-TO-AIR BATTLES FOUGHT OVER LEBANON WERE SOME OF THE LARGEST EVER INVOLVING JET FIGHTERS..."
We were continuously pushed into pursuing the enemy by the ground control, even when we were not in the best position. The enemy used this to their advantage and set up numerous ambushes in which some fighters would drag us into the shooting zone of the others. When we closed to within 10-15 km of the enemy, our radars would go black and we would lose all means of detecting them. The heavy jamming wasn't concentrated on our radars alone, but also on our communications with ground control.
Still, there were ways to counter that situation. One was for many formations to ingress simultaneously, or in waves, one closely behind the other. This way the later waves would still have the ability to use their radars and fire at the enemy while they were busy engaging the first wave. This tactic, however, proved very expensive, and always led to losses on our side.
Many of our pilots were not experienced; they always obeyed every order from the GCI, and this led many of them to their deaths. I followed the advice of an older pilot not to always do what I'm told to do, and this saved me. I used a tactic which depended on making the enemy angry. I would close at high speed, but before entering the range of their Sparrows, I'd turn away and then do that again and again, until they would start to fire their missiles even outside the maximum range. I once evaded four Sparrows this way. Only then would I try to close into the range of my own missiles, usually causing them to turn away and try to evade. That way my mission was done and my bombers were safe to attack.
During my last missions, I developed my tactics a little further. I managed twice to lure enemy F-15s into SAM ambushes. The first time they were not hit, but the second time an Eagle got hit and I was told it was shot down. I got many praises for that.
On June 11, the SyAAF changed its tactics once again, dispatching two huge formations - each consisting of a squadron-worth of fighter-bombers escorted by another squadron of interceptors. Several times, MiG-25s were also deployed at high speeds and levels, decoying the Israelis away from the strikers following at low level. At the very least, this changed the situation in that the Syrian interceptors kept the Israeli fighters busy, and even if the first attack wave had to abort its mission, or suffered losses to the Israelis, the second wave following closely behind would usually be able to take advantage of the complete chaos.
Apparently, this tactic enabled at least two larger Sukhoi Su-22 Fitter formations to break through and hit an Israeli MIM-23 Hawk SAM site, as well as cause extensive damage to one of the armoured brigades battling the Syrian 3rd Armoured Division near the Beirut-Damascus road. During a melee in which F-15s and F-16s of the IDF/AF and Israeli SAMs claimed between five and seven Su-22s shot down, two MiG-23MS pilots apparently used the chaos to break away and surprise an ingressing Israeli formation. Captain Abdul Wahhab al-Kherat and one of the pilots from the al-Zoa'by family claimed one F-4E each as shot down using R-3S missiles. According to Syrian sources, both pilots were subsequently shot down by Israeli F-15s, but they ejected safely and walked back to Syrian positions.
Without dispute, we made many mistakes in 1982, and many of our younger and less experienced pilots paid for these with their lives. But the Israelis were never in full control of the skies over Lebanon, and many Syrian pilots managed to dictate the rules of the battle. Heavy jamming and good planning applied by the other side caused us many problems, but the SyAAF was neither completely destroyed, nor neutralised, and it remained active right until the ceasefire at noon on June 11. |
0.96364 | How do I determine the correct size for a student instrument?
• Violin/Viola: Place the instrument under the chin in playing position. The instrument is the proper size if the palm and fingers of the performer's left hand, with arm extended but elbow relaxed, can comfortably cup the scroll.
• Cello: Seat the student so that the knees are bent at a 90-degree angle. The upper rim of the instrument should rest on the sternum (breast bone) and the left knee should contact the curve below the lower bout corner. The neck of the cello should be a few inches from the performer's shoulder, and the C string peg should be near the left ear. The student's left hand should be able to reach both ends of the fingerboard with ease, and the first and fourth fingers of the left hand should be able to span a major third in 1st position (E to G# on the D string).
• Bass: With the student standing behind the bass in playing position, the fingerboard nut should be opposite the forehead near eye level. The right hand should be able to comfortably draw the bow from the frog to the tip. The first and fourth fingers of the left hand should be able to span a whole tone in 1st position (E to F# on the D string).
The chart below is a general guide based on the age of the student.
*Subject to developmental variations among children of the same age.
How do I care for my Becker instrument?
• Keep the instrument clean. Wipe the body of the instrument, fingerboard and strings with a soft cloth after use. Do not clean the instrument with alcohol or water as these can cause damage to the wood and varnish.
• Do not subject the instrument to sudden changes in temperature or humidity, or to prolonged sunlight. Do not leave it in a car for any length of time and keep it away from heaters and air conditioning vents.
• Let the instrument have time to adjust to changes in temperature or humidity before opening the case.
• Handle the instrument by the neck and chinrest to minimize wear on the varnish.
• When tuning the instrument, gently twist the peg inward toward the pegbox to ensure good contact with the peg hole.
• Never overtighten a string to stretch it. Tighten it to pitch and no higher.
• Since the top of the bridge has a tendency to pull forward when strings are tightened, check that the back of the bridge remains perpendicular to the top of the instrument and that the bridge feet remain flush against the instrument.
• Use a soft cloth between the instrument and player to help protect the varnish from perspiration. Be aware that buttons and jewelry can cause damage to the instrument.
• Do not loosen the strings after playing.
• Remove the shoulder rest or shoulder pad from a violin or viola before placing it back in the case.
• Do not hang an instrument from a music stand or leave it on a chair. Return it to its case or a stringed instrument stand.
• Tighten the chin rest to the instrument with just enough pressure to hold it firmly in place. Too much pressure can cause damage to the instrument.
• Individual instrument humidifiers may be useful in areas of low humidity or during winter months. Use according to directions.
• Check each string adjuster under the tailpiece. Over time the adjuster screw may be turned in as far as it will go, which can cause contact with, and damage to, the violin top. Before this occurs, turn the adjustment screw counter-clockwise to a safe level and re-tune the string with the peg.
• Occasionally check the shoulder rest feet to make sure that the rubber tubing has not worn through, which can damage the instrument.
• Periodically check the edges of cellos and basses. Rough edges can splinter when caught on clothing or carpets, causing increased damage to the instrument.
• Strings can be damaged if the grooves in the nut and bridge are not lubricated. To lubricate, use a very soft lead pencil.
• When replacing strings, remove and replace only one string at a time. This will keep pressure on the top of the instrument to prevent the soundpost from falling and will also keep the bridge in the proper position.
• The soundpost is held in place by string pressure. It is never glued into the instrument. If the soundpost falls down, immediately loosen the strings and do not play the instrument. Otherwise, the pressure of the strings could collapse the unsupported top. Take the instrument to a repair shop to have the soundpost refitted.
• If a crack develops or a seam opens up, keep the area clean and take the instrument to a qualified repairperson for repair.
• Do not attempt to adjust, repair or glue an instrument yourself. Take it to a repairperson for periodic check-ups and adjustments to avoid more costly repairs later on.
How do I care for my bow?
• Always hold the bow by the frog. Avoid touching the bow hair with the fingers since the natural oils from the skin prevent the bow hair from holding the rosin.
• Tighten the bow hair to a moderate tension prior to playing so that the curve of the stick remains concave. If you cannot get enough tension by adjusting the screw, the bow hair may need shortening or rehairing.
• Without rosin, the bow will not produce a sound from the instrument. Apply rosin evenly by drawing the bow hair over the rosin in smooth even strokes. Avoid the tendency to over-rosin the bow since too much rosin produces a harsh, coarse tone. It is not necessary to rosin the bow each time the instrument is played.
• Do not attempt to remove excess rosin by striking the bow hair against a hard object or swishing it in the air.
• Always loosen the bow hair after playing. This prevents stretching of the bow hair, reduces warping and helps the stick retain its elasticity.
• Clean the bow stick with a soft clean cloth after each use.
• Some insects and mites are attracted to bow hair. Keep the bow and case off the floor, especially in carpeted areas or closets.
• When the bow hair becomes uneven due to broken hairs, the bow is more susceptible to warping. Take it to a repairperson to be rehaired.
• Periodically check the leather thumb grip near the frog and replace as necessary to prevent eroding the stick underneath.
How do I correct slipping or sticking pegs?
In time, normal use will cause the pegs and peg holes to wear. For slipping pegs, apply chalk to the areas of contact between the peg shaft and peg hole to provide more grip. For pegs that stick or are hard to turn, peg dope or soft graphite pencil lead can be applied to the contact areas. Eventually you may need to have replacement pegs fitted by a repairperson.
What if my bridge becomes displaced?
• The lowest side of the bridge fits under the E string on the violin, the A string on the viola and cello, and the G string on the bass.
• With the lowest side of the bridge under the correct string, center the feet of the bridge between the inner notches of the f-holes.
• Position the bridge so that its back is perpendicular to the top of the instrument and the bridge feet fit flush.
What causes an instrument to buzz?
• Too much of the endpin retracted into the instrument on cellos and basses.
• Open seams or cracks in the instrument.
• Worn fingerboards that open up at the base of the neck.
An open seam can be located by holding the instrument by the neck and gently tapping it all around the top and back. Take the instrument to a qualified repairperson for repair. |
0.998893 | The EF 85mm f/1.2L II USM is so fast that... How fast is it? Shooting outdoors on a sunny day at ISO 100 with a Canon EOS 30D and the lens wide-open, the required exposure exceeded the camera's maximum shutter speed of 1/8000 sec. I had to stop down to f/1.6 to get proper exposure with the least possible depth of field. On a cloudy (really cloudy, not a "cloudy bright") day I was able to shoot wide-open at ISO 100 and get good exposures at 1/1600 sec, which produced tack-sharp images with a delightfully shallow depth of field.
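As a quick sanity check on those numbers: by the Sunny 16 rule (a rough exposure guideline; the arithmetic below is an illustration, not a measurement from this review), a sunlit scene at ISO 100 needs about 1/100 sec at f/16. Opening up from f/16 to f/1.2 gains
\[ \Delta EV = 2\log_2\!\left(\frac{16}{1.2}\right) \approx 7.5 \text{ stops} \]
so the predicted shutter speed is roughly \(1/100 \times 2^{-7.5} \approx 1/18{,}000\) sec - well beyond the 30D's 1/8000 sec ceiling, which is exactly why bright sun forces you to stop down, or to reach for a Neutral Density filter, as discussed below.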
Even at f/1.6, the EF 85mm f/1.2L II USM delivers good bokeh. This characteristic is generally considered to be a product of the aperture's shape (note the almost perfectly circular out-of-focus highlight) and spherical aberration that's inherently produced by a lens.
Bokeh is an optical buzzword derived from the Japanese word for "fool" (as in it's not nice to fool Mother Nature) and is used to describe the pleasing quality of an image's out-of-focus areas. A little more subjective than the Richter scale, most photographers know good bokeh when they see it, even if they don't know the term. At f/1.2, the EF 85mm f/1.2L II USM produces a pleasant bokeh.
For a while I stopped being a fan of Skylight, UV, or even protection filters, but putting a scuff mark on the front of my (expensive) EF 10-22mm zoom convinced me otherwise. Similarly, you'll want to invest in a high-quality 72mm Skylight (or whatever) filter to protect the front element of a $2000 lens like this one. While filter shopping you might also want to pick up a Neutral Density filter to let you use the lens at its widest aperture on sunny days. A lens hood is also a good idea, but while there's a nice pouch included in the box, the (ES-79II) lens hood is a $50 option.
To create this faux cyanotype I photographed Lorie using only the window light coming through my back door. (The cyanotype was invented by Sir John Herschel in 1842 and was the first successful non-silver photographic printing process. It's blue, hence the name.) Image was captured directly in monochrome using the Canon EOS 30D's blue toning capabilities. Exposure was 1/125 sec at f/2.8 at ISO 320. Camera was in Shutter Priority mode and deliberately underexposed by 1/3 stop to increase shadows and blue saturation.
To paraphrase Speed TV's Tom Hnatiw (www.dreamcargarage.com): Do you need a lens like this? If you are a professional photographer, the stunning image quality Canon's EF 85mm f/1.2L II USM delivers is what you want and your clients expect. If you shoot weddings and portraits, the ability to capture luminous low-light portraits gives you an edge in capturing that decisive moment, and can make the difference between a good shot and a great one. Do you want a lens like this? Oh yeah, but it's still heavy.
Max. Diameter x Length: 3.6x3.3"
For more information, contact Canon U.S.A., Inc., One Canon Plaza, Lake Success, NY 11042; (800) 652-2666, (516) 328-5000; www.canonusa.com. |
0.997236 | How much visibility do your project managers have into the costs and resources of new product development projects? Does your CFO have easy access to profit margins, budgets and real-time reporting?
While it’s common for manufacturers to use Microsoft Excel to track expenses, resource allocation and overall project progress, this process can overwhelm employees with unwieldy spreadsheets that always seem to be out of date. If you use Excel, your managers probably spend valuable time tracking down updates, analyzing if they’re best utilizing each employee and adjusting project timelines.
Project management and reporting can be much more efficient — especially by leveraging the capabilities of an enterprise resource planning (ERP) tool. Today we’re going to dive into one ERP solution — NetSuite — to discuss the financial and project management benefits it offers manufacturers, including improved visibility, decision making and savings.
One of the best features of NetSuite is that it’s built for use by multiple roles across a business. For example, your CEO and CFO can utilize the system to generate financial reports, while your project managers can set up projects and create tasks that individual users can work on.
Because NetSuite is cloud-based, it’s always updated with the latest information. This expands reporting capabilities, improves the accuracy of project details (compared to using multi-versioned Excel spreadsheets) and helps reduce excess time spent managing complex new development projects.
On the most basic level, a flexible project setup allows users to organize projects by a variety of options, such as tasks, milestones, end dates and start to finish. Project templates collect tasks into groups for easier organization and access, ultimately increasing efficiency and allowing employees to focus on product development — not tracking down updates or searching for the information they need to start their daily tasks.
On the reporting side, NetSuite is very robust and provides customized metrics and reports using real-time data. Its project dashboard automatically displays progress metrics such as work complete versus total work, work allocated and estimated work remaining, and it includes a Gantt Chart of progress for easy visualization. For those interested in the financials, NetSuite puts together a variety of reports, such as actual versus budget, costs calculated from employee labor rates and estimated profitability by project.
Generating up-to-date reports that improve the executive team’s decision-making capabilities is a huge benefit of using a cloud-based ERP. NetSuite also easily tracks resources, expenses and time, allowing managers to improve employee utilization and efficiency, as well as enhance visibility into project profitability. Overall, manufacturers can utilize NetSuite’s features to reduce project costs, increase the number of projects delivered on time and on budget, and improve the accuracy of their data.
From real-time analytics to cleaner, more standardized processes, NetSuite offers a strong argument for utilizing an ERP over manual processes and tools such as Excel. But manufacturers should carefully consider technology investments to help ensure a significant return on investment. Utilizing a third party familiar with both the technology and the manufacturing industry can be crucial to success.
At Wipfli, our first goal is to understand your business. By asking the right questions, we get to the root of your challenges and identify your needs. From there, our NetSuite-certified team members build the ERP system around your organization and its process needs in order to gain the best performance improvement.
Ready to improve your efficiencies and bottom line? Contact Wipfli to learn more about how NetSuite's project management capabilities can align with your business goals, and to get started with a complimentary NetSuite demonstration.
A venerable manufacturer receives restructured and improved IT operations thanks to a thorough assessment, astute planning, sweeping outsourced services, and mentoring that ultimately leads to less outsourcing and more in-house capabilities. |
0.922195 | A new parasitic worm has been found in Florida. Researchers said the parasite is called a rat lungworm, which lives in rats and snails in Florida and can cause infections in people's brains.
According to Live Science, the rat lungworm was previously seen in Southern Florida, and the study is the first one to show just how damaging it is to people across the state. The researchers said that the parasite, which can usually be found in the tropics, recently appeared in the continental United States, where it is expected to continue spreading.
Human cases of rat lungworm infection are not new. In fact, they have been known to occur in Hawaii for over 50 years, but the parasite did not make it to the continental United States until the mid-1980s. It first showed up in New Orleans, likely carried by rats aboard ships arriving from areas where the parasite was already established. Later on, it also appeared elsewhere in Louisiana and in Texas.
Due to their nature, the parasites have an alarming ability to thrive in areas outside their historical range. The parasite carries out its life cycle in rats, snails and slugs. People can be infected if they eat raw or undercooked snails and slugs, or if they eat food that has been contaminated. Angiostrongylus cantonensis, as the parasite is known, can infect the brain and cause meningitis in those who have been infected. Other symptoms include headaches, neck stiffness, nausea, vomiting and abnormal sensations in the limbs.
A new study published in PLOS One found that rat and snail samples from Alachua, Leon, St. Johns, Orange and Hillsborough counties tested positive for the parasite, but it is likely to be more widespread. Researchers found that the parasite also does not seem picky about the types of snails it infects, which is why they recommend washing produce to prevent infections. Children should also avoid eating snails and remember to wash their hands after handling them. |
0.834742 | Barcelona's a great holiday destination for many reasons - from its pretty beaches to its stunning architecture. You'll also find restaurants serving fine international cuisine, but I think it'd be a mistake not to sample some of the local dishes while you're there. As it's the capital of Catalonia, there's no better place to get an authentic taste of the region's cuisine. Read on to find out more about the best dishes the city has to offer and - perhaps even more importantly - where you can try them!
Seafood features heavily in Catalan cuisine, with esqueixada being a signature dish that consists of dried and salted cod. This traditional salad also contains olives, tomatoes and onions, although depending on where in the city you go you might find other ingredients are added. Esqueixada is typically eaten during the summer months and one place where you can try it is the historic Can Cortada restaurant on Avenue Estatut de Catalunya. This establishment has been serving traditional fare since 1994; however, the building actually dates back to the 11th century when it was constructed to defend the region from feudal attacks.
Given its name, it should be pretty obvious that Catalan broad beans form part of the local cuisine, but don't think that the dish consists solely of pulses. It also includes pork belly and butifarra - a type of grilling sausage - as well as tomatoes and a range of herbs. You'll typically find it eaten as a starter, although it can also be a tapas dish. Head to Restaurant Agut, which is a short distance from the stunning La Barceloneta beach, and you can try some authentic Catalan broad beans, as well as other traditional delicacies. This family-run establishment opened in 1924 and would later become a popular eating spot among painters in the 1950s and 1960s. Many of the works that artists gave to the owners in exchange for food still hang on the walls, so it's not only a great place to dine, but also to learn more about local culture.
Another great Catalan dish worth checking out is escalivada, a salad which locals tend to have as a starter or an accompaniment to a main meal. You can try this by setting out from your cheap hotel in Barcelona to the Cal Pinxo Palau de Mar restaurant in Port Vell, where you'll discover it is served with shredded cuttlefish. The dish also features roasted vegetables which - if cooked in the true Catalan style - should have a smoky flavour to them. Escalivar means 'to roast over ashes or embers' in English. Once you've finished the salad, make sure you try some of Cal Pinxo Palau de Mar's other offerings, such as Basque hake and seafood paella. Alternatively, you could head to the El Gran Cafe to try escalivada. Located near the Mayor House, this restaurant is decorated in the Belle Epoque style and also serves Andalucian squid and traditional Catalan mousse with chocolate fondant, perfect if you've got a sweet tooth!
I've only mentioned a few of the great Catalan dishes and eateries that there are to choose from - but which local specialities do you like the most? Say what your favourites are below. |
0.99185 | The microbial nitrogen cycle is one of the most complex and environmentally important element cycles on Earth and has long been thought to be mediated exclusively by prokaryotic microbes. Rather recently, it was discovered that certain eukaryotic microbes are able to store nitrate intracellularly and use it for dissimilatory nitrate reduction in the absence of oxygen. The paradigm shift that this entailed is ecologically significant because the eukaryotes in question comprise global players like diatoms, foraminifers, and fungi. This review article provides an unprecedented overview of nitrate storage and dissimilatory nitrate reduction by diverse marine eukaryotes placed into an eco-physiological context. The advantage of intracellular nitrate storage for anaerobic energy conservation in oxygen-depleted habitats is explained and the life style enabled by this metabolic trait is described. A first compilation of intracellular nitrate inventories in various marine sediments is presented, indicating that intracellular nitrate pools vastly exceed porewater nitrate pools. The relative contribution by foraminifers to total sedimentary denitrification is estimated for different marine settings, suggesting that eukaryotes may rival prokaryotes in terms of dissimilatory nitrate reduction. Finally, this review article sketches some evolutionary perspectives of eukaryotic nitrate metabolism and identifies open questions that need to be addressed in future investigations.
Nitrate is one of the major nutrients for microbial and plant life on planet Earth. It is the most oxidized form of fixed N-compounds, abundant in many aquatic habitats, and of high importance for both assimilatory and dissimilatory nitrogen metabolism.
Nitrate can also be stored inside living cells at concentrations by far exceeding ambient concentrations, a trait that is known for several prokaryotic and eukaryotic phyla (e.g., Dortch et al., 1984; Fossing et al., 1995; McHatton et al., 1996; Schulz et al., 1999; Lomas and Glibert, 2000; Needoba and Harrison, 2004; Risgaard-Petersen et al., 2006; Mußmann et al., 2007; Piña-Ochoa et al., 2010a; Kamp et al., 2011; Bernhard et al., 2012a; Coppens et al., 2014; Stief et al., 2014).
Apparently, the ability to store nitrate intracellularly is widely distributed within the eukaryotic tree of life (Figure 1). Extra- and intracellular nitrate serves for both assimilation and dissimilation. In assimilatory nitrate reduction, ammonium is produced and subsequently incorporated into biomass to build up e.g., proteins and nucleic acids. Dissimilatory nitrate reduction is a process for energy conservation, in which nitrate is used as an electron acceptor in the (near) absence of oxygen (e.g., Fewson and Nicholas, 1961; Strohm et al., 2007; Kraft et al., 2011; Thamdrup, 2012). Dissimilatory nitrate reduction and nitrate storage in particular are physiological life traits that provide microbes with environmental flexibility (i.e., metabolic activity under both oxic and anoxic conditions) and resource independence (i.e., anaerobic metabolism without immediate nitrate supply), respectively. Such life traits are especially important in environments that are temporarily anoxic and/or nitrate-free and they may have developed as a “life strategy” in both prokaryotes and eukaryotes (Figure 2).
Figure 1. Schematized eukaryotic tree of life emphasizing the wide distribution of lineages known to store nitrate intracellularly and use it for dissimilation. Maximum intracellular nitrate concentrations (max. ICNO3) and pathways of dissimilatory nitrate reduction (DNR) are given for taxonomic groups tested positive for either trait. DNRA, Dissimilatory Nitrate Reduction to Ammonium. DNR data compiled from Finlay et al. (1983) (ciliates), Shoun and Tanimoto (1991) (fungi), Zhou et al. (2002) (fungi), Risgaard-Petersen et al. (2006) (foraminifers), Kamp et al. (2011) (diatoms). Max. ICNO3 data compiled from Dortch et al. (1984) (dinoflagellates, haptophytes), Lomas and Glibert (2000) (chlorophytes), Piña-Ochoa et al. (2010a) (gromiids), Piña-Ochoa et al. (2010b) (foraminifers), Kamp et al. (2011) (diatoms), Stief et al. (2014) (fungi). Tree topology adapted from Worden et al. (2015).
Figure 2. Marine microbial eukaryotes known to take up ambient nitrate for intracellular storage and/or dissimilatory nitrate reduction drawn in a conceptual scheme of a natural environment with oxic and anoxic compartments. Organisms and environmental compartments are stylized and not to scale. Scenarios: (1) Benthic diatoms and foraminifers migrate actively up and down between oxic and anoxic sediment layers, or are buried in deep, anoxic sediment layers by e.g., macrofaunal activities, (2) Foraminifers move through different sediment layers and might re-fill their nitrate stores at “hotspots” of nitrate in deeper sediment layers, e.g., macrofaunal burrows, (3) Gromiids reside at the sediment surface or in anoxic subsurface layers, (4) Fungi grow in various sediment layers, (5) Pelagic diatoms sink onto the sediment after phytoplankton blooms and are re-suspended due to spring storms or macrofaunal activities, and (6) Pelagic diatoms are exposed to hypoxic or anoxic conditions inside sinking diatom-bacteria aggregates. Aside from the spatial separation into oxic and anoxic compartments, temporal variation of oxygen availability in the bottom water or inside macrofaunal burrows causes sudden shifts from oxic to anoxic conditions (and back) that may influence nitrate uptake and dissimilatory nitrate reduction by microbial eukaryotes (not shown).
Dissimilatory nitrate reduction pathways such as nitrate reduction to nitrite, denitrification, or Dissimilatory Nitrate Reduction to Ammonium (DNRA; Box 1) are well-studied in prokaryotes (i.e., Bacteria and Archaea). Prokaryotes are an integral part of the microbial nitrogen cycle and, not least due to the increasing use of fertilizers and the subsequent pollution of rivers, estuaries, and coastal waters, they have been the focus of many research activities for nearly 100 years (e.g., Kluyver and Donker, 1926; Fewson and Nicholas, 1961; Zumft, 1997). In contrast, research on eukaryotes that can switch from oxygen to (intracellular) nitrate for anaerobic energy metabolism when (temporarily) exposed to anoxia or hypoxia is still in its infancy. However, the list of eukaryotes so far found to reduce nitrate dissimilatorily includes major global players such as benthic and pelagic marine diatoms, foraminifers, and fungi (Figure 1), which suggests a quantitative impact on nitrogen cycling at least in the marine realm. The first eukaryote found to respire nitrate in the absence of oxygen was, however, the freshwater ciliate Loxodes sp., which survives anoxic conditions in lakes through nitrate reduction to nitrite (Finlay et al., 1983). Later on, soil fungi were shown to be capable of incomplete denitrification to nitrous oxide and nitrate reduction to ammonium (Box 1; Shoun and Tanimoto, 1991; Takaya et al., 1999). Further, the fungus Aspergillus terreus isolated from a seasonal oxygen minimum zone (OMZ) in the Arabian Sea was shown to perform DNRA under anoxic conditions (Stief et al., 2014). In 2006, it was demonstrated that certain benthic foraminiferal species perform intracellular accumulation of nitrate, which is subsequently denitrified (Risgaard-Petersen et al., 2006). Only recently, it was discovered that the most important phototrophic group of microbial eukaryotes, the diatoms, also possess a dissimilatory nitrate metabolism. Both a benthic and a pelagic diatom have been shown to perform DNRA after sudden shifts to darkness and anoxia (Kamp et al., 2011, 2013).
Box 1. Pathways of dissimilatory nitrate reduction in eukaryotes.
Denitrification: Reduction of nitrate via nitrite, nitric oxide, and nitrous oxide to dinitrogen (NO3−→ NO2−→ NO → N2O → N2) with organic or inorganic electron donors. Complete denitrification serves as an efficient N-removal pathway in the environment. Incomplete denitrification may begin and end at various points in the reaction sequence and, for example, produce the greenhouse gas N2O.
Dissimilatory Nitrate Reduction to Ammonium (DNRA): Reduction of nitrate via nitrite to ammonium (NO3−→ NO2−→ NH4+) with organic or inorganic electron donors. DNRA does not contribute to N-removal, but rather recycles fixed N in the environment.
Ammonia Fermentation: Reduction of nitrate to ammonium coupled to the oxidation of organic electron donors to acetate and substrate-level phosphorylation. Ammonia fermentation has the same effect on N-cycling in the environment as DNRA.
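To make the electron accounting behind the Box 1 pathways concrete, the following are standard textbook stoichiometries written with generic organic matter (CH2O); they are illustrative and not drawn from the studies reviewed here:
\[ 5\,\mathrm{CH_2O} + 4\,\mathrm{NO_3^-} + 4\,\mathrm{H^+} \rightarrow 5\,\mathrm{CO_2} + 2\,\mathrm{N_2} + 7\,\mathrm{H_2O} \quad \text{(complete denitrification)} \]
\[ 2\,\mathrm{CH_2O} + \mathrm{NO_3^-} + 2\,\mathrm{H^+} \rightarrow 2\,\mathrm{CO_2} + \mathrm{NH_4^+} + \mathrm{H_2O} \quad \text{(DNRA)} \]
Note that DNRA accepts eight electrons per nitrate (N changes from +5 to -3) versus five for complete denitrification (+5 to 0), so each mole of stored nitrate oxidizes more organic carbon via DNRA; this is one rationale sometimes offered for why organisms living off a finite intracellular nitrate pool favor DNRA.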
This review summarizes the current knowledge of (i) the ecophysiology of marine microbial eukaryotes that store nitrate intracellularly and thus are potentially involved in dissimilatory nitrate reduction and (ii) the environmental impact of nitrate storage and dissimilatory nitrate reduction by marine microbial eukaryotes on the nitrogen cycle of our oceans. The review further evaluates some evolutionary perspectives, and closes with a more general discussion of open questions that might inspire further research.
Among the marine eukaryotic microbes discussed here, intracellular nitrate storage was first described for diatoms, in which intracellular concentrations can by far exceed ambient concentrations (i.e., nitrate in the surrounding seawater). The range of intracellular nitrate concentrations varies according to species and environmental conditions. Many benthic and pelagic diatoms accumulate nitrate intracellularly in concentrations up to a few 100 mM, but concentrations can be as low as zero to a few mM (Dortch et al., 1984; Lomas and Glibert, 2000; Needoba and Harrison, 2004; Høgslund, 2008; Kamp et al., 2011, 2013; Coppens et al., 2014).
Nitrate uptake by diatoms has been shown to be temperature-dependent (decreasing with increasing temperature), has so far been documented under oxic conditions only, and uptake rates vary between species (e.g., Raimbault and Mingazzini, 1987; Lomas and Glibert, 1999, 2000; Villareal et al., 1999; Tantanasarit et al., 2013). Lomas and Glibert (2000) measured nitrate uptake rates of 18–310 fmol NO3− cell−1 h−1 for six different species grown at moderate ambient nitrate concentrations (< 40 μM). This rate can be even higher at very high ambient nitrate concentrations (Tantanasarit et al., 2013).
Intracellular nitrate is generally thought to be located in vacuoles. In the plant vacuole of Arabidopsis thaliana, for example, nitrate is accumulated via an NO3−/H+ exchanger (Martinoia et al., 1981; De Angeli et al., 2006). Evidence for a similar mechanism in unicellular eukaryotes is missing so far.
Nitrate storage in diatoms has long been assumed to serve assimilation exclusively (e.g., Dortch et al., 1984; Lomas and Glibert, 2000), probably because diatoms are mostly found in oxic habitats where nitrate is not needed as an alternative electron acceptor for dissimilation. The first hint for dissimilatory nitrate reduction in diatoms was a correlation between the nitrate storage capacity and the survival time of benthic and pelagic diatoms after sudden shifts to dark and anoxic conditions (Kamp et al., 2011). However, intracellular nitrate is used up within hours and is not replenished from ambient nitrate under anoxic conditions (see below). Lomas and Glibert (1999) also hypothesized that some diatom populations take up nitrate in excess of nutrient requirements because its reduction may serve as a sink for electrons during transient periods of imbalance between light energy harvesting and utilization.
Foraminifers may store nitrate at concentrations >15,000 times the environmental nitrate concentrations (Risgaard-Petersen et al., 2006), probably in vacuoles (Bernhard et al., 2012a), and the measured intracellular nitrate pool varies among nitrate-storing foraminifers from ca. 0.1 mM to >375 mM (Piña-Ochoa et al., 2010a). This variation seems to reflect different physiological and environmental conditions rather than phylogenetic constraints because considerable intraspecific variation is observed among the species that are considered to be nitrate collectors (Piña-Ochoa et al., 2010a; Koho et al., 2011; Bernhard et al., 2012b). The ability to store nitrate at concentrations above environmental concentrations is so far found within the orders Allogromiida, Miliolida, Rotaliida, and Textulariida (Piña-Ochoa et al., 2010a; Bernhard et al., 2012a), and seems to be a common trait for foraminifers from very diverse benthic marine environments, such as OMZs, hypoxic basins, continental slopes, shelf sediments, and coastal sediments (Piña-Ochoa et al., 2010a; Bernhard et al., 2012a). Interestingly, the trait is not restricted to species which often occur in anoxic microhabitats, such as Globobulimina turgida, but is also found in species living in oxic habitats (e.g., Cassidulina carinata and Pyrgo elongata), and in opportunistic species (e.g., Bolivina subaenariensis and Uvigerina mediterranea; Piña-Ochoa et al., 2010a). 15N labeling experiments performed on Ammonia beccarii, Bolivina argentea, Buliminella tenuata, G. turgida, Fursenkoina cornuta and Nonionella stella have shown that nitrate is taken up directly from the environment (Risgaard-Petersen et al., 2006; Koho et al., 2011; Bernhard et al., 2012b; Nomaki et al., 2014), and is not produced internally. The mechanism for nitrate uptake as well as the storage mode is at present unknown, but it must involve an active transport system, as nitrate is moved across the cell membrane against a large concentration gradient. From a thermodynamic point of view, the process of nitrate uptake is endergonic and therefore requires an investment of energy by the organism. G. turgida, for instance, may accumulate nitrate internally to well above 10 mM in environments where the environmental concentration is less than 20 μM (Risgaard-Petersen et al., 2006). The Gibbs free energy (ΔG) for nitrate transport across the cell membrane under these conditions is >+15 kJ mol−1 NO3− according to equations in Harold (1986). It is evident, therefore, that nitrate accumulation among foraminifers can only be a sustainable strategy if it is required to sustain processes that are essential for the survival of the organism. It has been shown that the nitrate-respiring foraminifer G. turgida can survive for up to 56 days of anoxia by respiring its internal nitrate pool (Piña-Ochoa et al., 2010b), and the building and maintenance of an intracellular nitrate pool might be seen as an insurance that enables the organism to sustain an active metabolism even when suitable external electron acceptors are absent from the environment (see below).
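The ΔG figure quoted above can be reproduced from the concentration term alone. Assuming T ≈ 298 K for the arithmetic (the electrical cost of importing an anion against the membrane potential would come on top of this):
\[ \Delta G = RT\,\ln\frac{[\mathrm{NO_3^-}]_{\mathrm{in}}}{[\mathrm{NO_3^-}]_{\mathrm{out}}} = (8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(298\ \mathrm{K})\,\ln\frac{10\ \mathrm{mM}}{20\ \mathrm{\mu M}} \approx +15.4\ \mathrm{kJ\,mol^{-1}} \]
with ln(500) ≈ 6.2, consistent with the >+15 kJ mol−1 NO3− stated above.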
Like the foraminifers, gromiids belong to the Rhizaria (Burki et al., 2010; Sierra et al., 2013), and their intracellular nitrate concentrations can also reach >100 mM, which exceeds the ambient nitrate concentration by several orders of magnitude (Piña-Ochoa et al., 2010a). The ability to accumulate nitrate at these high concentrations appears to be ubiquitous for the gromiids, as it has been found for individuals sampled from hard-bottom substrates, shelf sediments in temperate and arctic regions, as well as in the OMZ along the coastline of Peru (Piña-Ochoa et al., 2010a). The physiology of the gromiids has only been superficially studied and neither the mechanism behind nitrate accumulation, nor its link to any metabolic pathway has been investigated so far. It is possible that the gromiid-nitrate association represents a system that is functionally different from that of benthic foraminifers because gromiids generally are described as surface dwellers, and thus not buried in anoxic sediment layers like many foraminifers (Jepps, 1926; Hedley and Bertaud, 1962; Arnold, 1972; Matz et al., 2008; da Silva and Gooday, 2009; Rothe et al., 2011).
To date, only a single strain of A. terreus isolated from a marine sediment has been shown to store nitrate intracellularly and use it for dissimilatory nitrate reduction mainly to ammonium (Stief et al., 2014). The intracellular nitrate concentration in this strain reached up to 0.4 mM. Unfortunately, intracellular nitrate storage has not been studied in the large number of soil fungi and yeasts capable of dissimilatory nitrate reduction (Takaya et al., 1999; Maeda et al., 2015). Fungi in general, however, do possess cellular vacuoles and nitrate transporters (Klionsky et al., 1990; Navarro et al., 2006) and are able to take up nitrate from the environment at high rates and store it in vacuoles (e.g., 9 nmol NO3− mg−1 dry weight min−1 in Aspergillus nidulans; Unkles et al., 2004).
For the only ciliate known to perform dissimilatory nitrate reduction, Loxodes sp. (Finlay et al., 1983), intracellular nitrate storage has not been reported.
Marine phytoplankton belonging to these eukaryotic lineages has mainly been investigated with respect to uptake and assimilation of nitrate and ammonium, but intracellular nitrate storage is also reported occasionally (e.g., Dortch et al., 1984; Lomas and Glibert, 2000). The chlorophyte Dunaliella tertiolecta stored nitrate at 2.7–4.9 mM in one study (Lomas and Glibert, 2000), but had intracellular nitrate concentrations below the detection limit in another study (Dortch et al., 1984). Similarly, the dinoflagellate Amphidinium carterae stored 0–1.8 mM nitrate (Dortch et al., 1984), while the dinoflagellate Prorocentrum minimum did not store nitrate (Lomas and Glibert, 2000). Among the haptophytes, Isochrysis galbana stored 0.3–13.9 mM nitrate (Dortch et al., 1984) and Pavlova lutheri only stored 0.1–0.2 mM nitrate (Lomas and Glibert, 2000). Clearly, more investigations focusing on intracellular nitrate storage in these lineages are needed.
So far, the benthic diatom Amphora coffeaeformis and the pelagic diatom Thalassiosira weissflogii have been shown to reduce nitrate dissimilatorily. Both diatom strains perform the pathway Dissimilatory Nitrate Reduction to Ammonium (DNRA), as demonstrated with 15N labeling experiments in axenic strains (Kamp et al., 2011, 2013). The DNRA rates of these two diatoms are in the range of 2–3 fmol N cell−1 h−1 during the first hours after exposure to dark and anoxic conditions. However, DNRA rates become significantly lower after only a few hours, which mirrors the rapid consumption of intracellular nitrate after shifts to darkness and anoxia. Thus, diatoms probably use the intracellular nitrate, and its dissimilatory reduction via DNRA, either for short-term survival or for entering a resting stage.
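A back-of-the-envelope calculation shows why the intracellular pool is drained so quickly. Taking an assumed, purely illustrative cell of 1 pL biovolume storing 50 mM nitrate (neither value is reported per cell in the studies cited):
\[ 1\ \mathrm{pL} \times 50\ \mathrm{mM} = 50\ \mathrm{fmol\ NO_3^-\ cell^{-1}}, \qquad \frac{50\ \mathrm{fmol}}{2.5\ \mathrm{fmol\ N\ h^{-1}}} = 20\ \mathrm{h} \]
so even a well-stocked cell would exhaust its store within roughly a day at the initial DNRA rates, and cells with smaller pools would last only hours.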
To date, genes involved in dissimilatory nitrate reduction have not been identified in diatoms, but only in denitrifying soil fungi (see below). Intriguingly, fungi use enzymes that are usually involved in assimilatory nitrate reduction in a dissimilatory mode (Takasaki et al., 2004). This could also be true for diatoms. Assimilatory nitrate reductases, nitrate transporters, and components of a nitrate-sensing system have only recently been identified in diatom genomes (Armbrust et al., 2004; Bowler et al., 2008). Identification of functional genes involved in dissimilatory nitrate reduction in diatoms would provide genetic evidence for this metabolic pathway in diatoms.
Direct measurements of nitrate reduction activity associated with nitrate-storing foraminifers have demonstrated a capacity for complete denitrification of NO3− to N2 (Risgaard-Petersen et al., 2006; Høgslund, 2008; Piña-Ochoa et al., 2010a; Bernhard et al., 2012b). Some species (e.g., Bolivina plicata, Bolivina seminuda, Valvulineria cf. laevigata, Stainforthia sp.), however, seem to lack nitrous oxide reductase and reduce nitrate only to nitrous oxide (Piña-Ochoa et al., 2010a).
At present, denitrification rates for only 11 different species within the Rotaliida order have been determined. The observation of elevated δ15NNO3 and δ18ONO3 values in the intracellular nitrate pool within allogromiid foraminifers from the Santa Barbara Basin (Bernhard et al., 2012a) has demonstrated nitrate reduction capacity associated with members of the Allogromiida order, yet rate measurements in this order are still missing. Rates estimated for the Rotaliida with N2O-microsensors (Risgaard-Petersen et al., 2006; Høgslund et al., 2008; Piña-Ochoa et al., 2010a,b) or 15NO3− amendments (Risgaard-Petersen et al., 2006; Bernhard et al., 2012b) fall in the range of 1.7–83 pmol N cell−1 h−1, and great intraspecific variation is observed. There is a tendency for a log-log relationship between the denitrification rate and biovolume of the organisms, so that large organisms have higher rates than smaller ones, as seen also for foraminiferan cell-specific oxygen respiration rates (Geslin et al., 2011), but the current database is too limited for strong conclusions to be drawn. In general, individual denitrification rates are much lower than the corresponding oxygen respiration rates (Piña-Ochoa et al., 2010a) and it has been suggested that denitrification is an auxiliary metabolism used for cell maintenance, food collection, and locomotion during temporary stays in oxygen-free environments, whereas oxygen might be required for growth and reproduction (Piña-Ochoa et al., 2010a,b).
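The tendency described here corresponds to an allometric power law of the form
\[ R = a\,V^{b} \quad\Longleftrightarrow\quad \log R = \log a + b\,\log V \]
where R is the cell-specific denitrification rate and V the biovolume, i.e., a straight line of slope b on a log-log plot; as noted above, the prefactor a and exponent b are not yet constrained by the available data.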
The genes behind the foraminiferan denitrification pathway have not been elucidated. It has, however, been shown with microscopy (Risgaard-Petersen et al., 2006) and experiments applying bacteria-specific antibiotics to denitrifying foraminifers (Bernhard et al., 2012b) that for some species (e.g., B. argentea and G. turgida) the foraminifers themselves, and not only the associated prokaryotes, are performing the denitrification reaction. Denitrification in a nitrate-storing allogromiid foraminifer from the Santa Barbara Basin, however, appears to be performed by prokaryotic endobionts and not the eukaryote, as demonstrated by sequence analyses and GeneFISH (Bernhard et al., 2012a). Given the widespread distribution of nitrate-accumulating and denitrifying foraminifers within diverse phylogenetic orders, specific investigations of each group are needed, at best on the genomic level, to confirm or reject the presence of eukaryotic denitrification. It is obvious from the Santa Barbara study that a capacity for nitrate accumulation is not necessarily coupled to a capacity of the eukaryote to utilize this directly for energy conservation through e.g., denitrification.
The best-studied fungal species capable of dissimilatory nitrate reduction are the two soil-living plant pathogens Fusarium oxysporum and Cylindrocarpon tonkinense (Shoun and Tanimoto, 1991; Usuda et al., 1995). The majority of terrestrial fungi, some ectomycorrhizal fungi, and many of the yeast strains screened since the initial discovery of "fungal denitrification" also tested positive for this trait (Tsuruta et al., 1998; Prendergast-Miller et al., 2011; Mothapo et al., 2013; Maeda et al., 2015). A key feature of "fungal denitrification" is the absence of the last reduction step of the denitrification pathway, which makes fungi very important nitrous oxide (N2O) producers in soils (Laughlin and Stevens, 2002; Crenshaw et al., 2008; Chen et al., 2014). Fungi isolated from aquatic ecosystems have received much less attention in terms of dissimilatory nitrate reduction, and conclusive experiments with 15NO3− labeling have been made for only a single strain of A. terreus isolated from sediment in the seasonal oxygen minimum zone of the Arabian Sea (Stief et al., 2014). This strain also has a high N2O yield (approximately 15% of the total amount of N produced), but the main product of its nitrate reduction activity is ammonium (up to 83%, equivalent to 175 nmol N g−1 protein h−1), which is also the case for a number of soil fungi (Zhou et al., 2002). The underlying metabolic pathway has been termed "ammonia fermentation", a process which couples the oxidation of ethanol to acetate, and the reduction of nitrate to ammonium, to substrate-level phosphorylation (Box 1; Takaya, 2009). Thus, two pathways of dissimilatory nitrate reduction have evolved in fungi, apparently even within individual species (e.g., F. oxysporum; Zhou et al., 2002). The prevalence of either pathway is controlled by ambient oxygen levels, with hypoxic and anoxic levels triggering "fungal denitrification" and "ammonia fermentation," respectively (Takaya, 2009).
Key genes of “fungal denitrification” have been identified and sequenced (Kizawa et al., 1991; Kim et al., 2009). Nitrite reduction to nitric oxide (NO) is mediated by a copper-containing nitrite reductase (NirK), while the reduction step from NO to N2O is mediated by the cytochrome P450 nitric oxide reductase (P450nor). Nitrous oxide reductases are generally absent in fungi, which explains why N2O instead of N2 is the final product of “fungal denitrification” (Takaya, 2009). Dissimilatory nitrate reductases can be present in some denitrifying fungi species, but are less well-characterized than NirK and p450nor (Takaya, 2009). Hence, the minimal denitrification pathway in fungi only comprises the two-step reduction of nitrite to N2O. The stepwise reduction of nitrate to ammonium in fungal “ammonia fermentation” is apparently mediated by assimilatory nitrate (NiaD) and nitrite reductases (NiiA) used in a dissimilatory context, i.e., it is coupled to the fermentation of ethanol to acetate (Takasaki et al., 2004). Meanwhile, several primer sets for the fungal NirK and p450nor have been developed and used for screening fungal isolates for their capability of dissimilatory nitrate reduction (Kim et al., 2010; Maeda et al., 2015; Mothapo et al., 2015; Wei et al., 2015). The availability of these primer sets will enable the detection of denitrifying fungi in environmental samples without prior isolation, cultivation, and functional testing.
The freshwater ciliate Loxodes sp. survives anoxic conditions in lakes through dissimilatory nitrate reduction to nitrite (Finlay et al., 1983; Aleya et al., 1992). A link was made between the anaerobic metabolism of the ciliate and a higher number of mitochondria per cell and a greater surface area of cristae inside the mitochondria compared to specimens exposed to oxic conditions (Finlay et al., 1983; Finlay, 1985). To date, the gene encoding the nitrate reductase has not been identified.
None of the aforementioned nitrate-storing representatives of these eukaryotic lineages (see Nitrate Storage) has been tested for dissimilatory nitrate reduction under anoxic conditions so far.
Dissimilatory nitrate reduction by prokaryotes and eukaryotes typically occurs in environments in which the availability of oxygen and nitrate is variable in space and time. Stable environments are spatially structured into zones with/without oxygen and/or nitrate availability, while dynamic environments are temporally structured into phases with/without oxygen and/or nitrate availability. In aquatic ecosystems, such conditions can be found in sediments, around animal burrows in sediments, in the root zone of aquatic plants, in low-oxygen water bodies, and inside sinking organic aggregates.
In sediments with stable redox stratification, oxygen, as the most favorable electron acceptor in terms of energy, is consumed within the top few millimeters (Revsbech et al., 1980). Nitrate penetrates slightly deeper into the sediment where it is used as an alternative electron acceptor when oxygen is depleted (Sweerts and de Beer, 1989). Microbes that are able to store nitrate intracellularly may thrive well below the nitrate penetration depth, but need to fill up their nitrate stores occasionally. Large sulfur bacteria couple vertical migration behavior to uptake and storage of nitrate at the sediment surface and dissimilatory use of intracellular nitrate deeper in the sediment (e.g., Fossing et al., 1995). Benthic foraminifers and diatoms, both of which are capable of migrating inside sediments, can be abundant well below the nitrate penetration depth (Figure 2; Risgaard-Petersen et al., 2006; Stief et al., 2013). Benthic diatoms exhibit a vertical migration rhythm that is coupled to diurnal and tidal cycles (Consalvey et al., 2004), while foraminifers migrate more erratically or directed to oxygen gradients (Alve and Bernhard, 1995; Geslin et al., 2004; Koho et al., 2011). As a consequence of their migration behavior, benthic diatoms and foraminifers are exposed to elevated ambient nitrate and oxygen levels whenever they reach the sediment surface. Deeper in the sediment where they find shelter from predation and erosion (Kingston, 1999), diatoms and foraminifers face the absence of ambient nitrate and oxygen.
Oxygen and nitrate concentration gradients in sediments can experience rapid and pronounced changes caused by disturbance events. Short-term oxygen and nitrate pulses occur in animal burrows that reach deep into anoxic sediment layers and are intermittently irrigated with oxygen- and nitrate-rich surface water (Kristensen et al., 1991; Wenzhöfer and Glud, 2004). During the resting phase of the animals, oxygen is depleted faster than nitrate, which allows for short-term dissimilatory nitrate reduction in the immediate surrounding of the burrow (Stief and de Beer, 2006). The root zone of aquatic plants exhibits oxygen and nitrate dynamics that are similar to animal burrows, albeit due to periodic changes in the photosynthetic activity of the plant (Frederiksen and Glud, 2006). In the light, roots release oxygen into the surrounding sediment and stimulate nitrate production by microbial nitrification, while in the dark, nitrate is depleted due to dissimilatory nitrate reduction activities (Risgaard-Petersen and Jensen, 1997). It has been suggested that the nitrate-storing and sulfide-oxidizing Thioploca ingrica are able to exploit nitrate pulses in animal burrows to fill up their nitrate stores (Høgslund et al., 2010). Benthic diatoms and foraminifers are often abundant in this dynamic microenvironment in which conditions conducive to nitrate uptake and dissimilatory nitrate reduction alternate (Alve and Bernhard, 1995; Steward et al., 1996).
Hypoxic or anoxic water bodies in which nitrate is available may also host dissimilatory nitrate reduction mediated by eukaryotes. The ciliate Loxodes sp. is abundant just below the oxic-anoxic interface of stratified lakes, where it reduces nitrate dissimilatorily to nitrite (Finlay et al., 1983; Aleya et al., 1992). Marine pelagic diatoms can move up and down through the water column by controlling their buoyancy (Armbrust, 2009) and are thereby exposed to varying ambient nitrate and oxygen levels (Villareal et al., 1993). Rapid and large-scale transport of diatoms through the water column of the oceans occurs when diatoms and bacteria aggregate to form “marine snow” (Thornton, 2002). Sinking organic aggregates also exhibit internal gradients of oxygen concentration due to microbial respiration and transport limitation of oxygen (Ploug et al., 1997). Under dark conditions (i.e., at night or when aggregates sink out of the photic zone) the center of aggregates may become anoxic, which allows for dissimilatory nitrate reduction (Klawonn et al., 2015). Pelagic diatoms finally sink onto the seafloor, where they still host a large inventory of intracellular nitrate (Lomstein et al., 1990) and may survive in dark, anoxic sediment layers for decades (Härnström et al., 2011), thus far longer than the intracellular nitrate pool would last.
The presence of nitrate-storing eukaryotes in sediments leads to large inventories of nitrate that vastly exceed the porewater nitrate contents in some environments, comparable to what can be observed in sediments colonized by the sulfide-oxidizing bacteria Thioploca sp. or Beggiatoa sp. (Jørgensen and Gallardo, 1999; Sayama, 2001). These intracellular nitrate pools (ICNO3 pools) are measured with a diverse set of methods, such as freeze-thaw cycling, boiling, whole-core squeezing, and centrifugation of environmental samples, which all aim at lysing nitrate-storing cells (e.g., Lomstein et al., 1990; Risgaard-Petersen et al., 2006; Prokopenko et al., 2011; Larsen et al., 2013). In sediments from the Gullmar Fjord, Sweden, for instance, nitrate dissolved in the sediment porewater (PWNO3) accounted for less than 4% of the total nitrate pool (Risgaard-Petersen et al., 2006). The remaining nitrate, as extracted by boiling the sediment, was most likely present in eukaryotic cells since neither Beggiatoa nor Thioploca was present. The sediment was inhabited by the nitrate-storing foraminifer Globobulimina pseudospinescens, and the cell-bound nitrate was significantly correlated with the abundance of this organism. However, the intracellular nitrate pool of G. pseudospinescens only accounted for approximately 20% of total nitrate in the sediment, leaving open the possibility that other nitrate-storing foraminifers or diatoms, gromiids, and fungi were present. Settled phytoplankton with nitrate-storing representatives among the diatoms, chlorophytes, dinoflagellates, and haptophytes may contribute to the sedimentary ICNO3 pool that is not accounted for by benthic eukaryotes and prokaryotes. Meanwhile, large sedimentary ICNO3 pools have been ascribed to the presence of benthic foraminifers in various marine ecosystems (Figures 3A, 4; Table S1). Intracellular nitrate pools were ~4–26 times larger than porewater nitrate pools (Figure 4; Glud et al., 2009; Prokopenko et al., 2011; Glock et al., 2013; Larsen et al., 2013). In three additional studies, it was assumed that foraminifers and/or diatoms contribute to the ICNO3 pool in marine sediments (Figures 3A,B, 4; Table S1; Høgslund et al., 2010; Marchant et al., 2014; Papaspyrou et al., 2014).
Figure 3. Vertical profiles of porewater nitrate (PWNO3) and intracellular nitrate (ICNO3) in (A) foraminifer-inhabited and (B) diatom-inhabited marine sediments. Nitrate concentrations are expressed per cm3 of sediment. Note different scales. Data compiled from (A) Risgaard-Petersen et al. (2006) and (B) Heisterkamp et al. (2012).
Figure 4. Inventories of porewater nitrate (PWNO3) and intracellular nitrate (ICNO3) in various marine sediments. Only studies in which the sedimentary ICNO3 pool is ascribed to nitrate-storing foraminifers and/or diatoms are considered. Data compiled from (1) Risgaard-Petersen et al. (2006), (2) Glud et al. (2009), (3) Prokopenko et al. (2011), (4) Larsen et al. (2013), (5) Glock et al. (2013), (6) Høgslund et al. (2010), (7) Marchant et al. (2014), (8) Papaspyrou et al. (2014), (9) Lomstein et al. (1990), (10) García-Robledo et al. (2010), and (11) Heisterkamp et al. (2012). Details on data extraction can be found in Table S1.
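The inventories in Figure 4 are obtained by depth-integrating concentration profiles like those in Figure 3. A minimal sketch of that bookkeeping is shown below; the profile values are made up for illustration and are not taken from the compiled studies.

```python
def depth_integrate(depths_cm, conc_nmol_cm3):
    """Trapezoidal depth integration of a concentration profile.

    depths_cm: depths of the profile points (cm)
    conc_nmol_cm3: nitrate per cm^3 of sediment at those depths
    Returns the areal inventory in nmol per cm^2.
    """
    inv = 0.0
    for i in range(len(depths_cm) - 1):
        dz = depths_cm[i + 1] - depths_cm[i]
        inv += 0.5 * (conc_nmol_cm3[i] + conc_nmol_cm3[i + 1]) * dz
    return inv

# Hypothetical profiles (nmol NO3- per cm^3 sediment) over 0-4 cm depth:
depths = [0.0, 1.0, 2.0, 3.0, 4.0]
pwno3 = [20.0, 8.0, 2.0, 0.5, 0.0]      # porewater nitrate, depleted quickly
icno3 = [60.0, 80.0, 90.0, 70.0, 50.0]  # intracellular nitrate, persists at depth

inv_pw = depth_integrate(depths, pwno3)
inv_ic = depth_integrate(depths, icno3)
print(f"PWNO3 inventory: {inv_pw:.0f} nmol/cm^2")
print(f"ICNO3 inventory: {inv_ic:.0f} nmol/cm^2 ({inv_ic / inv_pw:.0f}x the porewater pool)")
```

For these invented numbers the ICNO3 inventory exceeds the PWNO3 inventory by roughly a factor of 14, within the ~4–26-fold range reported for foraminifer-dominated sediments.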
In a number of coastal sediments, the total ICNO3 pool has been exclusively assigned to microalgae, in particular to pelagic diatoms that have settled onto the sediment surface (Lomstein et al., 1990) and to benthic diatoms that reside in intertidal sediments (García-Robledo et al., 2010; Heisterkamp et al., 2012; Stief et al., 2013). In natural settings, diatom-associated ICNO3 is diagnosed by the congruent distribution of ICNO3 and fucoxanthin, the marker pigment of diatoms (Stief et al., 2013). In diatom-dominated sediments, the ratio of ICNO3 to PWNO3 tends to be lower (~5–9) than in foraminifer-dominated sediments (Figure 4; Table S1), but more data need to be collected to confirm this preliminary observation. The pronounced seasonality in intertidal communities of the temperate zone also entails seasonal changes of the diatom-associated ICNO3 pool, with high and low values in the cold and warm season, respectively (Stief et al., 2013). It is currently not known whether other nitrate-storing eukaryotes show similar seasonal variation of their ICNO3 contents.
The in situ turnover of the sedimentary ICNO3 pool is indicated by elevated δ15NNO3 and δ18ONO3 values (Prokopenko et al., 2011; Bernhard et al., 2012b). These isotope ratios further increase when isolated foraminifers are incubated under anoxic conditions, which confirms dissimilatory nitrate reduction activity fueled by ICNO3 (Bernhard et al., 2012b). Estimated turnover times of ICNO3 vary between ~12 h and ≥1 month (Risgaard-Petersen et al., 2006; Høgslund, 2008; Glud et al., 2009; Bernhard et al., 2012b), considerably slower than the PWNO3 turnover of only 2–4 h determined in sediments inhabited by foraminifers (Glud et al., 2009; Larsen et al., 2013). This slow turnover of ICNO3 by foraminifers and other nitrate-storing eukaryotes has implications for rate measurements based on 15NO3− incubations (Høgslund, 2008). The labeled and non-labeled nitrate pools may not readily mix within short incubation times, which leads to a significant underestimation of benthic denitrification rates determined with the isotope pairing technique (Nielsen, 1992). Additionally, the unintended release of ICNO3 from eukaryotic cells into the sediment porewater due to crude extraction techniques creates apparent concentration peaks that might be mistaken for nitrate production zones.
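Turnover time is simply pool size divided by consumption rate, which makes the mismatch between the two pools easy to see. The numbers below are illustrative assumptions chosen within the ranges quoted above, not data from the cited studies.

```python
# Turnover time = pool size / consumption rate.
pw_pool = 30.0         # PWNO3 inventory, umol NO3- m^-2 (assumed)
ic_pool = 400.0        # ICNO3 inventory, umol NO3- m^-2 (assumed)
reduction_rate = 10.0  # dissimilatory nitrate reduction, umol NO3- m^-2 h^-1 (assumed)

print(f"PWNO3 turnover: {pw_pool / reduction_rate:.0f} h")
print(f"ICNO3 turnover: {ic_pool / reduction_rate / 24:.1f} d")
# A slowly turning-over ICNO3 pool mixes poorly with added 15NO3- label
# during short incubations, biasing isotope-pairing rate estimates low.
```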
The quantitative role of eukaryote-associated nitrate reduction has only been addressed for foraminifers. The contribution of foraminiferal denitrification to the total loss of combined nitrogen from marine sediments has been estimated for various benthic settings (Høgslund, 2008; Glud et al., 2009; Piña-Ochoa et al., 2010a; Bernhard et al., 2012b; Glock et al., 2013). Apparently, foraminifers may contribute substantially to benthic denitrification and in some environments they even surpass the contribution from prokaryotes (Figure 5).
Figure 5. Foraminiferan denitrification and total denitrification in various benthic marine environments. Data compiled from (1) Høgslund (2008), (2) Glud et al. (2009), (3) Piña-Ochoa et al. (2010a), (4) Bernhard et al. (2012b), (5) Glock et al. (2013), and (6) Larsen et al. (2013).
Foraminiferan denitrification is generally estimated from the in situ abundance of live foraminifers and laboratory-based estimates of denitrification rates for individual species, whereas total denitrification is estimated from 15N-enrichment studies, from porewater profiles of nitrate, or from analyzing the distribution of δ15NNO3 in natural settings (Groffman et al., 2006). This approach involves a high degree of uncertainty as (i) there is only limited information about the diversity of foraminiferal denitrification activity (data from only 11 species are available), (ii) the cell-specific activity is typically measured under conditions far from those in the natural environment, and (iii) foraminiferal denitrification is typically not included in standard techniques used for measuring total denitrification (see above). Therefore, present reports on the foraminiferal contribution to benthic denitrification should be considered preliminary attempts. There is certainly a need for methodologies that capture the in situ denitrification activity mediated by eukaryotes vs. prokaryotes.
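A sketch of the upscaling approach just described: multiply in situ abundances by laboratory cell-specific rates and compare the sum with total benthic denitrification. All species names and numbers below are hypothetical placeholders, not values from the cited studies.

```python
# Hypothetical abundances (individuals m^-2) and laboratory cell-specific
# denitrification rates (pmol N ind^-1 d^-1) for three foraminifer species.
species = {
    "species_A": {"abundance_m2": 2.0e4, "rate_pmol_ind_d": 100.0},
    "species_B": {"abundance_m2": 5.0e3, "rate_pmol_ind_d": 300.0},
    "species_C": {"abundance_m2": 1.0e4, "rate_pmol_ind_d": 50.0},
}

foram_denit_pmol = sum(s["abundance_m2"] * s["rate_pmol_ind_d"] for s in species.values())
foram_denit_umol = foram_denit_pmol * 1e-6  # pmol -> umol N m^-2 d^-1

total_denit_umol = 40.0  # total benthic denitrification, umol N m^-2 d^-1 (assumed)
share = 100 * foram_denit_umol / total_denit_umol
print(f"Foraminiferal denitrification: {foram_denit_umol:.1f} umol N m^-2 d^-1 "
      f"({share:.0f}% of total)")
```

The uncertainties listed above enter this arithmetic directly: an unrepresentative cell-specific rate or a biased abundance count propagates linearly into the estimated share.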
The last decade of research has shown that the eukaryotes have their evolutionary roots in a largely anoxic ocean (Anbar and Knoll, 2002; Martin et al., 2003). Pronounced atmospheric oxygenation occurred around 2.4 to 2.1 billion years ago as a consequence of oxygenic photosynthesis and is known as the Great Oxidation Event (GOE). For some years, it has been debated whether the first production and local accumulation of oxygen might already have happened 2.7–3.2 billion years ago (Brocks et al., 1999; Lyons et al., 2014; Satkoski et al., 2015). Even after the GOE, however, it was not until ~600 million years ago that widespread oxygenation of the deep ocean occurred (Canfield, 1998; Canfield et al., 2008; Lyons et al., 2014). During this transition period of ~1.8 billion years, oxygen was first produced in oceanic microhabitats, which allowed nitrification, the critical aerobic pathway in the nitrogen cycle that produces nitrate, to proceed. Thus, nitrate was present in the oxic microniches of the Proterozoic ocean, and became available for dissimilatory nitrate reducers living on the edge of the nitrate production zones (Fennel et al., 2005; Canfield et al., 2010).
The occurrence of dissimilatory nitrate reduction in very distantly related eukaryotes (Figure 1) raises the question whether the genes involved were present in the single eukaryotic common ancestor that emerged in the largely anoxic Proterozoic ocean, or, alternatively, whether the dissimilatory nitrate reduction pathways we observe today exhibit multiple origins. Neither of the two hypotheses excludes the mitochondrion as the location for eukaryotic nitrate dissimilation. The mitochondrial proteome is a mosaic assemblage of proteins where some are traced to the last mitochondrial ancestor within the Alphaproteobacteria, and some have been acquired from other prokaryotes and eukaryotes through the course of evolution (Gray, 2015).
There is ample evidence that contemporary mitochondria-related organelles are derived from the same ancestral organelle (Mentel and Martin, 2008; van der Giezen, 2011), and the genes involved in dissimilatory nitrate reduction could have been introduced during the original endosymbiosis. The suggested ancestor of the mitochondria within the Alphaproteobacteria (Yang et al., 1985; Williams et al., 2007; Gray, 2015) is a representative of a metabolically versatile class that contains facultatively anaerobic species capable of running both aerobic respiration and denitrification, e.g., Paracoccus denitrificans, which is discussed as a candidate for the protomitochondrion (John and Whatley, 1975; Gray, 2015). The metabolic blueprint provided by the protomitochondrial Alphaproteobacteria has been extensively modified, and today we observe diverse functions and biochemical pathways tied to mitochondria in different eukaryotic lineages (Müller et al., 2012). Investigations of mitochondrial proteomes in nitrate-reducing eukaryotes may reveal diverse routes of acquisition of the proteins involved in the pathways. This seems to be the case at least for the fungi.
Fungal nitrate respiration is tied to the mitochondrion, and details of the denitrification pathway of the soil fungus F. oxysporum have largely been resolved. The conversion of nitrite to nitric oxide is coupled to the mitochondrial electron transport chain and ATP synthesis and involves a copper-containing nitrite reductase, NirK (Kobayashi et al., 1996; Kim et al., 2009; Long et al., 2015). Because the distribution of the eukaryotic NirK gene is systematic and follows the eukaryotic phylogeny, it is suggested that the trait of nitrite reduction evolved from a single ancestor and was carried into the eukaryotic domain in the endosymbiotic event leading to the evolution of the mitochondrion (Kim et al., 2009; Shoun et al., 2012). Apparently though, the fungal reduction of nitric oxide to nitrous oxide has a different origin (Shoun et al., 2012). This reduction step is mediated by a P450 nitric oxide reductase that is found both in the mitochondria and in the cytoplasm, and it seems that this part of the denitrification pathway has been acquired by lateral gene transfer (Kizawa et al., 1991).
Moving from the fungi in the Opisthokonta to the ciliates in the Alveolata, we also find that dissimilatory nitrate reduction by the ciliate Loxodes sp. is presumably affiliated with the mitochondria (Finlay et al., 1983; Finlay, 1985). Gene sequences of this nitrate reductase are, however, not available and it is, at the moment, not possible to draw conclusions about the origin of the trait.
Nitrate accumulation is dispersed throughout the foraminiferan phylogeny, including allogromiid species, which likely evolved in the Neoproterozoic anoxic ocean and are considered to form a basal order within the foraminifers (Pawlowski et al., 2003). This might suggest that nitrate accumulation was present in the most recent common ancestor of foraminifers, albeit lost in some lineages. Usage of the intracellular nitrate may have evolved differently among foraminiferal lineages, since present data show that the allogromiids share their nitrate with endobionts (Bernhard et al., 2012a), whereas some more recently evolved rotaliids use it in their own dissimilatory metabolic pathways (Risgaard-Petersen et al., 2006).
Genes for dissimilatory nitrate reduction in diatoms are not known, but diatoms have remarkable genomes with traces of multiple plastid endosymbiotic events that allowed migration of genes from the plastid endobiont into the genome (Prihoda et al., 2012). A large number of genes also seem to be derived from bacteria by lateral gene transfer (Bowler et al., 2008; Armbrust, 2009), including genes for nitrite reductases that are targeted to the mitochondria (Allen et al., 2006). This knowledge of the chimeric diatom genomes paves the way for speculation about lateral transfer of genes involved in the dissimilatory use of nitrate, but the examination of such hypotheses requires further sequencing and bioinformatics efforts.
The pervasive lack of information on the genes and enzymes driving dissimilatory nitrate reduction among the eukaryotes also means that we cannot close in on the time of its evolutionary origin. It can be established, however, that an anaerobic energy metabolism coupled to nitrate reduction was possible in the environmental settings at the time of origin of ascomycete fungi (Lücking et al., 2009; Prieto and Wedin, 2013) and foraminifers (Pawlowski et al., 2003; Groussin et al., 2011). Diatoms evolved only ~250 million years ago (Sims et al., 2006; Armbrust, 2009), and existing hypotheses of diatom origins tend to agree that the pre-diatom or “Ur-diatom” developed in shallow marine (and thus more oxygenated) environments (Sims et al., 2006; Medlin, 2011). It will be interesting to follow the evolutionary path of dissimilatory nitrate reduction among eukaryotes as more sequence data become available.
The study of nitrate storage and dissimilatory nitrate reduction by eukaryotic microbes is still in its infancy. The discoveries of nitrate reduction to nitrite by Loxodes sp. and of denitrification by fungi were early milestones, reached a few decades ago. More recently though, this research area has gained momentum through the discovery that diatoms and foraminifers are also capable of dissimilatory nitrate reduction coupled to intracellular nitrate storage. Future research activities should address open questions regarding the (i) phylogenetic diversity, (ii) physiology and genetics, and (iii) in situ importance of eukaryotic nitrate storage and dissimilatory nitrate reduction.
The known occurrence of nitrate storage and reduction in distant eukaryotic lineages (Figure 1) suggests that these physiological traits are even more widespread among eukaryotes than previously thought. In particular, those lineages for which nitrate storage has already been documented might be “hot candidates” for performing dissimilatory nitrate reduction under anoxic conditions, e.g., gromiids and dinoflagellates (Dortch et al., 1984; Piña-Ochoa et al., 2010a). There is a growing interest in microbial eukaryotes adapted to life under low-oxygen conditions (e.g., Stoeck et al., 2009; Edgcomb et al., 2011; Müller et al., 2012; Bernhard et al., 2014; Parris et al., 2014). An obvious research strategy is, thus, to test known and novel eukaryotic lineages from low-oxygen environments for their ability to store nitrate intracellularly (Step 1) and use nitrate as an alternative electron acceptor (Step 2).
Some fundamentals of the physiology, biochemistry, and genetics of eukaryotic nitrate storage and reduction are still unknown. While 15N labeling experiments have proven that intracellular nitrate serves as an alternative electron acceptor in dissimilatory processes in fungi, foraminifers, and diatoms, intracellular nitrate may also play a role in assimilation. The exact partitioning of intracellular nitrate between dissimilation and assimilation remains to be investigated in the diverse nitrate-storing and nitrate-reducing eukaryotes. Further unknowns concern the mechanism and energy requirements of nitrate uptake, the intracellular compartment of nitrate storage, and the spectrum of electron donors used for dissimilatory nitrate reduction (e.g., organic vs. inorganic, external vs. storage compounds).
The identification of genes that encode enzymes involved in dissimilatory nitrate reduction by diatoms and foraminifers is a challenging task that needs particular attention. Knowledge of these genes will not only provide insights into the evolution and biochemistry of dissimilatory nitrate reduction in eukaryotes, but will also enable the development of molecular tools for cultivation-independent investigations directly in the environment. Lessons might be learned from the investigation of denitrifying fungi (Kim et al., 2009; Wei et al., 2015). Several diatom genomes have recently been sequenced, annotated, and interpreted in the context of the cellular nitrogen metabolism (Armbrust et al., 2004; Allen et al., 2006; Bowler et al., 2008). Furthermore, transcriptome sequencing projects of microbial eukaryotes are forthcoming and will provide a rich source of sequence information that can be screened for genes involved in eukaryotic dissimilatory nitrate reduction (Keeling et al., 2014).
The question of dissimilatory nitrate reduction mediated by putative bacterial symbionts in foraminifers (Bernhard et al., 2012a) may also be resolved as soon as eukaryotic and prokaryotic genes for this process can be distinguished. Diatoms and fungi are known to host bacterial endosymbionts too (Foster and Zehr, 2006; Kobayashi and Crouch, 2009), but so far there are no reports on an involvement of these symbionts in dissimilatory nitrate reduction. The relationship between endosymbiotic nitrate reducers and a nitrate-storing eukaryotic host is still enigmatic. It seems paradoxical that a host organism should spend energy to accumulate nitrate intracellularly and then leave it to a bacterial partner without any benefit to itself. In some foraminifers, however, endosymbiotic bacteria are known to use intracellular nitrate for synthesizing amino acids from which the eukaryotic host may benefit (Nomaki et al., 2014, 2015).
The quantitative role of eukaryotic dissimilatory nitrate reduction in the environment is highly uncertain. It has been calculated that denitrification by benthic foraminifers equals denitrification by prokaryotes in some marine sediments (Piña-Ochoa et al., 2010a), but such calculations are yet to be done for the other eukaryotes. Moreover, the differential measurement of eukaryotic and prokaryotic rates directly in the environment has not yet been achieved due to methodological constraints, and thus novel techniques that capture the uptake and dissimilatory reduction of nitrate in mixed microbial communities in situ need to be developed. Additionally, there is no consensus method available for measuring ICNO3 pools directly in the environment. It should be evaluated whether freeze-thaw cycling, boiling, whole-core squeezing, and centrifugation are equally efficient in lysing nitrate-storing cells in environmental samples. So far, these techniques also lack selectivity in terms of which taxonomic group contributes how much to the total ICNO3 pool.
Given the ubiquitous distribution and high abundance of diatoms, fungi, and foraminifers in marine ecosystems, eukaryotic nitrate storage and dissimilatory nitrate reduction definitely have the potential to contribute significantly to the marine nitrogen cycle. The products of the different pathways of eukaryotic dissimilatory nitrate reduction range from a harmless gas (i.e., dinitrogen) to a strong greenhouse gas (i.e., nitrous oxide) that will escape into the atmosphere. Other products like ammonium or nitrite might be further used by prokaryotes with important roles in nitrogen cycling, such as nitrifiers, denitrifiers, and anammox bacteria. Disentangling the network of nitrogen transformations by eukaryotes and prokaryotes will provide a more comprehensive picture of the marine nitrogen cycle than is currently available.
This review article was conceived, written, and edited by all authors.
This study was financially supported by a grant from the German Research Foundation (Deutsche Forschungsgemeinschaft) awarded to AK (KA3187/2-1), a grant from the Aarhus University Research Foundation awarded to SH (AU Ideas), and the Danish National Research Foundation (NR-P; DNRF104). Tinna Christensen is acknowledged for help with Figures 1, 2, Emma Hammarlund is thanked for critical comments on the manuscript, and Karen ní Mheallaigh is thanked for proofreading. The manuscript was improved by the critical comments of the reviewers.
Note: G. turgida was erroneously named G. pseudospinescens in Risgaard-Petersen et al. (2006); see Piña-Ochoa et al. (2010a).
Arnold, Z. M. (1972). Observations on the biology of the protozoan Gromia oviformis. Univ. Calif. Publ. Zool. 100, 1–168.
Harold, F. M. (1986). The Vital Force: A Study of Bioenergetics. New York, NY: Freeman and Co.
Høgslund, S. (2008). Nitrate Storage as an Adaption to Benthic Life. Ph.D. thesis, University of Aarhus, Aarhus.
Jepps, M. W. (1926). Contribution to the study of Gromia oviformis Dujardin. Q. J. Microsc. Sci. 70, 701–719.
Kizawa, H., Tomura, D., Oda, M., Fukamizu, A., Hoshino, T., Gotoh, O., et al. (1991). Nucleotide sequence of the unique nitrate/nitrite-inducible cytochrome P-450 cDNA from Fusarium oxysporum. J. Biol. Chem. 266, 10632–10637.
Klionsky, D. J., Herman, P. K., and Emr, S. D. (1990). The fungal vacuole: composition, function, and biogenesis. Microbiol. Rev. 54, 266–292.
Kluyver, A. J., and Donker, H. J. L. (1926). Die Einheit in der Biochemie. Chem. Zelle Gewebe 13, 134–190.
Shoun, H., and Tanimoto, T. (1991). Denitrification by the fungus Fusarium oxysporum and involvement of cytochrome P-450 in the respiratory nitrite reduction. J. Biol. Chem. 266, 11078–11082.
Sweerts, J.-P. R. A., and de Beer, D. (1989). Microelectrode measurements of nitrate gradients in the littoral and profundal sediments of a meso-eutrophic lake (Lake Vechten, The Netherlands). Appl. Environ. Microbiol. 55, 754–757.
Usuda, K., Toritsuka, N., Matsuo, Y., Kim, D. H., and Shoun, H. (1995). Denitrification by the fungus Cylindrocarpon tonkinense: anaerobic cell growth and two isozyme forms of cytochrome P-450nor. Appl. Environ. Microbiol. 61, 883–889.
Copyright © 2015 Kamp, Høgslund, Risgaard-Petersen and Stief. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
First sentence: In the evening, as darkness falls, I return to the fortress.
Premise/plot: This middle grade historical novel is the story of a boy and a bear. The novel is set in the thirteenth century--the setting is first Norway, then the sea, and finally England. The King of Norway is giving a polar bear--a 'pale bear'--to the King of England (Henry III) as a gift. But the bear needs a handler or keeper to get him there safely. Arthur, our young, desperate hero, seems an unlikely choice. But it turns out that he has a way with the bear--a way that the adults don't seem to have. His job--if he accepts it--will be to keep the bear calmed down and willing to eat. In return he'll receive passage to Wales after the bear is delivered safely. Wales is where his father's family is from originally.
This historical coming-of-age novel is packed with action, adventure, and drama.
My thoughts: I really enjoyed this one so much. I'm not sure at this point if it's "really, really like" or love. But I do know that I was so captivated by this story that I could not put the book down. I read it in one sitting. I liked the setting. I liked the characters. I liked the relationships. I liked the story--it is based loosely on a true story. The King of Norway did give King Henry III a polar bear for his menagerie. The bear did go for a daily swim in the river Thames.
You would not want to miss this once-in-a-lifetime opportunity to learn how to catch mullet, would you? Let me tell you that there are different ways to catch a mullet, whether it's by casting a net, placing a container on the seabed, or casting a rod. It also depends on where and when you decide to catch a mullet. Some people catch mullet for eating or to use as cut bait for catching other kinds of fish.
Mullet are also called "jumping" or "happy" fish because they love skipping and hopping on the water surface. They will do this even after you have placed them in a basin. You also have a choice whether to free them once you have caught them. Some people fish purely as a hobby and, rather than eating the mullet, set them free again after catching them.
They are easy to catch since they have a huge population. Catching one is not a once-in-a-blue-moon event, because they live in almost every body of water. This makes mullet easy to find.
One of the first things to remember before we start catching mullet is location. Yes, we can go to lakes, rivers, and the sea, but we should know where the mullet usually reside in the area. You may even need to ride a boat to look for mullet. In short, take time to roam around the place before deciding where to fish.
Catching mullet never seemed so easy as with a plastic container. All you have to do is punch holes in it, fill it with rocks and breadcrumbs, and place it in the water. Voila! After a few minutes, your container will be filled with mullet.
It's better if the container is transparent, but it's okay if it isn't. Place a hole in each of the four corners. On the lid of the container, cut a small rectangular hole in the middle; there must also be four holes on the lid. Push the sides of the rectangular hole inward and tape them down. This allows the mullet to come inside but keeps them from going out again.
It is important to put holes so that the water will pass through once the container is held up. It will also help the mullet to breathe.
Rocks help the container stay underwater. A couple of rocks will keep the box on the seabed until enough mullet have entered the container.
The breadcrumbs attract the mullet into the container, serving as both the bait and their food. Here, the breadcrumbs go in the container rather than on a hook; when using a rod, you tie the bread to the line instead.
This method won't even take you an hour; it is just that easy. You simply have to wait happily while the mullet fill up your container.
Casting a net into the water is another helpful and simple method for catching mullet. The good thing about a net is that you can catch many fish at once, unlike other methods that catch one mullet at a time. A net won't take much time either, since it's possible to catch a whole school of mullet in a single throw. Awesome, isn't it?
If you are not sure of the best place for fishing, ask a local or someone familiar with the area. He or she can help you find where to catch a lot of mullet, and maybe even share the dos and don'ts of the place.
A good thing about mullet is that they often swim not individually but in groups, or schools.
That is why, when you see or hear multiple fish flapping or even hopping on the surface of the water, you have reached your spot. Let your eyes follow them until you cast the net. Moreover, mullet mostly love residing in the shallow parts, especially where it's muddy. When you throw your net in those areas, you'll certainly have a good catch.
To do this, hold the rope and cast the net, standing steadily. You can throw it over the deep part of the sea, especially if you have a boat or other means of getting there. But if you are lucky, there will be mullet in some shallow parts of the sea as well.
If you have thrown it in the shallow part only, you have to pull it back to the shore. When you feel the net has reached the bottom, begin pulling it back. Wrap the rope around your arm while your other hand holds the lead line.
This is the most familiar way of catching mullet. With a fishing pole and some bait, everything falls into place. It is fine to use a light line and a small hook baited with bread or anything else; the best baits for rod fishing are flies, bread, and algae. Fishing for mullet can be a bonding activity with your father, mother, son, daughter, wife, husband, and others.
Bait is needed so that the mullet are attracted when they see it and eventually eat it. Mullet feed on microscopic algae; they even eat and smell the toxins people dump in the sea. Any bait is fine, and you are certainly free to try them one by one if the first doesn't work.
You have two options: either cast the line right away, or wait for mullet to become visible and then throw the hook toward them and see which mullet takes it.
Once the bait is thrown, a lot of mullet will compete to get at the hook. Of course, only one mullet will be the champion.
Whichever method you follow, once you have caught your mullet, put them in a basin to collect them.
As you can see, whether you choose to place a container or cast a rod, bait is essential for catching mullet. If either method is used as-is, without bread, algae, or flies, nothing will happen. But add bait, and it will attract more and more mullet.
On the other hand, sometimes it's going to be hard to catch a mullet. If you have been waiting for many minutes or even an hour, you have to be patient. Waiting isn't easy, but life is brighter when you know how to stretch your patience. Sometimes we should experience difficulty so that whatever we do (in this case, catch a fish) gives us a sense of accomplishment.
Mullet can be caught at either low or high tide. It really depends on the place, which is why you need to check the area before doing any fishing.
If you think these are the only methods for catching mullet, think again! There is one more way to catch a fish: with your bare hands. If you see a mullet, especially in a river or a shallow part of the sea, you can simply pick it up by hand. These tend to be the mullet that are bigger than usual.
In conclusion, mullet is one of the most commonly caught fish for human consumption. We catch mullet for our hobby, for food, or for whatever other reason, as long as we take good care of the environment.
With all these ways to catch a mullet, I hope you have found this article informative and entertaining at the same time. If you think it has been worth reading and helpful, please share it to spread the information.
In life, we must always seize every moment of every day, and one way is by interacting with our sea creatures.
I would love to see all your comments and other suggestions in the comment box. So, what are you waiting for? Go and have fun catching some mullets!
I know Viz's Shonen Jump has an ISSN (1545-7818), but that's from North America.
Do any Japanese manga magazines (e.g. Weekly Shonen Jump, Monthly Shonen Gangan, Dengeki Daioh, Comic Yuri Hime, etc.) have ISSNs?
There are a few places I checked. First, Wikipedia gives the ISSN for magazines such as Shonen Jump but doesn't give one for magazines such as Weekly Shonen Jump or LaLa.
Lastly, I checked the ISSN International Centre, which lists Shonen Jump, but I couldn't find Weekly Shonen Jump or LaLa. The reason I didn't mention this first is that a lack of search results is not always the best indicator of something not existing, particularly when there are issues across languages; but I searched in both English and Japanese, so I'm fairly certain this means there just aren't ISSNs for them.
Extrapolating out, it can be assumed that Japanese manga magazines don't have ISSNs. However, they do have 雑誌コード (magazine codes) that are used as identification codes for magazines/journals in Japan. The Wikipedia article is in Japanese but if you translate it, it provides a decent explanation. You can also see the magazine code (09206-06) to the left of the barcode in the magazine below.
kuwaly's answer is correct; in Japan, ISSN is not generally used. They instead use either 雑誌コード (zasshi code, magazine/journal code), JAN (Japanese Article Number) code, or 定期刊行物コード (teiki kankobutsu code, periodical publication code).
The JAN (Japanese Article Number) code is a code used exclusively for Japanese publications and is compatible with the EAN code. It always starts with 49 or 45 and comes in a 13-digit format or a shortened 8-digit format.
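Since JAN is EAN-13-compatible, its last digit is the standard EAN check digit. Below is a minimal validator with the 45/49 prefix test for Japanese codes; the sample number is made up for illustration.

```python
def is_valid_jan13(code: str) -> bool:
    """Validate a 13-digit JAN/EAN code via its check digit."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # EAN-13: weight the first 12 digits 1,3,1,3,... and take the sum mod 10.
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    check = (10 - total % 10) % 10
    return check == digits[12]

def is_japanese_jan(code: str) -> bool:
    """JAN codes carry the GS1 Japan prefixes 45 or 49."""
    return code.startswith(("45", "49"))

sample = "4901234567894"  # made-up example number
print(is_valid_jan13(sample), is_japanese_jan(sample))  # True True
```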
Unlike in other countries, ISSN is not used in the distribution of serial publications in Japan (the "magazine code" is the common identifier); an ISSN is granted only after the publisher applies for it.
Which banks does Hurdlr support?
Hurdlr supports over 9,600 U.S. banks including popular options like Bank of America, Capital One, Chase, Citibank, Wells Fargo, American Express, Navy Federal, USAA and PNC. Hurdlr also supports many small local institutions.
You can add as many credit and debit cards, banks, and sub-accounts with the same bank as you wish. The only limitation is linking accounts held under different login credentials at the same bank, which Hurdlr doesn't currently support.
How do you add onto a 1906 home in a protected Historic District? Very carefully. Requirements that had to be met were that the new addition be both “compatible AND differentiated.” The sunroom/library addition is compatible in its boxy form, which echoes the original home, in its color, and by being clearly subservient to the two-story mass behind. The addition is differentiated by being flat-roofed instead of hipped, and by juxtaposing its light and airy structure with the mass of the old home. One-third of the addition provides much-needed pantry and mudroom space, while two-thirds is a sunroom/library overlooking a beautifully landscaped yard.
I will list the match, what happened for each of the categories, and my prediction in parentheses.
1. U.S. titleholder Shelton Benjamin beats R-Truth. I correctly predicted R-Truth being the opponent but was wrong on him winning the match. I'm 1-1. (Also, R-Truth was my opponent choice but he was also my should win choice).
2. Rey Mysterio beats Kane. I correctly predicted the fans picking a no holds barred match and I correctly predicted a Mysterio win. I'm 3-1 (Also, Mysterio was my should win choice. But my match choice of falls count anywhere came up short).
3. ECW Champion Matt Hardy beats Evan Bourne. I correctly predicted Bourne being the opponent. I also correctly predicted Hardy winning the match. I'm 5-1. (Also, Bourne was my choice for opponent. But he was also my should win choice).
4. John Morrison & The Miz vs. Cryme Tyme. Does Cryme Tyme have this many fans? I guess so. I incorrectly predicted the match, making the result incorrect as well. I'm 5-3. (I had the same choices for my match and who should win, so those were also wrong). At least Ted DiBiase/Cody Rhodes vs. C.M. Punk/Kofi Kingston came a close second.
5. Intercontinental Champion Santino Marella vs. the Honky Tonk Man. I was incorrect with "Rowdy" Roddy Piper being the opponent choice, so I also missed the result since Honky Tonk Man won by disqualification. I'm 5-5. (My vote for the Honky Tonk Man to be the opponent was correct, but my should win of Marella was not).
6. Undertaker beats Big Show. I was correct with my last man standing match choice, as I was with my Undertaker choice for winner. I'm 7-5. (My choice for knockout match was last in the voting, but I also had Undertaker for my should win choice).
7. Divas Halloween Costume Contest. My choice of Mickie James to win was correct. She is definitely the favorite for WWE fans. I'm 8-5. (Reportedly, my should win choice had the costume credentials for victory).
8. WWE Champion Triple H beats Jeff Hardy. My match choice of Jeff Hardy was correct, as was my match winner of Triple H. I'm 10-5. (My choice for a triple-threat match obviously was incorrect, but my should win choice of HHH was also obviously correct).
9. Batista beats Chris Jericho to win the World Heavyweight Championship. Wow. I didn't expect this at all. My choice of "Stone Cold" Steve Austin as guest referee was a big winner, but I missed out with my Jericho pick. I finish 11-6. (I had the same predictions for referee and match winner, so I missed those).
The sooner the US Federal government can enact something vaguely sensible in terms of cannabis legislation, the sooner the UN can start repealing some of its regulations and codes. That in turn will allow countries like Benin to allocate law enforcement resources to areas where they are actually needed.
Also, dare we suggest, the stuff obviously grows well in Benin. Imagine starting the long, long road to a regulated market now rather than in 20 years' time.
The State Commander of NDLEA, Mr. Buba Wakawa, disclosed that the seizures were made based on intelligence reports received by the Command.
According to Wakawa, the exhibits at the Egor warehouse were hidden inside the ceiling of a bungalow, where a 64-year-old grandfather, Daniel Idemudia, who resides in the warehouse, was arrested.
“We were shocked to discover 280 sacks of dried weeds suspected to be cannabis sativa all hidden inside the ceiling of the building. This discovery will serve as a warning to others that their warehouses will be discovered and the drugs seized,” he said.
The NDLEA Commander, who promised to arrest other members connected with the warehouses, appealed to members of the public with useful information that could lead to the recovery of drugs and the arrest of traffickers to provide such information to the Agency.
Article: E.U. Regulation Will Revolutionize Global Data Privacy. How Will This Affect The Regulated Cannabis Sector?
Connecticut: Informational Hearing To Be Held At Legislative Offices Tuesday.
Tips for Website Maintenance
Regular maintenance of websites helps businesses keep their content relevant at all times for their clients to view. Doing so helps ensure that the website functions smoothly and stays accurate. Maintaining your website involves practices such as correcting defective links and uploading better images in order to attract more clients. You should assess the site to ensure that your business strategies are well projected to the audience. To keep your site relevant among clients, review it at least once a month. During website maintenance, check for the key points that will make or break your website. One of these points is keeping your website updated at all times, especially by removing date-sensitive events or announcements whose deadlines have already passed. Functional elements such as databases, contact forms, and ecommerce should be checked frequently so that they work effectively. Some websites have external links to other sites; these should work properly and point to the correct locations. Website maintenance requires a business to regularly check the structural elements of its web pages to ensure that they work accordingly and that images are displayed correctly. Keeping up with current technologies is a very important part of maintaining a website. Businesses with fresh sites are those that use the latest technologies to update their websites from time to time. If you don't have the necessary skills in web maintenance, you can call in a web developer who will keep your site functioning properly by updating it. Before clients opt to buy any products online, they visit a couple of websites and compare their services. Businesses with a bad image usually have content that is boring and old. Most clients go for online businesses that have fresh websites with relevant and attractive content.
If the content of your website no longer relates to your business, you can always build a new site to replace it. A positive online presence can be achieved simply by maintaining your website. It is normal for web links to get exchanged or broken over time. To avoid this, use a link checker to ensure that they work as they are supposed to.
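A minimal link checker can be written with Python's standard library alone. The sketch below uses hypothetical URLs; note that some servers reject HEAD requests, so a production tool would also fall back to GET and add rate limiting and retries.

```python
import urllib.request
import urllib.error

def check_links(urls, timeout=10):
    """Report the HTTP status of each URL, flagging broken links."""
    results = {}
    for url in urls:
        req = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                results[url] = resp.status  # 200 = OK
        except urllib.error.HTTPError as e:
            results[url] = e.code           # e.g., 404 = broken link
        except urllib.error.URLError as e:
            results[url] = f"unreachable ({e.reason})"
    return results

# Hypothetical URLs for illustration:
for url, status in check_links(["https://example.com/",
                                "https://example.com/old-page"]).items():
    print(f"{status}\t{url}")
```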
Businesses should back up their websites on a regular basis. This is especially true for businesses that use their online interface to make changes to their websites. Backing up your site lets you recover your content in the event of a crash.
Hi! My name is Alina and I'm a 27-year-old boy from Poland.
Officially known as the Hashemite Kingdom of Jordan, Jordan is an Arab nation on the east bank of the River Jordan. The country, which is a constitutional monarchy with a representative government, has a population of around 6.6 million.
In recent years, Jordan's economy has gone through a major shift, moving from an aid-dependent state to becoming one of the region's most robust, competitive and open economies. With limited natural resources, the country has a thriving service sector, and the capital city, Amman, is growing as a commercial and financial centre for the Levant. A highly skilled, educated and literate workforce has been a major driver of economic growth.
Tourism also plays a major role in the economy. Jordan is home to a number of major historical sites which include the world-renowned ancient city of Petra, as well as the coastal attractions of the Dead Sea and the Red Sea.
The ancient city of Petra is a symbol of Jordan. It was carved in stone 2000 years ago. Today, Petra is Jordan's most-visited tourist attraction.
I have absolutely no money and I never get to eat lunch. Please donate so I can get some food. A dollar goes a long way.
The initial entry of Islam into South Asia came in the first century after the death of the Prophet Muhammad. The Umayyad caliph in Damascus sent an expedition to Baluchistan and Sindh in 711 led by Muhammad bin Qasim. He captured Sindh and Multan. Three hundred years after his death Sultan Mahmud of Ghazni, the ferocious leader, led a series of raids against Rajput kingdoms and rich Hindu temples, and established a base in Punjab for future incursions. In 1024, the Sultan set out on his last famous expedition to the southern coast of Kathiawar along the Arabian Sea, where he sacked the city of Somnath and its renowned Hindu temple.
Muhammad Ghori invaded India in 1175 A.D. After the conquest of Multan and Punjab, he advanced towards Delhi. The brave Rajput chiefs of northern India headed by Prithvi Raj Chauhan defeated him in the First Battle of Tarain in 1191 A.D. After about a year, Muhammad Ghori came again to avenge his defeat. A furious battle was fought again at Tarain in 1192 A.D. in which the Rajputs were defeated and Prithvi Raj Chauhan was captured and put to death. The Second Battle of Tarain, however, proved to be a decisive battle that laid the foundations of Muslim rule in northern India.
The period between 1206 A.D. and 1526 A.D. in India's history is known as the Delhi Sultanate period. During this period of over three hundred years, five dynasties ruled in Delhi. These were: the Slave dynasty (1206-90), Khilji dynasty (1290-1320), Tughlaq dynasty (1320-1413), Sayyid dynasty (1414-51), and Lodhi dynasty (1451-1526).
The concept of equality in Islam and Muslim traditions reached its climax in the history of South Asia when slaves were raised to the status of Sultan. The Slave Dynasty ruled the Sub-continent for about 84 years. It was the first Muslim dynasty that ruled India. Qutub-ud-din Aibak, a slave of Muhammad Ghori, who became the ruler after the death of his master, founded the Slave Dynasty. He was a great builder who built the majestic 238 feet high stone tower known as Qutub Minar in Delhi.
The next important king of the Slave dynasty was Shams-ud-din Iltutmush, who himself was a slave of Qutub-ud-din Aibak. Iltutmush ruled for around 26 years from 1211 to 1236 and was responsible for setting the Sultanate of Delhi on strong footings. Razia Begum, the capable daughter of Iltutmush, was the first and the only Muslim lady who ever adorned the throne of Delhi. She fought valiantly, but was defeated and killed.
Finally, the youngest son of Iltutmush, Nasir-ud-din Mahmud, became Sultan in 1245. Though Mahmud ruled India for around 20 years, throughout his tenure the main power remained in the hands of Balban, his Prime Minister. On the death of Mahmud, Balban directly took over the throne and ruled Delhi. During his rule from 1266 to 1287, Balban consolidated the administrative set up of the empire and completed the work started by Iltutmush.
Following the death of Balban, the Sultanate became weak and there were a number of revolts. This was the period when the nobles placed Jalal-ud-din Khilji on the throne. This marked the beginning of the Khilji dynasty. The rule of this dynasty started in 1290 A.D. Ala-ud-din Khilji, a nephew of Jalal-ud-din Khilji, hatched a conspiracy, got Sultan Jalal-ud-din killed, and proclaimed himself as the Sultan in 1296. Ala-ud-din Khilji was the first Muslim ruler whose empire covered almost the whole of India up to its extreme south. He fought many battles and conquered Gujarat, Ranthambhor, Chittor, Malwa, and Deccan. During his reign of 20 years, Mongols invaded the country several times but were successfully repulsed. From these invasions Ala-ud-din Khilji learnt the lesson of keeping himself prepared by fortifying and organizing his armed forces. Ala-ud-din died in 1316 A.D., and with his death, the Khilji dynasty came to an end.
Ghyasuddin Tughlaq, who was the Governor of Punjab during the reign of Ala-ud-din Khilji, ascended the throne in 1320 A.D. and founded the Tughlaq dynasty. He conquered Warangal and put down a revolt in Bengal. Muhammad-Bin-Tughlaq succeeded his father and extended the kingdom beyond India, into Central Asia. Mongols invaded India during Tughlaq rule, and were defeated this time too.
Muhammad-Bin-Tughlaq first shifted his capital from Delhi to Devagiri in Deccan. However, it had to be shifted back within two years. He inherited a massive empire but lost many of its provinces, more particularly Deccan and Bengal. He died in 1351 A.D. and his cousin, Feroz Tughlaq succeeded him.
Feroz Tughlaq did not contribute much to expand the territories of the empire, which he inherited. He devoted much of his energy to the betterment of the people. After his death in 1388, the Tughlaq dynasty came virtually to an end. Although the Tughlaqs continued to reign till 1412, the invasion of Delhi by Timur in 1398 may be said to mark the end of the Tughlaq empire.
Today, the gap between customers' expectations and the service they actually receive is huge. Customers expect personalized, consistent, accurate, and timely service. However, many companies don't equip their agents with the right technology to satisfy these demands.
Customers waste too much time repeating themselves to agents. Agents, in turn, struggle to confidently communicate with their customers while navigating between multiple applications and searching various systems for the right answer.
Companies demand that their agents follow good processes, but oftentimes, agents are unable to connect with the right process at the right time. Moreover, companies try to imprint their values on their agents, but can only hope that agents find the right balance between service costs, quality, and other key performance indicators (KPIs) when making decisions.
There's no denying that it's an ineffective system and technology is often to blame. Here are five tips that hold technology more accountable and improve the overall customer service experience.
1. Align service offerings with the brand.
A company's brand can be defined as the customers' perceptions of the company's value proposition. Ikea, for example, does a great job of aligning its service offerings with its brand.
Customers shop at Ikea because they are comfortable serving themselves. Comprehensive Web self-service capabilities are offered in various languages through a chat bot, email support, and limited phone support. Customers are not disappointed with Ikea's lack of white-glove service because it's not Ikea's business model.
All companies should offer communication channels that correspond with their respective brands and customer expectations. Equally important is offering the same service experience across all supported communication channels. In other words, break down silos and create a single knowledge base so that customers receive consistent answers regardless of the channel they're accessing.
2. Design an efficient service experience.
Most agents use dozens — even hundreds — of non-integrated tools and applications to complete their tasks. A better strategy is to bring business process management to customer service offerings.
Companies should establish customer experience flows, which are repeatable business processes designed to display the information that agents need in order to resolve customer issues. The information that is displayed includes insight pertaining directly to the customer query, in addition to integrated intelligence gathered from the agent desktop and back-end systems.
Companies should integrate knowledge sources with experience flows so that agent searches are met with the right information at the right time. This ensures a satisfactory result and compliance with company policies. It is far more efficient than requiring the agent to pick through disparate applications. More important, streamlining these activities helps the agent project an intelligent, confident image.
3. Let agents monitor themselves.
An adaptive technology model — controlled and monitored by service managers — allows agents to monitor how well they are doing against the contact center's benchmarks, and prompts them to escalate issues more quickly. A relatively simple dashboard can be created from open data sources, as sketched below.
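As a toy illustration of such a self-monitoring dashboard, the snippet below scores a handful of per-interaction records against benchmark values. The metric names, records, and thresholds are invented for the example and do not describe any vendor's actual schema.

```python
# Invented per-interaction records for one agent.
interactions = [
    {"handle_time_min": 6.5, "resolved_first_contact": True,  "csat": 4.5},
    {"handle_time_min": 9.0, "resolved_first_contact": False, "csat": 3.0},
    {"handle_time_min": 5.0, "resolved_first_contact": True,  "csat": 5.0},
]

# Benchmarks a service manager might configure (invented values).
benchmarks = {"avg_handle_time_min": 7.0, "fcr_rate": 0.75, "avg_csat": 4.0}

n = len(interactions)
kpis = {
    "avg_handle_time_min": sum(i["handle_time_min"] for i in interactions) / n,
    "fcr_rate": sum(i["resolved_first_contact"] for i in interactions) / n,
    "avg_csat": sum(i["csat"] for i in interactions) / n,
}

for name, value in kpis.items():
    target = benchmarks[name]
    # Lower is better for handle time; higher is better for the other metrics.
    ok = value <= target if name == "avg_handle_time_min" else value >= target
    print(f"{name}: {value:.2f} (target {target}) -> {'on track' if ok else 'escalate'}")
```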
4. Make decisions based on concrete evidence.
Service managers should design and model ideal experience flows from the perspective of customers as well as the company. These experience flows should include KPI analyses so that the company can balance the average cost of the customer interaction with the expected result, while taking into account compliance and loyalty.
It's important to think of these KPIs as balanced scorecard metrics that need to be considered in unison when designing a service experience. This is because optimal service experiences cannot be designed with only one metric (e.g., cost) in mind.
Once service interactions are designed and deployed, companies must measure success. Success should not be measured by past personal experiences, success stories, or mimicry of top performers — it must be measured based on concrete evidence. This can be done by creating a set of varied experience flows, measuring the success of each, and then using hard data to evolve service offerings — a practice also known as evidence-based service.
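One simple form of evidence-based service is an A/B comparison of two experience flows. The sketch below runs a two-proportion z-test on resolution rates with invented counts; it is a statistical illustration under those assumptions, not a description of any particular tooling.

```python
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Invented counts: resolved cases out of total handled per flow variant.
p_a, p_b, z, p = two_proportion_z_test(success_a=410, n_a=500,
                                       success_b=370, n_b=500)
print(f"Flow A: {p_a:.0%}, Flow B: {p_b:.0%}, z = {z:.2f}, p = {p:.4f}")
```

With these numbers Flow A resolves 82% vs. Flow B's 74% (p < 0.01), the kind of hard evidence the text argues should drive how service offerings evolve.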
5. Strongly consider service-oriented architecture (SOA).
SOA gives organizations the capacity to easily integrate systems into end-to-end experiences, thereby leveraging prior investments. Instead of being inflexibly hard-wired, SOA ensures that technology systems are loosely integrated together. This flexibility also allows for quick adaptation to changing business needs.
Companies that use SOA can quickly fold in new applications, data sources, and knowledge bases. They can respond more easily to shifting business dynamics and service demands. Once set up, many of the changes to an SOA-based system can be easily implemented by a business user without having to wait for technology support.
Mark Angel ([email protected]) is the chief technology officer of Kana Software. He has worked in the fields of customer service software, knowledge management, and search technology development for more than two decades.
Nexus optimizes live support interactions to resolve more customer tech-support issues.
I chose to marry a person from another country. We have had many adventures together. Was it easy? Definitely not. Would I trade my experiences to have stayed in my own country? No, I wouldn't.
In any marriage, you have to compromise and find a middle ground to keep your relationship compatible. There are some choices that intercultural couples must make that are very difficult.
Here are a few things that stand out to me as difficult decisions for us as an intercultural couple.
I think this is probably the most difficult decision, since in most cases, one or the other will have to sacrifice more. The way you grew up is how you are most comfortable. You usually pattern your own home after what you are used to. But, with two cultures you have to be open to a mixture of styles.
I grew up with my own room, slept in a canopy bed, with pink carpet (pretty much spoiled rotten), and my husband never had his own room, and he slept on a mat on the floor in a one room house with the whole family. So, needless to say, we had a bit of adjusting to do. We lived our first year where I am from, and then made the decision to move to where he is from for three years, but ended up spending a total of 13 years in Tonga, two in Samoa, and 20 in Hawaii.
My husband was not fond of the cold weather in the winters where I grew up, and I was not particularly fond of the hot humid climate in the islands, but I managed to adjust.
When the husband and wife speak different languages, they may have problems expressing their feelings with one another. What might be teasing in one language can be taken as an offense in the other. For example, in my husband's culture, if they say that you are "fat", it is a compliment, but to me that was not taken very well. We have had a few problems with this language issue, and have to make ourselves very clear when it comes to our relationship.
Ideally, both languages will be spoken by both the husband and wife, and also taught to the children, but usually one language is preferred over the other. I did my best to learn my husband's language, and our children picked up quite a bit also, but mostly we spoke English (probably because I was the main one who raised the children).
Gotta eat, right? But what the wife likes to eat, and what the husband likes to eat may be totally different, especially if they are from different countries. It would be ideal if you had enough money to pay a cook for each of you, but since that is not practical, there has to be a lot of give and take.
I grew up in a family where we ate mostly meat and potatoes with a side of veggies, and cake or pie for dessert. My husband, on the other hand, grew up by the sea. They ate a lot of seafood, including fish, octopus, mussels, and crab. Also, because he lived close to Fiji, they ate very spicy curries. This was all very foreign to me, but I was willing to give them a try, and am so glad that I did in some cases. I still don't like raw fish very much or octopus, but I am a fan of most cooked fish and I love curry. Also, I was delighted to learn there are many different kinds of bananas and papaya, and I especially enjoyed the variety of fruits in the islands.
There are good and bad aspects of all cultures. You need to decide which ones to embrace and which ones to discontinue. Probably my biggest challenge during our first years of marriage was the extended family tradition in his culture.
We had only been married a few months when my husband's brother came to live with us. We have had several of his nieces and nephews live with us over extended periods of time. And, we have had my mother-in-law live with us, too. Being a more private person, I have had to learn how to share my space, food, clothes, kitchen, time, vehicle and money with those that stayed with us. My husband's family do not have "distant relatives". They are all considered close relatives. You have to know where to draw the line, otherwise you may lose yourself. Also, I had to give up my "American dream" because moving to the islands meant earning a much lower income, since my husband didn't become a US citizen until much later on.
My husband was raised with "hands-on" discipline, while I don't remember ever being spanked (maybe once), so when it came time to raise our children, we had a few disagreements about how to handle them. I would cry as much as the kids when my husband got his belt out, but fortunately he mellowed as each child came along. I am happy that we were able to raise our children mainly in Polynesia. They had an excellent education at an English-speaking school, which was patterned after the British school system. They have all done very well, and each went on to graduate from college. One great thing the children learned where they were raised is respect for older people. That is something that seems to be missing a bit in the youth today.
Our latest dilemma is where we will retire. My husband would love to go back to his homeland and farm year round or lay on the beach. I want to be near my grandchildren, so we had thought of going our separate ways, and meeting up a couple of times a year, but that is still under discussion. We have to work this out together, so I will let you know how it turns out. It definitely takes a strong commitment to our marriage vows for us to live happily ever after together!
We have had so many wonderful experiences in our travels. We decided to live in Hawaii because it is the best of both worlds. I am still in America, but out in the Pacific. The population here is so diverse, and I still have the shopping, medical facilities, and conveniences that I am used to. My husband can garden year round and go to the beach just like in his homeland, so it will be hard to leave.
Thank you Yoridale. I appreciate that! Best wishes and much aloha!
Enjoyed reading your hub and the true points you made.
So right, LianaK. Thanks for the suggestion!
I know that you will both find a way to work it out, but a get away in the islands is not a bad idea! Love ya mom!
Glad you liked it Yoridale. I think there is a growing number of biracial couples that would have some fun stories to tell. Thank you.
I enjoy reading your story. I know some couples there can relate. Thanks for sharing.
Thank you Riverfish24. So glad you found my hub insightful. Best wishes to you and your husband. You can make it work if you keep an open mind and loving heart.
This is a wonderful hub. My husband and I belong to the same country but come from different cultures and geographical regions, with differences in language, food, upbringing etc. It has just been a few years but I can see how it is going to be, and your hub has given so many insights. Thank you, I can learn so much and I see ourselves following a similar path like you two did! Making the best of both worlds and loving and cherishing the differences amongst the commonalities.
I hope that this hub will help someone who is contemplating an intercultural marriage. Thank you for your positive comments Pamela99. Aloha!
Thank you PWalker281. Yes, I thought it would be like moving to Hawaii, but Tonga is quite a bit different from Hawaii in many ways. I really didn't know what I was getting myself into, but perhaps it was better that way.
I appreciate the fact that you shared so openly about the trials and adjustments you both made to have a good marriage. This is an excellent hub which I think would be helpful to other interracial couples. Voted up and interesting.
I had a lot of "cultural" adjusting to do when I moved to Hawaii from Washington DC, where I had lived all my life, so I can definitely see how challenging an interracial and intercultural marriage can be. But, as you and your husband have demonstrated, it can also be an enriching and rewarding experience. The video interview at the end of the hub was excellent.
Voted up, useful, and (very) interesting.
Thanks, Jaggedfrost, for reading to the end. I truly appreciate your comments. Aloha!
I doubt that we could stay apart very long - we enjoy annoying each other too much! Glad you have positive things to say about my husband's race. They are really great, and have many fine qualities. And, yes, they are big boned, so they are doing quite well in athletics. My husband played rugby when we first met. He has scars to prove it. His smile is one of the things that endears him to many. Thanks for your great comments.
Don't worry Lisa HW. It was more of a joke than for real. We couldn't live apart that long after all we have been through together. We are leaning towards the mainland, but it would be great to have a get-away in the Pacific to escape the snow.
Thank you writtenbylv. Glad you liked it. Aloha!
I, too, was disappointed to see you may live apart much of the time. It seems a damp squib after what you have done and created together.
I love Tongan people. Many years ago, we had Queen Salote here for the Coronation of our queen. Salote's smile lit up the whole of rainy Britain and we all loved her to death.
Now we have some Tongan rugby players who catch lesser mortals, throw them in the air, and forget to catch them!
Their own side is great, too.
Your husband is a handsome man with one of those great smiles.
I hope whatever you decide, Elayne, it brings you happiness as you age: not a good time to be alone.
elayne101, I was sorry to get to the end of your Hub and see what your latest dilemma is. Best wishes in finding a way to work it out in a way that you're sure is the right way for you.
I like this article, it was very informative. Thanks! |
0.935223 | Could the French hinder Draft N in Europe?
The European Union has special rules for 5GHz WiFi. It looks like France is tinkering with them.
A dispute over spectrum regulations could hamper the arrival of fast WiFi in Europe. New regulations have been created which affect 100 Mbit/s Draft N equipment - but the French are insisting those regulations be introduced in a way that will cause problems for existing WiFi equipment.
Anyone with a long memory for WiFi will feel that, if a spanner is thrown in the WiFi works, the chances are the hand holding it is French. In 2002, French regulators delayed approval of the 802.11a specification in Europe, and the current issue is in some ways a continuation of that problem.
In 2002, the main WiFi standard was 802.11b, operating in the licence-exempt 2.4GHz band at a data rate of 11 Mbit/s. To offer faster speeds, vendors moved to the 5GHz band, where a greater number of channels allowed them to create 802.11a, operating at 54 Mbit/s.
In Europe, however, that band included frequencies used by military radar; the European standards body, ETSI, would not allow WiFi there until dynamic frequency selection (DFS) and transmit power control (TPC) had been included to prevent interference. DFS allowed a WiFi system to back off from a channel where radar was detected - an early approach towards so-called cognitive radio. ETSI defined a common European test for radar detection, Euro Norm (EN) 301 893.
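The DFS idea is simple enough to sketch in a few lines. The following Python fragment is purely illustrative - the function names, the channel list, the detection stub and the 30-minute back-off constant are all assumptions made for the example, not anything taken from a real driver or from EN 301 893 itself:

```python
import random
import time

# Illustrative subset of 5GHz channel numbers; real channel plans vary by region.
CHANNELS_5GHZ = [36, 40, 44, 48, 52, 56, 60, 64]
NON_OCCUPANCY_S = 30 * 60   # assumed radar back-off period, in seconds

def radar_detected(channel):
    """Stand-in for the radio's radar-pulse detector (hypothetical)."""
    return random.random() < 0.01

def usable(channel, blocked):
    """A channel is usable if radar has not been seen on it recently."""
    vacated = blocked.get(channel)
    return vacated is None or time.time() - vacated > NON_OCCUPANCY_S

def dfs_step(channel, blocked):
    """One DFS decision: stay on the channel, or vacate it if radar appears."""
    if radar_detected(channel):
        blocked[channel] = time.time()   # vacate, and remember when
        candidates = [c for c in CHANNELS_5GHZ if usable(c, blocked)]
        return random.choice(candidates) if candidates else None
    return channel
```

The essential behaviour is the regulatory one: once radar is seen on a channel, the radio vacates it, remembers not to return for a while, and carries on elsewhere.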
The ETSI standards were endorsed by the IEEE, and "harmonized" WiFi was called 802.11h - but since vendors implemented the ETSI tweaks in all their 5GHz kit, the term was rarely used as all 802.11a kit met the 802.11h specifications (read a white paper on the subject, and the EC Decision 2005/513/EC).
Unfortunately, 802.11a was also hardly used. The 802.11g specification had taken WiFi's 2.4GHz branch to the same 54 Mbit/s data rate, so few people actively took up 802.11a, despite the appearance of dual-band a/b/g devices.
Since 2004, despite occasional reports that it was improving, and vendor backing, 802.11a has been doggedly uninteresting.
The emerging 802.11n standard looks likely to bring 5GHz back to life: it operates in both bands as a successor to a/b/g products, using MIMO (multiple input, multiple output) technology to achieve faster speeds. Early products have been single-band, operating only in 2.4GHz, but dual-band products are appearing, such as the Apple Airport Extreme.
Dual-band products will be better - assuming that both client and access point are dual-band - because the 2.4GHz band is becoming increasingly crowded with WiFi, Bluetooth and leakage from microwave ovens. If 2.4GHz isn't working well, dual-band devices can move to 5GHz for better throughput, as the sketch below illustrates.
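As a rough illustration of that fallback logic, here is a minimal band-selection policy in Python; the threshold value and the idea of deciding purely on the noise floor are simplifying assumptions of mine, not how any particular driver behaves:

```python
def choose_band(noise_2g4_dbm, noise_5g_dbm, threshold_dbm=-85.0):
    """Purely illustrative policy: stay on 2.4GHz for its longer reach,
    but move to 5GHz when the 2.4GHz noise floor rises above a threshold
    and 5GHz looks quiet."""
    if noise_2g4_dbm > threshold_dbm and noise_5g_dbm <= threshold_dbm:
        return "5GHz"
    return "2.4GHz"

# A crowded 2.4GHz band (-78 dBm noise floor) pushes the client to 5GHz:
print(choose_band(noise_2g4_dbm=-78.0, noise_5g_dbm=-95.0))  # -> 5GHz
```

Real clients weigh signal strength, supported rates and vendor heuristics as well, but the principle is the same.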
The 802.11h tweaks may not have achieved much in getting us to use 802.11a, but you would think that at least they have paved the way for 802.11n to move smoothly into 5GHz. Unfortunately, this isn't the case.
Just as WiFi kit is set to move into the 5GHz band, ETSI has brought out a new version of its DFS tweak - known as EN 301 893 version 1.3.1 - published in October 2006.
The new version tightens up the requirements for detecting radar in the 5GHz bands, and vendors were originally given until March 2008 to comply with it.
Across Europe as a whole, regulations in the 5GHz band are supposed to become identical, according to the EC's decision 2005/513/EC. France's regulatory authority, however, decided to demand compliance with the newer version sooner than the rest of Europe, in a decision dated December 13 2005 from ARCEP, the French spectrum regulator.
As a result, equipment complying with version 1.2.3 of the DFS specification - essentially all current 5GHz WiFi equipment - cannot legally be sold in France.
a surprise," said Tony Graziano, director of technical and regulatory affairs at European industry group EICTA, in a letter to the EC complaining about the impact of this decision.
Industry groups attempted to broker a compromise, which would bring in the newer specification sooner across Europe, but that was still not soon enough for France, apparently. Demanding compliance earlier makes no sense when wide deployment of 5GHz equipment isn't expected till next year anyway.
"EICTA is of the view that the position from France is in conflict with Community law," says Graziano. The earlier version of EN 301 893 was published in the Official Journal of the European Commission, so there should be no problem using it, he says, and France should remove its specific insistence on the later version.
The problem with the newer version is that current silicon can't support it. Firmware bringing today's chips into line results in false positives, according to Michael Coci, director of technical marketing at Trapeze.
The only practical fix is to disable many of the 11a channels, which reduces the 11a band to around three channels, leaving the equipment with the same congestion issues as the 2.4GHz band.
Chips that can handle the new DFS specification, without this problem, will be available next year. It would make more sense to update the requirement at the same time as the silicon is available, so the market can develop this year - especially as the amount of equipment actually sold this year will be too small for the difference in DFS specifications to have any impact on military radar.
It's too early to say whether the issue will cause any trouble outside France, but vendors are lobbying hard to get France into line with the rest of Europe, and avoid troubles with the arrival of N-grade WiFi in Europe. |
0.953164 | Why should everyone learn a second language?
Although learning a new language might not always be (1)………… easy for older adults as it is for younger people or children, a number of studies suggest it can help slow (2)………… age-related psychological decline. A recent study has also revealed that people who speak more than one language actually (3)………… the world differently. (4)………… on our primary spoken language, some of us look at the same set of events but perceive (5)……… oppositely. For example, Russian speakers can potentially (6)………… shades of blue faster than native English speakers, while Japanese speakers group objects by material (7)………… than shape. The study suggests that our language plays a profound but unconscious role in the perception of various events. Another, perhaps less scientific, reason for becoming bilingual is that it can set the way for a salary increase and open up tons of amazing job opportunities that would be far beyond (8)………… for an individual who only knows one language. |
0.971572 | If its fuselage, tail, and engine nacelles contribute nothing to an aircraft's lift, why not get rid of them?
Designers pursued the all-wing dream from the first decade of powered flight, notably Jack Northrop in the U.S. and the Horten brothers in Germany. Reimar and Walter Horten were a step ahead, testing an all-wing sailplane in 1933, a twin-engined pusher in 1937, and a turbojet fighter-bomber in 1944. When the war ended, Reimar was working on a six-engine "Amerika Bomber" to carry a hypothetical atomic bomb to New York City.
Postwar, the western Allies dismissed their work, though the British toyed with a transport version of the Amerika Bomber. Walter stayed in Germany and eventually rejoined the Luftwaffe; Reimar went to Argentina and worked for the Peron government.
Meanwhile, Jack Northrop was still trying to build a successful all-wing turbojet bomber in the 1950s. That he never hired the Hortens, even as other German engineers were being recruited for the U.S. space program, may have been one of history's great missed opportunities.
In the end, all that came from their work was a dozen aircraft whose beauty still astonishes. This is especially true of the Ho 229 fighter-bomber, a bat-like warplane that would not look out of place at a 21st-century air show or combat airfield.
Ho I - 1931 - a flying-wing sailplane.
Ho II - 1934 - initially a glider, it was fitted with a pusher propeller in 1935. Looked very like Northrop's flying wings.
Ho III - 1938 - a metal-frame glider, later fitted with a folding-blade [folded while gliding] propeller for powered flight.
Ho IV - 1941 - a high-aspect-ratio glider [looking very like a modern sailplane, but without a long tail or nose].
Ho V - 1937-42 - first Horten plane designed to be powered, built partially from plastics, and powered by two pusher propellers.
Ho VI "Flying Parabola" - an extremely-high-aspect-ratio test- only glider. [After the war, the Ho VI was shipped to Northrop for analysis].
Ho VII - 1945 - considered the most flyable of the powered Ho series by the Horten Brothers, it was built as a flying-wing trainer. [Only one was built and tested, and 18 more were ordered, but the war ended before more than one additional Ho VII could be even partially completed].
Ho VIII - 1945 - a 158-foot wingspan, 6-engine plane built as a transport. Never built. However, this design was "reborn" in the 1950's when Reimar Horten built a flying-wing plane for Argentina's Institute Aerotecnico, which flew on December 9, 1960 -- the project was shelved thereafter due to technical problems.
Ho IX - 1944 - the first combat-intended Horten design, jet-powered [two Junkers Jumo 004B's], with metal frame and plywood exterior [due to wartime shortages]. First flew in January 1945, but never in combat. When the Allies overran the factory, the almost-completed Ho IX V3 [third in the series - this plane was also known as the "Gotha Go 229"] was shipped to the Air and Space Museum.
Four aircraft of the Ho IX type were started, designated V.1 to V.4. The V.1 and V.2 were built at Göttingen, designed to carry two BMW 003 jet engines.
V.2 was built with two Jumo 004 [jet] engines and had two hours of flying before crashing during a single-engine landing. The test pilot, Erwin Ziller, apparently landed short after misjudging his approach.
V.3 was built by Gotha at Friedrichsroda as a prototype of the series production version.
V.4 was designed to be a two-man night fighter, with a stretched nose section to accommodate the second crewman.
The Horten Ho 229 [often erroneously called Gotha Go 229 due to the identity of the chosen manufacturer of the aircraft] was a late-World War II flying wing fighter aircraft, designed by the Horten brothers and built by the Gothaer Waggonfabrik. It was a personal favourite of Reichsmarschall Hermann Göring, and was the only plane to be able to meet his performance requirements.
In the 1930s the Horten brothers had become interested in the all-wing design as a method of improving the performance of gliders. The all-wing layout removes any "unneeded" surfaces and - in theory at least - leads to the lowest possible drag. For a glider, low drag is very important: with a more conventional layout you have to go to extremes to reduce drag, and you end up with long, more fragile wings. If you can get the same performance with a wing-only configuration, you end up with a similarly performing glider with wings that are shorter and thus sturdier.
Years later, in 1943, Reichsmarschall Göring issued a request for design proposals for a bomber capable of carrying a 1000 kg load over 1000 km at 1000 km/h; the so-called 1000/1000/1000 rule. Conventional German bombers could reach Allied command centers in England, but were suffering devastating losses, as Allied fighter planes were faster than the German bombers. At the time there was simply no way to meet these goals; the new Jumo 004B jet engines could give the speed that was required, but swallowed fuel at such a rate that they would never be able to match the range requirement.
The Hortens felt that the low-drag all-wing design could meet all of the goals - by reducing the drag, cruise power could be lowered to the point where the range requirement could be met. They put forward their current private (and jealously guarded) project, the Ho IX, as the basis for the bomber. The Government Air Ministry (Reichsluftfahrtministerium) approved the Horten proposal, but ordered the addition of two 30 mm cannon, as they felt the aircraft would also be useful as a fighter due to its estimated top speed being significantly higher than that of any Allied aircraft.
Reichsmarschall Göring believed in the design and ordered the aircraft into production at Gotha under the RLM designation Ho 229 before it had taken to the air under jet power. Flight testing of the Ho IX/Ho 229 prototypes began in December 1944, and the aircraft proved to be even better than expected. There were a number of minor handling problems but otherwise the performance was outstanding.
Gotha appeared to be somewhat upset about being ordered to build a design from two "unknowns" and made a number of changes to the design, as well as offering up several versions for different roles. More prototypes, including those for a two-seat 'Nacht-Jäger' night fighter, were under construction when the Gotha plant was overrun by American troops in April 1945.
The Gotha factory was also building the radar-equipped Horten Ho IX, a jet-engined flying wing that was futuristic for its time. Using the knowledge they gathered from the construction of this aircraft, now named the Gotha Go 229 [the other name used for the Horten Ho IX], they made a proposal for a fighter, the Gotha P60. The P60 used nearly the same wing layout as the Go 229. The first proposal, the P60A, used a cockpit with the crew in a prone position lying side-by-side.
The engines of the P60A were placed outside the wing. One on top of the central part, one under the central part. Maybe this was done for better maintenance of the engines.
The second proposal, the Gotha P60B, no longer had the prone pilots. It seems that Gotha needed a simplified cockpit, perhaps to speed up development or production. Gotha got approval to start building the P60B prototype, but work was stopped in favor of the final proposal, the P60C.
The Ho 229 A-0 pre-production aircraft were to be powered by two Junkers Jumo 004B turbojets with 1,962 lbf [8.7 kN] thrust each. The maximum speed was estimated at an excellent 590 mph [950 km/h] at sea level and 607 mph [977 km/h] at 39,370 ft [12,000 m]. Maximum ceiling was to be 52,500 ft [16,000 m], although it is unlikely this could be met. Maximum range was estimated at 1180 miles [1,900 km], and the initial climb rate was to be 4330 ft/min (22 m/s). It was to be armed with two 30 mm MK 108 cannon, and could also carry either two 500 kg bombs, or twenty-four R4M rockets.
It was the only design to come close to meeting the 1000/1000/1000 rule, and that would have remained true even for a number of years after the war. But like many of the late war German designs, the production was started far too late for the plane to have any effect. In this case none saw combat.
The majority of the Ho-229's skin was a carbon-impregnated plywood, which would absorb radar waves. This, along with its shape, would have made the Ho-229 invisible to the crude radar of the day. So it should be given credit for being the first true "Stealth Fighter". The US military initiated "Operation Paperclip", an effort by the U.S. Army in the last weeks of the war to capture as much advanced German weapons research as possible, and also to deny that research to advancing Russian troops. A Horten glider and the Ho-229 number V3 were secured and sent to Northrop Aviation in the United States for evaluation; Northrop much later used a flying wing design for the B-2 "Spirit" stealth bomber. During WWII Northrop had been commissioned to develop a large wing-only long-range bomber [XB/YB-35] based on photographs of the Hortens' record-setting glider from the 1930's, but its initial designs suffered controllability issues that were not resolved until after the war.
The Northrop XB-35 and YB-35 were experimental heavy bomber aircraft developed by the Northrop Corporation for the United States Army Air Forces during and shortly after World War II. The airplane used the radical and potentially very efficient flying wing design, in which the tail section and fuselage are eliminated and all payload is carried in a thick wing. Only prototype and pre-production aircraft were built, but interestingly, the Horten brothers were helped in their bid for German government support when Northrop patents appeared in the US Patent Office's "Official Gazette" on 13 May 1941, and then in the international aeronautical journal "Interavia" on 18 November 1941.
The Northrop YB-49 was a prototype jet-powered heavy bomber aircraft developed by Northrop Corporation shortly after World War II for service with the U.S. Air Force. The YB-49 featured a flying wing design and was a jet-powered development of the earlier, piston-engined Northrop XB-35 and YB-35. The two YB-49s actually built were both converted YB-35 test aircraft.
The YB-49 never entered production, being passed over in favor of the more conventional Convair B-36 piston-driven design. Design work performed in the development of the YB-35 and YB-49 nonetheless proved to be valuable to Northrop decades later in the eventual development of the B-2 stealth bomber, which entered service in the early 1990s.
The YB-49 and its modern counterpart, the B-2 Spirit, both built by Northrop Grumman, have the same wingspan: 172.0 ft [52.4 m]. Flight test data collected from the original YB-49 test flights was used in the development of the B-2 bomber.
The Ho-229's design employed a thoroughly modern wing shape far ahead of its time. The wing had a twist so that in level flight the wingtips [and thus, the ailerons] were parallel with the ground. The center section was twisted upwards, which deflected air in flight and provided the majority of the lift. Because of this twist, if the pilot pulled up too suddenly, the nose would stall [that is, lose lift] before the wingtips. This meant that the craft's nose would inherently dip at the beginning of a stall, causing the plane to accelerate downwards, so it naturally avoided a flat spin. A flat spin is difficult to recover from, and many rookie pilots have crashed from this condition. Horten also noticed in wind-tunnel testing that at the beginning of a stall, most airfoil cross-sections began losing lift at their front and rear edges first. Horten designed an airfoil cross-section that developed most of its lift along the centerline of the wing. Since the center line had high lift and the front and rear edges had low lift, it was called a "bell-shaped lift curve" (see the formula below). The wings were also swept back at a very modern and optimum angle [his gliders from the 1930's used this sweep long before it became popular], which enhanced stall-resistance and also lowered wind-resistance, helping its top speed. This made the Ho-229 easy to fly and very stall-resistant in all phases of its operation.
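For context, the spanwise "bell-shaped" lift distribution associated with the Hortens is usually written in the form Prandtl published in 1933. The formula below is the standard textbook expression, added here for reference rather than taken from the original article:

```latex
% Bell-shaped spanwise circulation distribution (Prandtl, 1933):
%   \Gamma_0 : circulation at the wing centerline
%   b        : wingspan
%   y        : spanwise coordinate measured from the centerline
\Gamma(y) = \Gamma_0 \left[ 1 - \left( \frac{2y}{b} \right)^{2} \right]^{3/2},
\qquad -\frac{b}{2} \le y \le \frac{b}{2}
```

Compared with the elliptical distribution, lift falls toward zero at the tips much more steeply, which is why the tips keep flying after the center section has begun to stall.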
The only existing Ho-229 airframe to be preserved was V3, and it is located at the National Air and Space Museum [NASM] in Washington D.C. The airframe V2 crashed during testing, and several partial airframes found on the assembly line were destroyed by U.S. troops to prevent them from being captured by advancing Russian troops.
In 1944 the RLM issued a requirement for an aircraft with a range of 11000 km [6835 miles] and a bomb load of 4000 kg [8818lbs]. This bomber was to be able to fly from Germany to New York City and back without refuelling. Five of Germany's top aircraft companies had submitted designs, but none of them met the range requirements for this Amerika Bomber. Their proposals were redesigned and resubmitted at the second competition, but nothing had changed. The Hortens were not invited to submit a proposal because it was thought that they were only interested in fighter aircraft.
After the Hortens learned of these design failures, they went about designing the XVIIIA Amerika Bomber. During the Christmas 1944 holidays, Reimar and Walter Horten worked on the design specifications for their all-wing bomber. They drew up a rough draft and worked on weight calculations, allowing for fuel, crew, armaments, landing gear and bomb load. Ten variations were eventually worked out, each using a different number of existing turbojets. Several of the designs were to be powered by four or six Heinkel-Hirth He S 011 jet engines, and several of the others were designed around eight BMW 003A or eight Junkers Jumo 004B turbojets.
The version that the Hortens thought would work best utilized six Jumo 004B turbojets, which were buried in the wing and exhausted over the rear of the aircraft. They were fed by air intakes located in the wing's leading edge. To save weight they thought of using landing gear that could be jettisoned immediately after takeoff [with the additional help of rocket boosters], with the aircraft landing on some kind of skid. The Ho XVIII A was to be built mainly of wood and held together with a special carbon-based glue. As a result, the huge flying wing would have gone largely undetected by radar.
The Hortens were told to make a presentation for their Amerika Bomber design on 25 February 1945 in Berlin. The meeting was attended by representatives of the five aircraft companies who originally submitted ideas for the competition. No one challenged their assertion that their flying wing bomber could get the job done. A few days later the Hortens were told to report to Reichsmarschall Göring, who wanted to talk to the brothers personally about their proposed Amerika Bomber. There they were told that they were to work with the Junkers company in building the aircraft.
Several days later Reimar and Walter Horten met with the Junkers engineers, who had also invited some Messerschmitt engineers. Suddenly it seemed that the Hortens' design was to be worked on by committee. The Junkers and Messerschmitt engineers were unwilling to go with the design that the Hortens had presented several days earlier. Instead, the committee wanted to add a huge vertical fin and rudder at the rear of the Ho XVIII A. Reimar Horten was angry, as this would add many more man-hours, plus it would create drag and thus reduce the range. The committee also wanted to place the engines beneath the wing, which would create additional drag and reduce the range even further. After two days of discussion, they chose a design that had huge vertical fins, with the cockpit built into the fin's leading edge. Six Jumo 004A jet engines were slung under the wing, three to a nacelle on each side. The bomb bay would be located between the two nacelles, and the tricycle landing gear would also be stored in the same area. The committee would present the final design to the RLM and recommended that it be built in the former mining tunnels in the Harz Mountains.
Dissatisfied with the committee-designed Ho XVIII A, Reimar Horten redesigned the flying wing Amerika Bomber. The proposed Ho XVIII B had a three-man crew who sat upright in a bubble-type canopy near the apex of the wing. There were two fixed main landing gear assemblies with two He S 011 turbojets mounted to each side.
During flight, the tires would be covered by doors to help cut down on air resistance and drag, a nose wheel being considered unnecessary. Overall, the aircraft would have weighed about 35 tons fully loaded. Fuel was to be stored in the wing so that no auxiliary fuel tanks would be required. It was estimated that the Ho XVIII B would have a range of 11000 km [6835 miles], a service ceiling of 16 km [52492 feet] and a round-trip endurance of 27 hours.
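Those three figures are at least mutually consistent. The quick check below is my arithmetic, not the source's, and assumes the 27 hours covers the full 11000 km round trip in still air:

```python
range_km = 11_000     # quoted round-trip range
endurance_h = 27      # quoted round-trip endurance

avg_speed_kmh = range_km / endurance_h
print(f"implied average speed: {avg_speed_kmh:.0f} km/h "
      f"({avg_speed_kmh * 0.621371:.0f} mph)")
# implied average speed: 407 km/h (253 mph)
```

An average of roughly 407 km/h is well below the type's presumed top speed, which is what one would expect of a cruise profile flown for maximum range.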
It was decided that construction was to be done in two bomb-proof hangars near Kahla, which had concrete roofs 5.6 meters [18.4 feet] thick. In addition, extra-long runways had been constructed so the aircraft could be test flown there too. Work was supposed to start immediately, and the RLM expected the Ho XVIII B to be built by the fall of 1945, which Reimar Horten reported to be impossible. At any rate, Germany surrendered two months later, before construction could begin.
In 1943 the all-wing Horten 229 promised spectacular performance, and the Luftwaffe [German Air Force] chief, Hermann Göring, allocated half a million Reichsmarks to the brothers Reimar and Walter Horten to build and fly several prototypes. Numerous technical problems beset this unique design, and the only powered example crashed after several test flights, but the airplane remains one of the most unusual combat aircraft tested during World War II. Horten used Roman numerals to identify his designs, and he followed the German aircraft industry practice of using "Versuch" [literally test or experiment] numbers to describe pre-production prototypes built to test and develop a new design into a production airplane. The Horten IX design became the Horten Ho 229 aircraft program after Göring granted the project official status in 1943 and the technical office of the Reichsluftfahrtministerium assigned to it the design number 229. This is also the nomenclature used in official German documents.
The idea for the Horten IX grew first in the mind of Walter Horten when he was serving in the Luftwaffe as a fighter pilot engaged in combat in 1940 during the Battle of Britain. Horten was the technical officer for Jagdgeschwader [fighter squadron] 26 stationed in France. The nature of the battle and the tactics employed by the Germans spotlighted the design deficiencies of the Messerschmitt Bf 109, Germany's most advanced fighter airplane at that time. The Luftwaffe pilots had to fly across the English Channel or the North Sea to fulfill their missions of escorting German bombers and attacking British fighters, and Horten watched his unit lose many men over hostile territory at the very limit of the airplane's combat radius. Often, after just a few minutes of combat flying, the Germans had to turn back to their bases or run out of fuel, and this lack of endurance severely limited their effectiveness. The Messerschmitt was also vulnerable because it had just a single engine. One bullet could puncture almost any part of the cooling system and when this happened, the engine could continue to function for only a few minutes before it overheated and seized up.
Walter Horten came to believe that the Luftwaffe needed a new fighter designed with performance superior to the Supermarine Spitfire, Britain's most advanced fighter. The new airplane required sufficient range to fly to England, loiter for a useful length of time and engage in combat, and then return safely to occupied Europe. He understood that only a twin-engine aircraft could give pilots a reasonable chance of returning with substantial battle damage or even the loss of one engine.
Since 1933, and interrupted only by military service, Walter and Reimar had experimented with all-wing aircraft. With Walter's help, Reimar had used his skills as a mathematician and designer to overcome many of the limitations of this exotic configuration. Walter believed that Reimar could design an all-wing fighter with significantly better combat performance than the Spitfire. The new fighter needed a powerful, robust propulsion system to give the airplane great speed but also one that could absorb damage and continue to function.
The Nazis had begun developing rocket, pulse-jet, and jet turbine configurations by 1940 and Walter's role as squadron technical officer gave him access to information about these advanced programs. He soon concluded that if his brother could design a fighter propelled by two small and powerful engines and unencumbered by a fuselage or tail, very high performance was possible.
At the end of 1940, Walter shared his thoughts on the all-wing fighter with Reimar who fully agreed with his brother's assessment and immediately set to work on the new fighter. Fiercely independent and lacking the proper intellectual credentials, Reimar worked at some distance from the mainstream German aeronautical community. At the start of his career, he was denied access to wind tunnels due to the cost but also because of his young age and lack of education, so he tested his ideas using models and piloted aircraft. By the time the war began, Reimar actually preferred to develop his ideas by building and testing full-size aircraft. The brothers had already successfully flown more than 20 aircraft by 1941 but the new jet wing would be heavier and faster than any previous Horten design. To minimize the risk of experimenting with such an advanced aircraft, Reimar built and tested several interim designs, each one moderately faster, heavier, or more advanced in some significant way than the one before it.
Reimar built the Horten Vb and Vc to evaluate the all-wing layout when powered by twin engines driving pusher propellers. He began in 1941 to consider fitting the Dietrich-Argus pulse jet motor to the Horten V but this engine had drawbacks and in the first month of 1942, Walter gave his brother dimensioned drawings and graphs that charted the performance curves of the new Junkers 004 jet turbine engine [this engine was also fitted to these NASM aircraft: Messerschmitt Me 262, Arado Ar 234, and the Heinkel He 162]. Later that year, Reimar flew a new design called the Horten VII that was similar to the Horten V but larger and equipped with more powerful reciprocating engines. The Horten VI ultra-high performance sailplane also figured into the preliminary aerodynamic design of the jet flying wing after Reimar tested this aircraft with a special center section.
Walter used his personal connections with important officials to keep the idea of the jet wing alive in the early stages of its development. General Ernst Udet, Chief of Luftwaffe Procurement and Supply and head of the Technical Office, protected the idea of the all-wing fighter for almost a year, until he took his own life in November 1941. At the beginning of 1943, Walter heard Göring complain that Germany was fielding 17 different types of twin-engine military airplanes with similar, and rather mediocre, performance, but parts were not interchangeable between any two designs. He decreed that henceforth he would not approve for production another new twin-engine airplane unless it could carry 1,000 kg [2,205 lb] of bombs to a "penetration depth" of 1,000 km [620 miles, penetration depth defined as 1/3 the range] at a speed of 1,000 km/h [620 mph]. Asked to comment, Reimar announced that only a warplane equipped with jet engines had a chance to meet those requirements.
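Since penetration depth was defined as one third of total range, the decree implies a 3,000 km round trip flown in three hours at the specified speed. The worked numbers below are my arithmetic, not the source's:

```python
bomb_load_kg = 1_000      # "1,000 kg of bombs"
penetration_km = 1_000    # penetration depth, defined as 1/3 of total range
speed_kmh = 1_000         # required speed

total_range_km = 3 * penetration_km           # 3,000 km
mission_time_h = total_range_km / speed_kmh   # still air, no reserves
print(f"{bomb_load_kg} kg of bombs over {total_range_km} km "
      f"in {mission_time_h:.1f} h at cruise")
# 1000 kg of bombs over 3000 km in 3.0 h at cruise
```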
In August Reimar submitted a short summary of an all-wing design that came close to achieving Göring's specifications. Göring issued the brothers a contract, and then demanded that the new aircraft fly in three months. Reimar responded that the first Horten IX prototype could fly in six months, and Göring accepted this schedule after revealing his desperation to get the new fighter into the air with all possible speed. Reimar believed that he had boosted the Reichsmarschall's confidence in his work after he told him that his all-wing jet bomber was based on data obtained from bona fide flight tests with piloted aircraft.
Official support had now been granted to the first all-wing Horten airplane designed specifically for military applications, but the jet bomber that the Horten brothers began to design was much different from the all-wing pure fighter that Walter had envisioned nearly four years earlier as the answer to the Luftwaffe's needs for a long-range interceptor. Henceforth, the official designation for airplanes based on the Horten IX design changed to Horten Ho 229, suffixed with "Versuch" numbers to designate the various prototypes.
All versions of the Ho 229 resembled each other in overall layout. Reimar swept each half of the wing 32 degrees in an unbroken line from the nose to the start of each wingtip where he turned the leading edge to meet the wing trailing edge in a graceful and gradually tightening curve. There was no fuselage, no vertical or horizontal tail, and with landing gear stowed [the main landing gear was fixed but the nose wheel retracted on the first prototype Ho 229 V1], the upper and lower surface of the wing stretched smooth from wingtip to wingtip, unbroken by any control surface or other protuberance. Horten mounted elevons [control surfaces that combined the actions of elevators and ailerons] to the trailing edge and spoilers at the wingtips for controlling pitch and roll, and he installed drag rudders next to the spoilers to help control the wing about the yaw axis. He also mounted flaps and a speed brake to help slow the wing and control its rate and angle of descent. When not in use, all control surfaces either lay concealed inside the wing or trailed from its aft edge. Parasite or form drag was virtually nonexistent. The only drag this aircraft produced was the inevitable by-product of the wing's lift.
Few aircraft before the Horten 229 or after it have matched the purity and simplicity of its aerodynamic form but whether this achievement would have led to a successful and practical combat aircraft remains an open question.
Building on knowledge gained by flying the Horten V and VII, Reimar designed and built a manned glider called the Horten 229 V1, which test pilot Heinz Scheidhauer first flew on 28 February 1944. This aircraft suffered several minor accidents, but a number of pilots flew the wing during the following months of testing at Oranienburg and most commented favorably on its performance and handling qualities. Reimar used the experience gained with this glider to design and build the jet-propelled Ho 229 V2.
Wood is an unorthodox material from which to construct a jet aircraft, and the Horten brothers preferred aluminum; but in addition to the lack of metalworking skills among their team of craftspersons, several factors worked against using metal to build their first jet-propelled wing.
Reimar's calculations showed that he would need to convert much of the wing's interior volume into space for fuel if he hoped to come close to meeting Göring's requirement for a penetration depth of 1,000 km. Reimar must have lacked either the expertise or the special sealants to manufacture such a 'wet' wing from metal. Whatever the reason, he believed that an aluminum wing was unsuitable for this task. Another factor in Reimar's choice of wood is rather startling: he believed that he needed to keep the wing's radar cross-section as low as possible. "We wished", he said many years later, "to have the [Ho 229] plane that would not reflect [radar signals]", and Horten believed he could meet this requirement more easily with wood than metal. Many questions about this aspect of the Ho 229 design remain unanswered and no test data is available to document Horten's work in this area. The fragmentary information that is currently available comes entirely from anecdotal accounts that have surfaced well after World War II ended.
During the war, the Germans experimented with tailless, flying wing aircraft.
The ones described here were made by the Horten brothers. There were several flying wing designs under development during those years for various purposes, but the one in question is the Horten IX.
The Horten IX was a tailless, jet-powered, flying wing fighter. Only three were ever built. The first Horten IX [V-1] was never given engines and was used as a glider for test purposes. The second Horten IX [V-2] was given jets and tested. The third aircraft was actually produced by another company, the Gotha firm, which was to mass-produce this aircraft. This one aircraft was given the designation Gotha 229 and was never fully assembled and never flew. It fell into the hands of the Americans while still in pieces.
So, it was only the second Horten IX, the V-2 version, that flew at all. In fact, it flew very well. Remember, this was a tailless flying wing that flew before computer avionics made such aircraft possible in the USA. Evidently, the Horten IX was so well thought out that a mere human pilot could fly it. But the Horten IX was somewhat more than just an ordinary fighter aircraft. During flight testing it was noticed that the radar return for this aircraft was almost absent. The Germans got busy with this idea and planned to paint the Horten IX with radar-absorbing paint that they had developed for another purpose. The fact is that the Horten IX was the world's first stealth aircraft. Unfortunately, during a landing one of its two engines failed and the one flying Horten IX crashed.
So what do we know about the Horten IX in flight? All we now know about the performance of this legendary aircraft is what Allied technical teams said about it, and this is the way it has been reported to us down through history via semi-technical aircraft history journals and books.
For instance David Masters, in his book "German Jet Genesis", lists the speed of the Horten IX at 540 mph with a ceiling altitude of 52,490 feet. The same authority reports the Gotha 229 [which never flew] as having a maximum speed of 590 mph at sea level and 640 mph at 21,320 ft., with a ceiling of 51,000 ft. This compares with the Me 262, the operational German jet fighter with which we are familiar, whose top speed Masters lists at 538 mph at 29,560 ft., with no ceiling given.
Surprisingly, both jets were powered by the same two Junkers Jumo 004B-1 engines. Yet the Horten IX had the cross-section of a knife while the Messerschmitt's cross-section was much more typical for an aircraft of the time. How could their performance be nearly identical? How would the Allies know what the performance of the Horten IX actually was, since they never got their hands on a working example?
Perhaps it was extrapolation. Perhaps they simply wanted to undervalue this sleek German jet because it looked so advanced for its time and there was nothing comparable in the Allied arsenal. Without contradictory evidence, the word of the American experts was repeated and became part of history as we know it. The funny thing is that now contradictory evidence has surfaced and has somehow slipped by the American censors.
The document in question is a Memorandum Report, dated 17 March 1945 while the war was still in progress. The "Subject" of this report was data obtained on the German tailless jet propelled fighter and Vereinigte Leichtmetalwerke [United Light-Metal Works].
"To present data of immediate value obtained on C.I.O.S. trip to Bonn on 11 March to 16 March 1945. Travel performed under AG 200m 4-1, SHAEF, dated 9 March 1945".
There was discussion of the Horten IX in which a maximum speed of 1,160 km/hr, or about 719 miles per hour, was claimed. The informant spoke during the war, while the last example of the Horten IX was still flying. Data about all other German aircraft is correct. Was there an intentional cover-up concerning the performance of the Horten IX by the Allies?
This is raw Intelligence to be compiled into a Combined Intelligence Objectives Sub-Committee report by SHAEF personnel.
Under "Factual Data" we learn that their German informant, Mr. F.V. Berger, is a draftsman for the Horten organization during the time that the Hortons were designing the tailless aircraft. To add to Berger's trusted position, he was actually found in the former home of Horten brothers by the Intelligence agents.
Mr. Berger describes the Horten aircraft, models H1 to H12, but most of the discussion centers on the H9, the jet-powered, tailless, flying wing fighter-bomber.
Berger goes on to list the weight, bomb load and cannons used, but then states that the maximum speed of the H9, which was still being tested as this report was being written, was 1,160 km/hr at an altitude of 6,000 meters.
This last statement must have shocked the SHAEF team to the core. The speed given, 1,160 kilometers per hour, works out to slightly over 719 miles per hour!
The Allies weren't even thinking about flying that fast in those days.
The Intelligence team was transfixed by Berger's statement and double-checked his veracity. They asked him about the speed of the Me 262 and the Arado 234. Allied intelligence knew both these operational German aircraft and their capabilities by this time even if they had not gained an example of each aircraft.
Berger gave the speed of the Me 262 at 900 km/hr and the Arado 234 [jet-powered bomber-reconnaissance aircraft] at 800 km/hr. This works out to 558 mph for the Messerschmitt and 496 mph for the Arado. These figures are right on the money and lend credibility to Berger's evaluation of the Horton IX.
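Berger's conversions are easy to reproduce. The snippet below shows that the report's figures imply the rough factor of 0.62 mph per km/h rather than the exact 0.621371; that observation is mine, not something stated in the document:

```python
ROUGH = 0.62        # the conversion factor the quoted figures imply
EXACT = 0.621371    # the exact km/h -> mph factor

for kmh in (1160, 900, 800):
    print(f"{kmh} km/h -> {kmh * ROUGH:.0f} mph (rough), "
          f"{kmh * EXACT:.1f} mph (exact)")
# 1160 km/h -> 719 mph (rough), 720.8 mph (exact)
# 900 km/h -> 558 mph (rough), 559.2 mph (exact)
# 800 km/h -> 496 mph (rough), 497.1 mph (exact)
```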
Then Berger made another astonishing statement: the Horten IX, loaded with bombs [weighing 2,000 kg], "would get away from Me 262 without bombs".
Berger went on to describe the Horten IX V-1 correctly as being tested as a glider and the Horten IX V-2 as a fully powered aircraft. He goes on to say that the Horten IX V-2 "is being tested at Oranienburg now".
So at the time of this interview, the Horten IX V-2 was still flying and had not crashed yet.
The Horten IX/Gotha 229 had a low radar return. The Germans knew this and planned a radar-repelling type of paint for it that was already used on submarines. These features would make the Horten IX the first stealth aircraft, a fact much mentioned in connection with the history of the American B-2 bomber. In fact, American engineers visited the remaining partially assembled Gotha 229 in Maryland to get ideas for the B-2.
But the speed given by Berger, 719 miles per hour, puts the Horten IX in a class by itself. By this it is meant that this speed and ceiling altitude exceed those of both the Soviet MiG-15 and the American F-86 'Sabre' of Korean War vintage, five or six years later.
If we listen, we can hear echoes of the undervaluation of German aircraft at the highest levels, even within the American aerospace industry. Aircraft legend Howard Hughes owned a captured Me 262. Hughes was a big fan and participant in something called aircraft racing during those post-war years. Towering pylons would mark out a course of several miles in the California desert and aircraft would race around this course. When Hughes' rival company, North American Aviation came out with its F-86 'Sabre', Howard Hughes challenged the US Air Force to a one-on-one match race of their new 'Sabre' jet against his old German Me 262. The Air Force declined. Obviously, there is some unspoken fact behind the Air Force decision.
The 'Sabre' was said to be "trans-sonic", having a top speed of about 650 mph, with a ceiling of 45,000 ft. If the US Air Force wanted no part of a contest with the real Me 262, not the paper projection, what would they have thought of a head-to-head match with the Horten IX?
Yet, the Horten IX was a stealth aircraft. The Americans didn't even recognize what a stealth aircraft was until over 30 years later, and even then they always wanted to couch the comparison of the Horten IX to the B-2 stealth bomber. That comparison is fallacious and perhaps designed to hide something else.
Let's compare the Horten IX to the F-117 stealth fighter instead. The F-117 has vertical control surfaces and may be, in fact, less stealthy than the Horten IX. Both had special radar-absorbing paint. But the F-117 is not supersonic. The F-117 is generally conceded to fly at about 650 mph, about the same as the F-86 'Sabre', while the Horten IX could be faster at 719 mph. Another difference is that the Horten IX carried two 37 mm cannons while the F-117 has no guns or rockets, and so is really not a fighter at all but only a first-strike bomber.
There are other examples, but we should watch for these tactics and recognize that the government is reluctant to fully credit the Germans for their wartime advances, and that it will go to some lengths to maintain the secrecy status quo. These tactics also include false comparisons and outright deception.
The Flugfunk Forschungsinstitut Oberpfaffenhofen, abbreviated F.F.O., was a pure research organization specializing in aeronautical radio research for the Luftwaffe. This organization seems to have specialized in radar jamming. It was a large organization and had many physical sites of operation. When the war drew to a conclusion, all the secret research done at the F.F.O. was burnt. What we know and what remains consist largely of what was remembered by the individual scientists involved and their private libraries. It is not unreasonable to assume that some secrets went forever unspoken after those ashes cooled, however.
The F.F.O. developed several types of klystrons. A klystron tube produces ultra-high frequencies, and here it was employed to generate signals for jamming radar.
"Measurement on the conductivity and dielectric properties of flame gases are being conducted by Dr. Lutze at Seeshaupt. They are intended to provide knowledge of the effects to be expected with radio control of rockets and to say how much the flame and trail of a V-2 contributes to radar reflections".
So the F.F.O. was measuring the conductivity and radar reflectional properties of exhaust gases? Placing a cathode in the exhaust of a jet or rocket was the Flame Jet Generator of Dr. T.T. Brown. This procedure induced a negative charge to the exhaust. A corresponding positive charge is automatically induced on the wing's leading edge. This combination bends radar signals around the aircraft and is one method used by the B-2 bomber.
The Horten IX had recessed intake and exhaust ports. It had no vertical control surfaces. Therefore, its radar reflection was already super-low. There is no trouble imagining this aircraft painted with radar-absorbing paint as was planned, but how about inducing a radar-bending envelope of charged particles around this aircraft? And how about fitting this aircraft with a klystron tube in its nose pumping out the same frequency used by Allied radar, jamming it or making the aircraft invisible to radar? If we can imagine this, so could the scientists at the F.F.O. If the war had lasted another year, the Allies might have faced not only a 700-mile-per-hour Horten IX, but a Horten IX which was also a true stealth aircraft in the modern sense.
-- Henry Stevens, "Hitler's Suppressed and Still-Secret Weapons, Science and Technology"
As they developed the 229, the Horten brothers measured the wing's performance against the Messerschmitt Me 262 jet fighter. According to Reimar and Walter, the Me 262 had a much higher wing loading than the Ho 229, and the Messerschmitt required such a long runway for takeoff that only a few airfields in Germany could accommodate it. The Ho 229's wing loading was considerably lower, which would have allowed it to operate from airfields with shorter runways. Reimar also believed, perhaps naively, that his wing could take off and land from a grass runway, which the Me 262 could not. If these claims had been true, a Ho 229 pilot would have had many more airfields from which to fly than his counterpart in the Messerschmitt jet.
Successful test flights in the Ho 229 V1 led to construction of the first powered wing, the Ho 229 V2, but poor communication with the engine manufacturers caused lengthy delays in finishing this aircraft. Horten first selected the 003 jet engine manufactured by BMW but then switched to the Junkers 004 power plants. Reimar built much of the wing center section based on the engine specifications sent by Junkers but when two motors finally arrived and Reimar's team tried to install them, they found the power plants were too large in diameter to fit the space built for them. Months passed while Horten redesigned the wing and the jet finally flew in mid-December 1944.
Full of fuel and ready to fly, the Horten Ho 229 V2 weighed about nine tons and thus it resembled a medium-sized, multi-engine bomber such as the Heinkel He 111. The Horten brothers believed that a military pilot with experience flying heavy multi-engine aircraft was required to safely fly the jet wing and Scheidhauer lacked these skills so Walter brought in veteran Luftwaffe pilot Lt. Erwin Ziller. Sources differ between two and four on the number of flights that Ziller logged but during his final test flight an engine failed and the jet wing crashed, killing Ziller.
According to an eyewitness, Ziller made three passes at an altitude of about 2,000 m [6,560 ft] so that a team from the Rechlin test center could measure his speed using a theodolite measuring instrument. Ziller then approached the airfield to land, lowered his landing gear at about 1,500 m [4,920 ft], and began to fly a wide descending spiral before crashing just beyond the airfield boundary. It was clear to those who examined the wreckage that one engine had failed, but the eyewitness saw no control movements or attempt to line up with the runway, and he suspected that something had incapacitated Ziller, perhaps fumes from the operating engine. Walter was convinced that the engine failure did not result in uncontrollable yaw and argued that Ziller could have shut down the functioning engine and glided to a survivable crash landing, perhaps even reached the runway and landed without damage.
Walter also believed that someone might have sabotaged the airplane but whatever the cause, he remembered it was an awful event. "All our work was over at this moment". The crash must have disappointed Reimar as well. Ziller's test flights seemed to indicate the potential for great speed, perhaps a maximum of 977 km/h [606 mph]. Although never confirmed, such performance would have helped to answer the Luftwaffe technical experts who criticized the all-wing configuration.
At the time of Ziller's crash, the Reich Air Ministry had scheduled series production of 15-20 machines at the firm Gotha Waggonfabrik Flugzeugbau and the Klemm company had begun preparing to manufacture wing ribs and other parts when the war ended.
Horten had planned to arm the third prototype with cannons but the war ended before this airplane was finished. Unbeknownst to the Horten brothers, Gotha designers substantially altered Horten's original design when they built the V3 airframe. For example, they used a much larger nose wheel compared to the unit fitted to the V2 and Reimar speculated that the planned 1,000 kg [2,200 lb] bomb load may have influenced them but he believed that all of the alterations that they made were unnecessary.
The U.S. VIII Corps of General Patton's Third Army found the Horten 229 prototypes V3 through V6 at Friedrichsroda in April 1945. Horten had designed airframes V4 and V5 as single-seat night fighters, and V6 would have become a two-seat night fighter trainer. V3 was 75 percent finished and the nearest to completion of the four airframes. Army personnel later removed it and shipped it to the U.S. via the Royal Aircraft Establishment at Farnborough, England. Reports indicate the British displayed the jet during fall 1945, and the incomplete center section eventually arrived at Silver Hill [now the Paul E. Garber Facility in Suitland, Maryland] about 1950.
There is no evidence that the outer wing sections were recovered at Friedrichsroda but members of the 9th Air Force Air Disarmament Division found a pair of wings 121 km [75 miles] from this village and these might be the same pair now included with the Ho 229 V3.
Reimar and Walter Horten demonstrated that a fighter-class all-wing aircraft could successfully fly propelled by jet turbine engines but Ziller's crash and the end of the war prevented them from demonstrating the full potential of the configuration.
The wing was clearly a bold and unusual design of considerable merit, particularly if Reimar actually aimed to design a "Stealth Bomber". As a tailless fighter-bomber armed with massive 30mm cannon placed wide apart in the center section, however, the wing would probably have been a poor gun platform and found little favor among fighter pilots.
Walter argued rather strenuously with his brother to place a vertical stabilizer on this airplane.
Like most of the so-called "Nazi wonder weapons" the Horten IX was an interesting concept that was poorly executed.
One of Reimar Horten's projects after the war began was an all-wing transport glider for the invasion of Britain.
Not until August 1941 was Reimar asked to explore the potential of the Nurflügel as a fighting aircraft, and even then his work was largely clandestine, in an authorized operation arranged by his brother in the Luftwaffe.
In 1942 Reimar built an unpowered prototype with a 61-foot span and the designation Ho 9. After some difficulty the airframe was mated with two Junkers Jumo turbojets of the sort developed for the Messerschmitt Me 262. The turbojet was apparently flown successfully in December 1944, and it eventually achieved a speed of nearly 500 mph [800 km/h]. After about two hours of flying time, it was destroyed in a February 1945 crash that killed its test pilot.
Its potential was obvious, however, and the Gotha company promptly readied the turbojet for production as a fighter-bomber with the Air Ministry designation Ho 229. [Because Gotha built it, the turbojet is also called the Go 229].
Supposedly it would fly at 997 km/h [620 mph], which if true meant that it was significantly faster than the Me 262 - let alone the Flying Wings that Northrop was building. Fortunately for the Allies, the Gotha factory and the Ho 229 prototype - the world's first all-wing turbojet - were captured by U.S. forces in April 1945.
Like today's B-2 Stealth bomber [and unlike Jack Northrop's designs], the Go-229 had a comparatively slender airfoil, with the crew and engines housed in dorsal humps, and its jet exhaust was vented onto the top surface of the wing. The first feature made it faster than the stubby Northrop designs; the second made it even harder to detect, as did the fact that wood was extensively used in its construction.
One reason that the Ho 229 never got into production was that Reimar Horten was distracted that winter by another urgent project: The Ho 18 Amerika bomber.
This huge, six-engined Nurflügel was supposed to carry an atomic bomb to New York or Washington, despite the fact that the bomb was mostly theoretical, the engines probably couldn't have lasted the journey, and the plane couldn't possibly have been completed before Germany surrendered.
[At 132 feet, its span was a bit less than that of the Boeing B-29 Superfortress, the largest warplane of World War II, but considerably shorter than the Northrop XB-35 that was in the works from 1941 to 1946].
Several Nurflügels came to the U.S. as war booty, including the center section of the Ho 229.
Four of them are now back in Germany for restoration, with one to remain there when the work is finished, while the other three rejoin the collection of the Air & Space Museum.
A restored Horten sailplane is on display at Planes of Fame in Chino, California, which also owns a Northrop N-9M, a technology demonstrator roughly the size of the Ho 229, but much less sophisticated.
It was nighttime on the Rio Grande, 29 May 1947, and Army scientists, engineers, and technicians at the White Sands Proving Ground in New Mexico were anxiously putting the final touches on their own American secret weapon, called 'Hermes'. The twenty-five-foot-long, three-thousand-pound rocket had originally been named V-2, or Vergeltungswaffe 2, which means "vengeance" in German. But 'Hermes' sounded less spiteful; Hermes being the ancient Greek messenger of the gods.
The actual rocket that now stood on Test Stand 33 had belonged to Adolf Hitler just a little more than two years before. It had come off the same German slave-labor production lines as the rockets that the Third Reich had used to terrorize the people of London, Antwerp, and Paris during the war. The U.S. Army had confiscated nearly two hundred V-2s from inside Peenemünde, Germany's rocket manufacturing plant, and shipped them to White Sands beginning the first month after the war. Under a parallel, even more secret project called "Operation Paperclip" the complete details of which remain classified as of 2011, 118 captured German rocket scientists were given new lives and careers and brought to the missile range. Hundreds of others would follow.
Two of these German scientists were now readying 'Hermes' for its test launch. One, Wernher von Braun, had invented this rocket, which was the world's first ballistic missile, or flying bomb. And the second scientist, Dr. Ernst Steinhoff, had designed the V-2 rocket's brain. That spring night in 1947, the V-2 lifted up off the pad, rising slowly at first, with von Braun and Steinhoff watching intently. 'Hermes' consumed more than a thousand pounds of rocket fuel in its first 2.5 seconds as it elevated to fifty feet. The next fifty feet were much easier, as were the hundred feet after that. The rocket gained speed, and the laws of physics kicked in: Anything can fly if you make it move fast enough. 'Hermes' was now fully aloft, climbing quickly into the night sky and headed for the upper atmosphere. At least that was the plan. Just a few moments later, the winged missile suddenly and unexpectedly reversed course. Instead of heading north to the uninhabited terrain inside the two-million-square-acre White Sands Proving Ground, the rocket began heading south toward downtown El Paso, Texas.
Dr. Steinhoff was watching the missile's trajectory through a telescope from an observation post one mile south of the launchpad, and having personally designed the V-2 rocket-guidance controls back when he worked for Adolf Hitler, Dr. Steinhoff was the one best equipped to recognize errors in the test. In the event that Steinhoff detected an errant launch, he would notify Army engineers, who would immediately cut the fuel to the rocket's motors via remote control, allowing it to crash safely inside the missile range. But Dr. Steinhoff said nothing as the misguided V-2 arced over El Paso and headed for Mexico. Minutes later, the rocket crash-landed into the Tepeyac Cemetery, three miles south of Juarez, a heavily populated city of 120,000. The violent blast shook virtually every building in El Paso and Juarez, terrifying citizens of both cities, who swamped newspaper offices, police headquarters and radio stations with anxious telephone inquiries. The missile left a crater that was fifty feet wide and twenty-four feet deep. It was a miracle no one was killed.
Army officials rushed to Juarez to smooth over the event while Mexican soldiers were dispatched to guard the crater's rim. The mission, the men, and the rocket were all classified top secret; no one could know specific details about any of this. Investigators silenced Mexican officials by cleaning up the large, bowl-shaped cavity and paying for damages. But back at White Sands, reparations were not so easily made. Allegations of sabotage by the German scientists who were in charge of the top secret project overwhelmed the workload of the Intelligence officers at White Sands. Attitudes toward the former Third Reich scientists who were now working for the United States tended to fall into two distinct categories at the time. There was the let-bygones-be-bygones approach, an attitude summed up by the Army officer in charge of 'Operation Paperclip', Bosquet Wev, who stated that to preoccupy oneself with "picayune details" about German scientists' past actions was "beating a dead Nazi horse". The logic behind this thinking was that a disbanded Third Reich presented no future harm to America but a burgeoning Soviet military certainly did and if the Germans were working for us, they couldn't be working for them.
Others disagreed, including Albert Einstein. Five months before the Juarez crash, Einstein and the newly formed Federation of American Scientists appealed to President Truman: "We hold these individuals to be potentially dangerous... Their former eminence as Nazi party members and supporters raises the issue of their fitness to become American citizens and hold key positions in American industrial, scientific and educational institutions". For Einstein, making deals with war criminals was undemocratic as well as dangerous.
While the public debate went on, internal investigations began. And the rocket work at White Sands continued. The German scientists had been testing V-2s there for fourteen months, and while investigations of the Juarez rocket crash were under way, three more missiles fired from Test Stand 33 crash-landed outside the restricted facility: one near Alamogordo, New Mexico, and another near Las Cruces, New Mexico. A third went down outside Juarez, Mexico, again. The German scientists blamed the near tragedies on old V-2 components. Seawater had corroded some of the parts during the original boat trip from Germany. But in top secret written reports, Army Intelligence officers were building a case that would lay blame on the German scientists. The War Department Intelligence unit that kept tabs on the German scientists had designated some of the Germans at the base as "under suspicion of being potential security risks". When not working, the men were confined to a six-acre section of the base. The officers' club was off-limits to all the Germans, including the rocket team's leaders, Steinhoff and von Braun. It was in this atmosphere of failed tests and mistrust that an extraordinary event happened, one that, at first glance, seemed totally unrelated to the missile launches.
During the first week of July 1947, U.S. Signal Corps engineers began tracking two objects with remarkable flying capabilities moving across the southwestern United States. What made the aircraft extraordinary was that, although they flew in a traditional, forward-moving motion, the craft, whatever they were, began to hover sporadically before continuing to fly on. This kind of technology was beyond any aerodynamic capabilities the U.S. Air Force had in development in the summer of 1947. When multiple sources began reporting the same data, it became clear that the radar wasn't showing phantom returns, or electronic ghosts, but something real. Kirtland Army Air Force Base, just north of the White Sands Proving Ground, tracked the flying craft into its near vicinity. The commanding officer there ordered a decorated World War II pilot named Kenny Chandler into a fighter jet to locate and chase the unidentified flying craft. This fact has never before been disclosed.
Chandler never visually spotted what he'd been sent to look for. But within hours of Chandler's sweep of the skies, one of the flying objects crashed near Roswell, New Mexico. Immediately, the office of the Joint Chiefs of Staff, or JCS, took command and control and recovered the airframe and some propulsion equipment, including the crashed craft's power plant, or energy source. The recovered craft looked nothing like a conventional aircraft. The vehicle had no tail and it had no wings. The fuselage was round, and there was a dome mounted on the top. In secret Army intelligence memos declassified in 1994, it would be referred to as a "flying disc". Most alarming was a fact kept secret until now: inside the disc there was a very earthly hallmark, Russian writing. Block letters from the Cyrillic alphabet had been stamped, or embossed, in a ring running around the inside of the craft.
In a critical moment, the American military had its worst fears realized. The Russian army must have gotten its hands on German aerospace engineers more capable than Ernst Steinhoff and Wernher von Braun, engineers who must have developed this flying craft years before for the German air force, or Luftwaffe. The Russians simply could not have developed this kind of advanced technology on their own. Russia's stockpile of weapons and its body of scientists had been decimated during the war; the nation had lost more than twenty million people. Most Russian scientists still alive had spent the war in the Gulag. But the Russians, like the Americans, the British, and the French, had pillaged Hitler's best and brightest scientists as war booty, each country taking advantage of them to move forward in the new world. And now, in July of 1947, shockingly, the Soviet supreme leader had somehow managed not only to penetrate U.S. airspace near the Alaskan border, but to fly over several of the most sensitive military installations in the western United States. Stalin had done this with foreign technology that the U.S. Army Air Forces knew nothing about. It was an incursion so brazen, so antithetical to the perception of America's strong national security, which included the military's ability to defend itself against air attack, that upper-echelon Army Intelligence officers swept in and took control of the entire situation. The first thing they did was initiate the withdrawal of the original Roswell Army Air Field press release, the one that stated that a "flying disc" landed on a ranch near Roswell, and then they replaced it with the second press release, the one that said that a weather balloon had crashed, nothing more. The weather balloon story has remained the official cover story ever since.
Of all the historically significant political/military events of the 20th Century, none have had more official explanations than the so-called "Roswell Incident". In fact, as of 2011, the United States government has issued four sanctioned explanations: 1) The crash of a flying saucer, 2) the remains of a weather balloon, 3) the remains of a "Project Mogul" balloon, 4) crash test dummies. Logic alone would dictate that if the government lied about the last three explanations, why should the general public believe the first one?
The fears were legitimate: fears that the Russians had hover-and-fly technology, that their flying craft could outfox U.S. radar, and that it could deliver to America a devastating blow. The single most worrisome question facing the Joint Chiefs of Staff at the time was: What if atomic energy propelled the Russian craft? Or worse, what if it dispersed radioactive particles, like a modern-day dirty bomb? In 1947, the United States believed it still had a monopoly on the atomic bomb as a deliverable weapon. But as early as June 1942, Hermann Göring, commander in chief of the Luftwaffe, had been overseeing the Third Reich's research council on nuclear physics as a weapon in its development of an airplane called the "Amerika Bomber", designed to drop a dirty bomb on New York City. Any number of those scientists could be working for the Russians. The Central Intelligence Group, the CIA's institutional predecessor, did not yet know that a spy at Los Alamos National Laboratory, a man named Klaus Fuchs, had stolen bomb blueprints and given them to Stalin. Or that Russia was two years away from testing its own atomic bomb. In the immediate aftermath of the crash, all the Joint Chiefs of Staff had to go on from the Central Intelligence Group was speculation about what atomic technology Russia might have.
For the military, the very fact that New Mexico's airspace had been violated was shocking. This region of the country was the single most sensitive weapons-related domain in all of America. The White Sands Missile Range was home to the nation's classified weapons-delivery systems. The nuclear laboratory up the road, the Los Alamos Laboratory, was where scientists had developed the atomic bomb and where they were now working on nuclear packages with a thousand times the yield. Outside Albuquerque, at a production facility called Sandia Base, assembly-line workers were forging Los Alamos nuclear packages into smaller and smaller bombs. Forty-five miles to the southwest, at the Roswell Army Air Field, the 509th Bomb Wing was the only wing of long-range bombers equipped to carry and drop nuclear bombs.
Things went from complicated to critical at the revelation that there was a second crash site. Paperclip scientists Wernher von Braun and Ernst Steinhoff, still under review over the Juarez rocket crash, were called on for their expertise. Several other Paperclip scientists specializing in aviation medicine were brought in. The evidence of whatever had crashed at and around Roswell, New Mexico, in the first week of July in 1947 was gathered together by a Joint Chiefs of Staff technical services unit and secreted away in a manner so clandestine, it followed security protocols established for transporting uranium in the early days of the Manhattan Project.
The first order of business was to determine where the technology had come from. The Joint Chiefs of Staff tasked an elite group working under the direct orders of G-2 Army intelligence to initiate a top secret project called "Operation Harass". Based on the testimony of America's Paperclip scientists, Army intelligence officers believed that the flying disc was the brainchild of two former Third Reich airplane engineers, named Walter and Reimar Horten, now working for the Russian military. Orders were drawn up. The manhunt was on.
Walter and Reimar Horten were two aerospace engineers whose importance in seminal aircraft projects had somehow been overlooked when America and the Soviet Union were fighting over scientists at the end of the war. The brothers were the inventors of several of Hitler's flying-wing aircraft, including one called the Horten 229 or Horten IX, a wing-shaped, tailless airplane that had been developed at a secret facility in Baden-Baden during the war. From the Paperclip scientists at Wright Field, the Army Intelligence investigators learned that Hitler was rumored to have been developing a faster-flying aircraft that had been designed by the brothers and was shaped like a saucer. Maybe, the Paperclips said, there had been a later-model Horten in the works before Germany surrendered, meaning that even if Stalin didn't have the Horten brothers themselves, he could very likely have gotten control of their blueprints and plans.
The flying disc that crashed at Roswell had technology more advanced than anything the U.S. Army Air Forces had ever seen. Its propulsion techniques were particularly confounding. What made the craft go so fast? How was it so stealthy and how did it trick radar? The disc had appeared on Army radar screens briefly and then suddenly disappeared. The incident at Roswell happened just weeks before the National Security Act, which meant there was no true Central Intelligence Agency to handle the investigation. Instead, hundreds of Counter Intelligence Corps [CIC] officers from the U.S. Army's European command were dispatched across Germany in search of anyone who knew anything about Walter and Reimar Horten. Officers tracked down and interviewed the brothers' relatives, colleagues, professors, and acquaintances with an urgency not seen since Operation ALSOS, in which Allied Forces sought information about Hitler's atomic scientists and nuclear programs during the war.
A records group of more than three hundred pages of Army Intelligence documents reveals many of the details of "Operation Harass". They were declassified in 1994, after a researcher named Timothy Cooper filed a request for documents under the Freedom of Information Act. One memo, called 'Air Intelligence Guide for Alleged Flying Saucer Type Aircraft', detailed for CIC officers the parameters of the flying saucer technology the military was looking for, features which were evidenced in the craft that crashed at Roswell.
Extreme maneuverability and apparent ability to almost hover; a plan form approximating that of an oval or disc with dome shape on the surface; the ability to quickly disappear by high speed or by complete disintegration; the ability to group together very quickly in a tight formation when more than one aircraft are together; evasive motion ability indicating possibility of being manually operated, or possibly, by electronic or remote control.
The Counter Intelligence Corps' official 1947-1948 manhunt for the Horten brothers reads at times like a spy novel and at times like a wild goose chase. The first real lead in the hunt came from Dr. Adolf Smekal of Frankfurt, who provided CIC with a list of possible informants' names. Agents were told a dizzying array of alleged facts: Reimar was living in secret in East Prussia; Reimar was living in Göttingen, in what had been the British zone; Reimar had been kidnapped "presumably by the Russians" in the latter part of 1946. If you want to know where Reimar is, one informant said, you must first locate Hannah Reitsch, the famous aviatrix who was living in Bad Nauheim. As for Walter, he was working as a consultant for the French; he was last seen in Frankfurt trying to find work with a university there; he was in Dessau; actually, he was in Russia; he was in Luxembourg, or maybe it was France. One German scientist turned informant chided CIC agents. If they really wanted to know where the Horten brothers were, he said, and what they were capable of, then go ask the American Paperclip scientists living at Wright Field.
Neatly typed and intricately detailed summaries of hundreds of interviews with the Horten brothers' colleagues and relatives flooded the CIC. Army Intelligence officers spent months chasing leads, but most information led them back to square one. In the fall of 1947, prospects of locating the brothers seemed grim until November, when CIC agents caught a break. A former Messerschmitt test pilot named Fritz Wendel offered up some firsthand testimony that seemed real. The Horten brothers had indeed been working on a flying saucer-like craft in Heiligenbeil, East Prussia, right after the war, Wendel said. The airplane was ten meters long and shaped like a half-moon. It had no tail. The prototype was designed to be flown by one man lying down flat on his stomach. It reached a ceiling of twelve thousand feet. Wendel drew diagrams of this saucerlike aircraft, as did a second German informant named Professor George, who described a later model Horten as being "very much like a round cake with a large sector cut out", one that had been developed to carry more than one crew member. The later-model Horten could travel higher and faster, up to 1,200 mph, because it was propelled by rockets rather than jet engines. Its cabin was allegedly pressurized for high-altitude flights.
The Americans pressed Fritz Wendel for more. Could it hover? Not that Wendel knew. Did he know if groups could fly tightly together? Wendel said he had no idea. Were "high speed escapement methods" designed into the craft? Wendel wasn't sure. Could the flying disc be remotely controlled? Yes, Wendel said he knew of radio-control experiments being conducted by Siemens and Halske at their electrical factory in Berlin. Army officers asked Wendel if he had heard of any hovering or near-hovering technologies. No. Did Wendel have any idea about the tactical purposes for such an aircraft? Wendel said he had no idea.
The next batch of solid information came from a rocket engineer named Walter Ziegler. During the war, Ziegler had worked at the car manufacturer Bayerische Motoren Werke, or BMW, which served as a front for advanced rocket-science research. There, Ziegler had been on a team tasked with developing advanced fighter jets powered by rockets. Ziegler relayed a chilling tale that gave investigators an important clue. One night, about a year after the war, in September of 1946, four hundred men from his former rocket group at BMW had been invited by Russian military officers to a fancy dinner. The rocket scientists were wined and dined and, after a few hours, taken home. Most were drunk. Several hours later, all four hundred of the men were woken up in the middle of the night by their Russian hosts and told they were going to be taking a trip. Why Ziegler wasn't among them was not made clear. The Germans were told to bring their wives, their children, and whatever else they needed for a long trip. Mistresses and livestock were also fine. This was not a situation to which you could say no, Ziegler explained. The scientists and their families were transported by rail to a small town outside Moscow where they had remained ever since, forced to work on secret military projects in terrible conditions. According to Ziegler, it was at this top secret Russian facility, exact whereabouts unknown, that the German scientists were developing rockets and other advanced technologies under Russian supervision. These were Russia's version of the American Paperclip scientists. It was very possible, Ziegler said, that the Horten brothers had been working for the Russians at the secret facility there.
For nine long months, CIC agents typed up memo after memo relating various theories about where the Horten brothers were, what their flying saucers might have been designed for, and what leads should or should not be pursued. And then, six months into the investigation, on 12 March 1948, came abrupt news: the Horten brothers had been found. In a memo to the European command of the 970th CIC, Major Earl S. Browning Jr. explained. "The Horten Brothers have been located and interrogated by American Agencies", Browning said. The Russians had likely found the blueprints of the flying wing after all. "It is Walter Horten's opinion that the blueprints of the Horten IX may have been found by Russian troops at the Gotha Railroad Car Factory", the memo read. But a second memo, entitled 'Extracts on Horten, Walter', explained a little more. Former Messerschmitt test pilot Fritz Wendel's information about the Horten brothers' wingless, tailless, saucerlike craft that had room for more than one crew member was confirmed. "Walter Horten's opinion is that sufficient German types of flying wings existed in the developing or designing stages when the Russians occupied Germany, and these types may have enabled the Russians to produce the flying saucer".
There is no mention of Reimar Horten, the second brother, in any of the hundreds of pages of documents released to Timothy Cooper as part of his Freedom of Information Act request, despite the fact that both brothers had been confirmed as located and interrogated. Nor is there any mention of what Reimar Horten did or did not say about the later-model Horten flying discs. But one memo mentioned "the Horten X"
Due to the rapidly deteriorating war conditions in Germany in the last months of WWII, the RLM [Reichs Luftfahrt Ministerium, or German Air Ministry] issued a specification for a fighter project that would use a minimum of strategic materials, be suitable for rapid mass production and have a performance equal to the best piston engined fighters of the time. The 'Volksjäger' [People's Fighter] project, as it became known, was issued on 8 September 1944 to Arado, Blohm & Voss, Fieseler, Focke-Wulf, Junkers, Heinkel, Messerschmitt and Siebel. The new fighter also needed to weigh no more than 2000 kg [4410 lbs], have a maximum speed of 750 km/h [466 mph], an endurance of at least 30 minutes and a takeoff distance of 500 m [1640 ft], and it was to use the BMW 003 turbojet.
Although not chosen to submit a design, the Horten Brothers came up with the Ho X that met the specifications laid out by the RLM. Using a similar concept that they had been working on with their Horten IX [Ho 229] flying wing fighter, the Ho X was to be constructed of steel pipes covered with plywood panels in the center section, with the outer sections constructed from two-ply wood beams covered in plywood. The wing featured two sweepbacks, approximately 60 degrees at the nose, tapering into a 43 degree sweepback out to the wingtips. Control was to be provided by combined ailerons and elevators [elevons], along with drag surfaces at the wingtips for lateral control. A single BMW 003E jet engine with 900 kp of thrust was housed in the rear of the aircraft, which was fed by two air intakes on either side of the cockpit. One advantage to this design was that different jet engines could be accommodated, such as the Heinkel-Hirth He S 011 with 1300 kp of thrust, which was to be added later after its development was complete. The landing gear was to be of a tricycle arrangement and the pilot sat in a pressurized cockpit in front of the engine compartment. Armament consisted of a single MK 108 30mm cannon [or a single MK 213 30mm cannon] in the nose and two MG 131 13mm machine guns, one in each wing root.
In order to determine the center of gravity at various sweepback angles, scale models with a 3.05 meter [10 foot] wingspan were built. A full-sized glider was also under construction, but with the end of hostilities in 1945 neither it nor the Ho X itself was ever completed.
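Sweepback matters for balance because sweeping a wing moves its mean aerodynamic chord (MAC) aft, and a tailless aircraft must carry its center of gravity near, and slightly ahead of, the quarter-point of that chord to remain stable. The sketch below applies the standard swept-tapered-wing formulas; every dimension in it is invented for illustration and none is Horten Ho X data.

    # Standard swept-tapered-wing geometry (textbook formulas). All
    # dimensions are invented for illustration, not Horten Ho X data.
    import math

    def mac_geometry(c_root, c_tip, span, sweep_le_deg):
        # Returns (MAC length, spanwise MAC station, MAC leading-edge
        # offset aft of the root leading edge), all in meters.
        taper = c_tip / c_root
        mac = (2.0 / 3.0) * c_root * (1 + taper + taper**2) / (1 + taper)
        y_mac = (span / 6.0) * (1 + 2 * taper) / (1 + taper)
        x_le_mac = y_mac * math.tan(math.radians(sweep_le_deg))
        return mac, y_mac, x_le_mac

    for sweep in (43.0, 60.0):  # the two sweep angles mentioned above
        mac, y, x = mac_geometry(c_root=4.0, c_tip=1.0, span=14.0,
                                 sweep_le_deg=sweep)
        cg = x + 0.25 * mac  # balance near 25% MAC for a tailless wing
        print(f"sweep {sweep:.0f} deg: MAC {mac:.2f} m, "
              f"CG target ~{cg:.2f} m aft of the root leading edge")

Run at both angles, the same planform puts the balance point more than two meters further aft at 60 degrees of sweep than at 43, which is exactly the sort of question most cheaply settled with sub-scale models.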
Of the competing firms, the winner was Heinkel with its He 162 Volksjäger, a single-engine, jet-powered fighter aircraft that was designed and built quickly and made primarily of wood, as metals were in very short supply and prioritised for other aircraft. The He 162 was the fastest of the first generation of Axis and Allied jets. Other names given to the plane include 'Salamander', which was the codename of its construction program, and 'Spatz' [Sparrow], which was the name given to the plane by Heinkel.
and another referred to "the Horten XIII". No further details have been provided, and a 2011 Freedom of Information Act request by the author met a dead end.
The Horten Ho XIII B supersonic flying wing fighter was developed from the Ho XIII A glider, which had 60 degree swept-back wings and an underslung nacelle for the pilot. The XIII B was to be powered by a single BMW 003R turbojet/rocket engine, and its cockpit was located in the base of a large, sharply swept vertical fin. Like the research XIII A glider, the XIII B had its wings swept back at 60 degrees. Projected armament was two MG 213 20mm cannon, and the Ho XIII B was projected to be flying by mid-1946.
On 12 May 1948, the headquarters of European command sent the director of Intelligence at the United States Forces in Austria a puzzling memo. "Walter Horten has admitted his contacts with the Russians", it said. That was the last mention of the Horten brothers in the Army Intelligence's declassified record for "Operation Harass".
Whatever else officially exists on the Horten brothers and their advanced flying saucer continues to be classified as of 2011, and the crash remains from Roswell quickly fell into the blackest regions of government. They would stay at Wright-Patterson Air Force Base for approximately four years. From there, they would quietly be shipped out west to become intertwined with a secret facility out in the middle of the Nevada desert. No one but a handful of people would have any idea they were there.
Lt Col Walker, at the Air Material Command, asked his operatives in the field to discreetly track down the Horten brothers and ascertain whether their radical "Flying Wing" designs - developed during WWII - might be responsible for the rash of Flying Saucer sightings in 1947.
1. The Horten brothers, Reimar and Walter, are residing in Göttingen at present. However, both of them are traveling a great deal throughout the Bi-Zone. Walter at present is traveling in Bavaria in search of a suitable place of employment. It is believed that he may have contacted USAFE Headquarters in Wiesbaden for possible evacuation to the United States under "Paper Clip". Reimar is presently studying advanced mathematics at the University of Bonn, and is about to obtain his doctor's degree. It is believed that when his studies are completed he intends to accept a teaching position at the Institute for Technology [Technische Hochschule] in Braunschweig sometime in February or March 1948.
2. Both brothers are exceedingly peculiar and can be easily classified as eccentric and individualistic. Especially is this so of Reimar. He is the one who developed the theory of the flying wing and subsequently of all the models and aircraft built by the brothers. Walter, on the other hand, is the engineer who tried to put into practice the several somewhat fantastic ideas of his brother. The clash of personalities resulted in a continuous quarrel and friction between the two brothers. Reimar was always developing new ideas which would increase the speed of the aircraft or improve its manoeuvrability; Walter on the other hand was tearing down the fantastic ideas of his brother by practical calculations and considerations.
3. The two men worked together up to and including the "Horten VIII", a flying wing intended to be a fighter plane powered with two Hirt engines [HM-60-R] with a performance of approximately 650 horsepower each. After the "Horten VIII" was finished, one of the usual and frequent quarrels separated the two brothers temporarily. Walter went to work alone on the "Horten IX", which is a fighter plane of the flying wing design, with practically no changes from the model VIII except for the engines. Walter replaced the Hirt engines with BMW jets of the type TL-004. The plane was made completely of plywood and was furnished with a Messerschmitt Me 109 landing gear.
The model of this aircraft (Horten IX) was tested extensively in the supersonic wind tunnel [Mach No. 1.0] of the aerodynamic testing institute [Aerodynamische Versuchsanstalt], located in Göttingen. The tests were conducted in the late summer of 1944 under the personal supervision of Professor Betz, chief of the institute. Betz at that time was approximately sixty years old and, next to Prandtl [then seventy-eight years old], was considered to be the best man on aerodynamics in Germany. Betz's attitude toward the flying wing is very conservative to say the least. Basically he is against the design of any flying wing. According to the official reports about the tests, air disturbances were created on the wing tips, resulting in air vacuums, which in turn would prevent the steering mechanism from functioning properly. This seems logical as, of course, neither the ailerons nor the rudders could properly accomplish their function in a partial vacuum created by air disturbances and whirls.
In spite of that, two Horten IX's were built and tried out by a test pilot, Eugen [now living in Göttingen], at Rechlin in the fall of 1944. One of the two planes, piloted by another test pilot, developed trouble with one of the jet engines while the pilot was trying to ascertain the maximum rate of climb. The right jet stopped suddenly, causing the aircraft to go into an immediate spin and subsequent crash in which the pilot was killed. Eugen, however, was more fortunate in putting the other ship through all the necessary paces without the least trouble. He maintains that the maximum speed attained was around 950 km per hour, that there were no steering difficulties whatsoever, and that the danger of both head and tail spins was no greater than with any other conventional aircraft.
After extensive tests, the Horten IX was accepted by the German Air Force as represented by Göring, who ordered immediate mass production. The first order went to Gothaer Waggon Fabrik, located in Gotha [Thuringia] in January 1945. Göring requested that ten planes be built immediately and that the entire factory was to concentrate on and be converted to the production of the Horten IX. The firm in question received all the plans and designs of the ship. In spite of this explicit order, production of the Horten IX was never started. The technical manager of the firm, Berthold, immediately upon receipt of the plans, submitted a number of suggestions to improve the aircraft. It is believed that his intention was to eliminate the Horten brothers as inventors and to modify the ship to such an extent that it would be more his brain child than anybody else's. Numerous letters were exchanged between the High Command of the German Air Force and Dr. Berthold, an exchange finally interrupted by the armistice in May 1945. When US troops occupied the town of Gotha, the designs of the Horten IX were kept in hiding and not handed over to American Military authorities. The original designs in possession of the Horten brothers were hidden in a salt mine in Salzdettfurt, but the model tested by Eugen was destroyed in April 1945. The original designs were recovered from Salzdettfurt by British authorities in the summer of 1945.
The Horten brothers, together with Dr. Betz, Eugen and Dr. Stüper [the test pilot of the aerodynamic institute in Göttingen], were invited to go to England in the late summer of 1945, where they remained for approximately ninety days. They were interrogated and questioned about their ideas and were given several problems to work on. However, Reimar was very unwilling to cooperate to any extent whatsoever, unless an immediate contract was offered to him and his brother. Walter, on the other hand, not being a theoretician, was unable to comply, and Reimar was sufficiently stubborn not to move a finger. Upon their return to Göttingen, Walter remained in contact with British authorities and was actually paid a salary by the British between October 1945 and April 1946, as the British contemplated but never did offer him employment. Walter subsequently had a final argument with his brother and the two decided to part. Reimar then went to the University of Bonn to obtain his degree, and Walter organized an engineering office in Göttingen which served as a cover firm to keep him out of trouble with the labor authorities. Walter married Fräulein von der Gröben, an extremely intelligent woman, former chief secretary to Air Force General Udet.
In the spring of 1947 Walter Horten heard about the flying wing design in the United States by Northrop and decided to write Northrop for employment. He was answered in the summer of 1947 by a letter in which Northrop pointed out that he, himself, could not do anything to get him over to the States, but that he would welcome it very much if he could come to the United States and take up employment with the firm. He recommended that Walter should get in touch with USAFE Headquarters in Wiesbaden in order to obtain necessary clearance.
4. As can be seen from the above, most of the Hortens' work took place in Western Germany. According to our source, neither of the brothers ever had any contact with any representative of the Soviet Air Force or any other foreign power. In spite of the fact that Reimar is rather disgusted with the British for not offering him a contract, it is believed very unlikely that he has approached the Soviet authorities in order to sell out to them. The only possible link between the Horten brothers and the Soviet authorities is the fact that a complete set of plans and designs was hidden at the Gothaer Waggon Fabrik, and knowledge of this is held by Dr. Berthold and a number of other engineers. It is possible and likely that either Berthold or any of the others having knowledge of the Horten IX would have sold out to the Soviet authorities for one of a number of reasons. However, this will be checked upon in the future, and it is hoped that contact with the Gothaer Waggon Fabrik can be established.
All the above mentioned people contacted independently and at different times are very insistent on the fact that to their knowledge and belief no such design ever existed nor was projected by any of the German air research institutions. While they agree that such a design would be highly practical and desirable, they do not know anything about its possible realization now or in the past.
First, some excerpts from: "The Horten Flying Wing in World War II: The History & Development of the Ho 229", by H. P. Dabrowski, translated from the German by David Johnson [Schiffer Military History Vol. 47].
"In February 1945 Heinz Scheidhauer flew the Ho VII to Göttingen. Hydraulic failure prevented him from extending the aircraft's undercarriage, and he was forced to make a belly landing. The resulting damage had not been repaired when, on 7 April 1945, US troops occupied the airfield. The aircraft presumably suffered the same fate as the Ho V and was burned.
"The [Ho IX V1, RLM-Number 8-229] machine was sent to Brandis, where it was to be tested by the military and used for training purposes. It was found there by soldiers of the US 9th Armored Division at the end of the war and was later burned in a 'clearing action'.
"Construction of the Ho IX V3 was nearly complete when the Gotha Works at Friederichsroda were overrun by troops of the American 3rd Army's VII Corps on 14 April 1945. The aircraft was assigned the number T2-490 by the Americans. The aircraft's official RLM designation is uncertain, as it was referred to as the Ho 229 as well as the Go 229. Also found in the destroyed and abandoned works were several other prototypes in various stages of construction, including a two-seat version The V3 was sent to the United States by ship, along with other captured aircraft, and finally ended up in the H.H. "Hap" Arnold collection of the Air Force Technical Museum. The wing aircraft was to have been brought to flying status at Park Ridge, Illinois, but budget cuts in the late forties and early fifties brought these plans to an end. The V3 was handed over to the present-day National Air and Space Museum [NASM] in Washington D.C."
From these excerpts we see that certainly by late April or early May, 1945, the US had not just knowledge but at least semi-functional examples of the Horten flying wing. It can be assumed that the US would have wanted these craft back home for study as soon as was practical.
"It is possible within the present U.S. knowledge -provided extensive detailed development is undertaken- to construct a piloted aircraft which has the general description of the object in subparagraph (e) above which would be capable of an approximate range of 700 miles at subsonic speeds".
Why only possible? The Horten flying wing(s) had already been in US possession for two years.
"Any developments in this country along the lines indicated would be extremely expensive, time consuming and at the considerable expense of current projects and therefore, if directed, should be set up independently of existing projects".
Why expensive? The design, prototype and development work had already been completed. Is this a dodge for more money?
"Due consideration must be given the following: The possibility that these objects are of domestic origin - the product of some high security project not known to AC/AS-2 or this command".
How likely is it that the AMC was unaware of the captured Horten flying wing(s)?
"This opinion was arrived at in a conference between personnel from the Air Institute of Technology, Intelligence T-2, Office, Chief of Engineering Division, and the Aircraft, Power Plant and Propeller Laboratories of Engineering Division T-3".
How likely is it that these groups were unaware of the captured Horten flying wing(s)?
(1) The objects are domestic [U.S.] devices.
(2) Objects are foreign, and if so, it would seem most logical to consider that they are from a Soviet source.
"The Soviets possess information on a number of German flying-wing type aircraft, such as the Gotha P60A, Junkers EF-130 long-range jet bomber and the Horten 229 twin-jet fighter, which particularly resembles some of the descriptions of unidentified flying objects".
This report was prepared by the US Air Force's Directorate of Intelligence and the Office of Naval Intelligence, more than a year after Twining's letter.
How is it that these agencies believe that it is the Soviets who have the captured Horten flying wing(s) or just information when, by this time, the US has had them for at least three years? What value would there be in pointing the finger at the Soviets and suggesting that they have aircraft far in advance of our own?
Klass contends that the USAF Directorate of Intelligence and the Office of Naval Intelligence demonstrate no knowledge of a Roswell-related crashed object/disk because there wasn't such an incident. Yet, three years after the fact, these same offices demonstrate no knowledge of the US possession of the Horten flying wing(s).
Klass can't have it both ways - and neither can the rest of us.
If these offices were not aware of the US possession of the Horten flying wing(s) then the so-called UFO cover-up exceeded their need-to-know and began before the Roswell incident.
If these offices were aware of the US possession of the Horten flying wing(s) then why would they not acknowledge such [in a Top Secret document that took 37 years to declassify]?
Reports of an alien spacecraft being struck by lightning and crashing late at night in early July 1947, near Roswell, New Mexico, were the beginnings of the most compelling event in all UFO lore.
Originally reported in the "Fort Worth Star-Telegram" and confirmed by military officials as authentic, the report was later refuted by the military and the crash remains were claimed to be nothing more than a weather balloon.
Horten Parabola in 1945, copied by the U.S. postwar?
A prototype of the Horten Ho 229 made a successful test flight just before Christmas 1944, but by then time was running out for the Nazis and they were never able to perfect the design or produce more than a handful of prototype planes.
However, an engineering team has reconstructed the bomber, albeit one that cannot fly, from blueprints.
It was designed with a greater range and speed than any plane previously built and was the first aircraft to use the stealth technology now deployed by the US in its B-2 bombers.
It has been recognised that Germany's technological expertise during the war was years ahead of the Allies', from the Panzer tanks through to the V-2 rocket.
But, by 1943, the Nazis were keen to develop new weapons as they felt the war was turning against them.
Nazi bombers were suffering badly when faced with the speed and manoeuvrability of the Spitfire.
In 1943 Luftwaffe chief Hermann Göring demanded that designers come up with a bomber that would meet his "1,000, 1,000, 1,000" requirements – one that could carry 1,000 kg over 1,000 km flying at 1,000 km/h.
Two pilot brothers in their thirties, Reimar and Walter Horten, suggested a "flying wing" design which they were sure would meet Göring's specifications.
The centre pod was made from a welded steel tube, and was designed to be powered by a BMW 003 engine.
But the most significant innovation was Reimar Horten's idea to coat it in a mix of charcoal dust and wood glue which he believed would absorb the electromagnetic waves of radar.
They hoped that this, in conjunction with the aircraft's sculpted surfaces, would render it almost invisible to radar detectors.
This was the same method eventually used by the U.S. in its first stealth aircraft in the early 1980s, the F-117A 'Nighthawk'.
Until now, experts had always doubted claims that the Horten could actually function as a stealth aircraft.
But, using the blueprints and the only remaining prototype craft, the defence firm Northrop Grumman built a full-size replica of a Horten Ho 229, which cost £154,000 and took 2,500 man-hours to construct.
The aircraft is not completely invisible to the type of radar used in the war, but it would have been stealthy enough and fast enough to reach London before Spitfires could be scrambled.
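How much such a reduction buys can be framed with the standard radar range equation: for a fixed radar, maximum detection range scales with the fourth root of the target's radar cross-section (RCS). The short sketch below illustrates the scaling; the baseline range and the RCS ratios are assumptions chosen for illustration, not measurements from the Northrop Grumman replica test.

    # Radar-range-equation scaling: R_max is proportional to the fourth
    # root of RCS for a fixed radar. The baseline range and RCS ratios
    # are assumptions, not results from the replica test described above.

    def detection_range(baseline_km, rcs_ratio):
        # Detection range after the target's RCS is scaled by rcs_ratio.
        return baseline_km * rcs_ratio ** 0.25

    baseline = 160.0  # km; assumed wartime detection range for a bomber
    for ratio in (1.0, 0.5, 0.2):
        print(f"RCS x{ratio}: detected at ~{detection_range(baseline, ratio):.0f} km")

The fourth-root law means even a five-fold RCS reduction trims detection range by only about a third, which is why the combination with speed mattered: less warning distance and a faster closing target together cut the minutes available to scramble interceptors.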
"If the Germans had had time to develop these aircraft, they could well have had an impact," Peter Murton, aviation expert from the Imperial War Museum at Duxford, in Cambridgeshire told the "Daily Mail".
"In theory the flying wing was a very efficient aircraft design which minimised drag.
"It is one of the reasons that it could reach very high speeds in dive and glide and had such an incredibly long range".
The research was filmed for a documentary on the "National Geographic Channel".
In the early 1960s, the prototype jet was transferred to a Smithsonian facility in Maryland that is off-limits to the public. It remains there today.
“There have been no documents released on it, and the public has no access to it,” said Michael Jorgensen, a documentary filmmaker who secured National Geographic Channel backing to assemble a team of Northrop Grumman aeronautical engineers to study the craft and build a full-size replica from original plans. The completed model, which has a 55-foot wingspan, was quietly trucked to San Diego to join the San Diego Air & Space Museum's permanent collection.
The big mystery: Was this a stealth aircraft created more than three decades before modern stealth technology debuted? Could the wedge-shaped jet — almost completely formed of wood — actually evade radar detection? If so, military analysts wonder if the outcome of the war might have been different had the Germans had time to deploy the technology. The prototype craft was successfully tested by the Germans in late 1944.
The reconstruction process was filmed over three months last fall by Jorgensen's Flying Wing Films production company. Film crews followed the model to Northrop Grumman's restricted test site in the Mojave Desert in January, where the craft was mounted five stories high on a rotating pole. Radar was aimed at it from every direction and aerial attacks were simulated.
"It was a chance to be involved in solving a mystery that has baffled aviation historians for a long time," said Jim Hart, a spokesman for Northrop Grumman, which created the B-2 stealth bomber. |
Relocating and finding new providers or keeping existing providers?
I was wondering what everyone is going to do regarding doctor appointments in their new location. I go to a primary care physician, dentist, orthodontist, optometrist, etc., and want to continue receiving care.
Are people planning on just going to their current doctors over breaks, or finding new doctors in the new location?
This title should get you more relevant responses.
I'm planning to get new doctors at my new location (but mainly because I will have no reason to return to the state I'm in now). However, even if this was my hometown, I think I would get new ones. I did the 'keeping same providers' thing in college and it was a hassle trying to schedule certain things over break.
Of course, you can always try to find new providers and if you don't like any that you find keep your old ones.
T4C: thanks...I wasn't quite sure how to word it!
PsychApps: I think I will probably find new ones, because I, too, had trouble scheduling my appointments. Also, it can be pretty difficult when your insurance only allows you to schedule your next appointment more than one year after your last one.
I kept my same providers up until I moved across the country. When I lived a few hours away I'd schedule appointments and visit my folks while I was in town, but that is much harder now that I've moved a plane-ride away. I still have my same hair place (close family friends...and they are awesome), though I only see them a few times a year so I have to cheat on them here and there.
I switched because the insurance plan forced me to. Otherwise I would have had to pay more for out of network physicians.
Maybe I'm kind of neurotic, but I like to lower my anxiety as much as possible when it comes to possible doctor visits. |
Ronald Gasser, the man who shot and killed former New York Jets running back Joe McKnight last Thursday, has been arrested for manslaughter. What's next from a legal standpoint?
Ronald Gasser, the 54-year-old man who shot and killed former New York Jets running back Joe McKnight last Thursday, has been arrested for manslaughter. Gasser and McKnight, 28, were involved in an incident at the intersection of two roads in Terrytown, Louisiana, when Gasser shot an unarmed McKnight in broad daylight.
Conflicting accounts have been offered, and theorized, about how and why the shooting occurred. On Tuesday, Jefferson Parish Sheriff Newell Normand offered a detailed narrative of law enforcement’s assessment of what took place. The sheriff described Gasser and McKnight as engaged in a protracted road rage encounter that concluded with Gasser gunning down McKnight.
According to Sheriff Normand, the encounter likely began when, “at some point [while driving] Mr. McKnight may have cut off Mr. Gasser.” Being cut off apparently enraged Gasser, who then, according to the sheriff, “set out” against McKnight.
The two men then began a hostile and dangerous game of driving while confronting each other. It spanned several streets and involved both men “driving erratically” as they engaged in “verbal altercations, [driving] on each others’ tail, cutting in front of one another, zipping around vehicles and so on and so on.” The encounter would begin its conclusion when Gasser drove his car in front of McKnight at a red light. McKnight then pulled his car around to the right-hand side of Gasser’s car. At that point, Sheriff Normand indicates, Gasser’s car was “hemmed in” due to its placement on the road and surrounding cars, so Gasser had “no avenue for retreat as it relates to his vehicle.” The two men then continued their verbal quarrel with their car windows down.
McKnight then exited his vehicle and walked toward Gasser, as both men yelled at one another. Sheriff Normand declined to answer whether McKnight in any way entered Gasser’s car, but he did observe that McKnight was found “bent over” indicating that McKnight was looking “into [Gasser’s] car to go eye-to-eye with Mr. Gasser.” The sheriff also said McKnight and Gasser “were in a verbal encounter at the vehicle.” At some point while the two men screamed at each other in what appears to be close proximity, Gasser pulled his gun from between his seat and car console and fired three shots at McKnight, killing him.
While McKnight was unarmed, a gun was found in the car he was driving. However, Sheriff Normand revealed that McKnight’s stepfather owned both the car and the gun. According to the sheriff, there is “no evidence to suggest that [McKnight] insinuated” to Gasser that there was a gun in the car.
Gasser was questioned by police officers last Thursday and released last Friday. At the time of Gasser’s release, Sheriff Normand explained that Gasser had apparently fired his weapon while inside of his car. This led some to speculate that Gasser may have acted under the belief that he was defending himself.
As explained on SI.com, Gasser's release last Friday did not mean that he would not be arrested, and now he has indeed been arrested. During Tuesday's press conference, Sheriff Normand highlighted the state's justifiable homicide law, which empowers persons in certain situations to lawfully kill another person. He did so to insist that his office needed to conduct a close examination of the evidence before determining which charge, if any, would be appropriate to levy on Gasser.
As explained below, Gasser could ultimately face the more severe charge of second degree murder, the lesser charge of negligent homicide or no charge at all for McKnight’s death. If he stands trial, Gasser’s defense would highlight Louisiana’s justifiable homicide law and maintain that it allowed him to stand his ground against McKnight.
Manslaughter is a serious offense, but is significantly lower in severity than murder. Under Louisiana law, manslaughter is defined as a homicide “committed in sudden passion or heat of blood immediately caused by provocation sufficient to deprive an average person of his self-control and cool reflection.” Whereas a murder conviction in Louisiana carries at least a life sentence in prison without parole—in fact, a first degree murder conviction can lead to the death penalty—a manslaughter conviction carries a maximum sentence of 40 years in prison. A defendant convicted of manslaughter rather than murder at least has hope that he or she will be released from prison during his or her lifetime.
To prove Gasser committed manslaughter, prosecutors will need to convince jurors that Gasser lost his temper and killed McKnight without justifiable reason. The two men clearly had a heated and protracted confrontation prior to the shooting, and prosecutors will need to convince jurors that, due to sudden passion and rage, Gasser shot at McKnight. Notice that prosecutors do not need to establish that Gasser had planned to kill McKnight—manslaughter only requires showing that Gasser experienced rage and killed McKnight.
The accuracy of eyewitness testimony is always a source of concern. Memories, as we can all attest, are imperfect. Sometimes eyewitnesses remember sequences of events in ways that are different from what actually took place. Sometimes eyewitnesses exaggerate or even outright lie. Sheriff Normand’s remarks last Friday, which portrayed Gasser as shooting at McKnight from inside Gasser’s car, contradict the account offered by one eyewitness. This eyewitness told The Times-Picayune she saw Gasser shoot McKnight and then walk over to a wounded McKnight on the ground and declare, “I told you don’t f--- with me,” only to shoot McKnight again. Both last Friday and on Tuesday, Sheriff Normand categorically rejected this eyewitness account. Last Friday the sheriff stressed, “Mr. Gasser did not stand over Mr. McKnight and fire shots into him.” On Tuesday, Sheriff Normand said this same witness “lied” and the sheriff theorized it was because “some people wanted that [sensationalized] story to be true.” The same witness, Sheriff Normand observed, “told three different stories in a span of an hour,” thereby losing any and all credibility.
• Time of day and weather: The shooting happened at approximately 2:43 p.m. on a mostly sunny day. Gasser, therefore, was more likely to have a decent view of whether McKnight was armed, and of McKnight’s possible intentions, than Gasser would have had later in the day or if it was foggy.
• McKnight’s clothing: If it displays a gunpowder pattern, McKnight’s clothing could shed light on McKnight’s proximity to Gasser at the time of the shooting. The further McKnight was from Gasser, the less of an immediate threat McKnight may have posed to Gasser—and thus the less justification Gasser would have had to shoot McKnight.
• The autopsy report (or reports, plural, if McKnight’s family commissions an independent autopsy): According to authorities, Jefferson Parish Coroner Gerald Cvitanovich found that McKnight suffered three bullet wounds. Multiple bullets fired on an unarmed person could be interpreted as excessive if Gasser’s only objective was to defend himself.
• The casings: It appears the three bullet casings were found inside Gasser’s car, which—unless the casings were moved prior to their discovery—is consistent with Gasser firing at McKnight from within Gasser’s car. Gasser being in his car and presumably protected in it raises questions about why he thought McKnight posed such an imminent threat. On the other hand, as Sheriff Normand revealed, Gasser’s car was “hemmed in” at that time.
• Video or recordings of the shooting: Sheriff Normand told media on Tuesday that there does not appear to be any video or recordings of the incident. He and his staff reached out to over 70 business owners in attempts to retrieve video, but didn’t have any luck. Now five days after the incident, still no video or recording has emerged, which suggests that none probably will. Still, it is possible that an eyewitness recorded the incident on his or her phone and hasn’t yet shared it with law enforcement.
• Crime scene reenactments: On Tuesday, Sheriff Normand indicated that he and his staff have conducted multiple reenactments of the incident between Gasser and McKnight. Such reenactments are critical in prosecutions where crucial facts were not visually or audibly recorded. The reenactments utilize scientific methods and expert analysis to try to recreate what took place.
• Text messages and phone calls: If Gasser texted or called anyone after the shooting, or if McKnight communicated with others about his incident with Gasser before Gasser shot him, prosecutors would be able to paint a more complete picture of the events. Similarly, if eyewitnesses texted or called, their “present sense” impressions of the incident taking place would be valuable.
Approximately 33 states, including Louisiana, have adopted stand-your-ground laws. These laws allow a defendant to argue that the use of deadly force was justified in defense against a grave threat or a perceived grave threat. Stand-your-ground laws vary in important ways by state, but they dispense with any requirement that a person retreat, even when it is safe to do so. In Louisiana, a defendant who was in a car can, if presented with a grave threat, claim justifiable homicide.
Gasser will argue that McKnight posed an immediate threat. According to Sheriff Normand, McKnight exited his car and approached Gasser, whose car had no means to escape. McKnight appeared to have done so while the hostilities between him and Gasser were escalating. McKnight’s proximity to Gasser is crucial. If McKnight had been standing right at Gasser’s window and was trying to get into the car, Gasser would have a stronger defense. If, instead, McKnight had been standing several feet away, he would have posed less of an immediate threat.
Gasser will also stress that not only was there no means for his car to escape, but he had no duty under Louisiana’s justifiable homicide law to retreat. Thus, even if it was possible that Gasser could have exited his car and safely left the scene, he had no legal obligation to do so.
Gasser’s defense will also depend on his impressions of McKnight. We don’t know if Gasser—mistakenly—thought McKnight was armed, though Sheriff Normand says that McKnight did not tell Gasser that there was a gun inside McKnight’s car (owned by McKnight’s stepfather). If Gasser chooses to testify in his own defense, Gasser will need to debunk the prosecution’s theory that he simply lost his temper in the heat of passion and started firing, and instead insist that he justifiably shot McKnight to protect himself.
In a trial, Gasser’s attorney would also highlight that Gasser fully cooperated with law enforcement following the shooting. According to Sheriff Normand, Gasser answered all questions and voluntarily consented to a search of his home despite law enforcement lacking probable cause to conduct that search.
A trial of Gasser would attract attention for different reasons. Whether unlawfully or lawfully, he killed a former NFL player. Race is also an important dynamic. Gasser is white and McKnight was African-American. The shooting of unarmed African-American men has recently generated substantial discussion in the United States. Gasser’s trial would be followed and discussed within this broader context.
Still, should there be a trial, the presiding judge would surely be sensitive to the broader context of the case. Along those lines, the judge would carefully scrutinize potential jurors to ensure they would have an open mind while serving as jurors. The jurors might also be sequestered during the trial. Jurors in the New Orleans trial of Cardell Hayes, who is accused of murdering former New Orleans Saints defensive end Will Smith, have been sequestered. Also, as in the Hayes trial, prosecutors would not need a unanimous verdict to convict Gasser: under Louisiana law, a criminal conviction can be obtained with the support of 10 of the 12 jurors.
Stay tuned on SI.com for key developments in the case against Gasser for McKnight’s death. |
You may feel like giving your whole house a complete makeover, but where should you begin? Making some small changes to one room can often make a big difference in that area of your home. For example, the kitchen is often a big focal point in the home, and a great place to start remodeling. If your kitchen gets a facelift, it can bring a nice, new feel to your home.
You can add some class to your kitchen with the addition of a kitchen island. In fact, assembling a kitchen island is an economical way to give your kitchen a modern appearance, and it will also create a perfect focal point for the whole room. There are a number of variables you should consider when creating a kitchen island.
Wedging a kitchen island into a very small space may clutter your kitchen and prove unappealing despite a beautiful design. Adding a kitchen island to a more spacious kitchen, however, is a great way to make use of the extra square footage in your home.
The extra permanent fixture in your kitchen creates a great addition, not only of counter space, but of storage space, and possibly even eating space as well. Other options are appliances installed in the island, or even a sink. Your desired purpose for the kitchen island is a crucial variable, as it affects the design.
A kitchen island may be designed in an oblong shape, a curved shape, a square shape, a rectangular shape, or an L-shape. Basically, the shape and layout of the island will depend largely on the space you have available, and also on what you are using it for. There are even ways of designing the island so that it accommodates multiple uses.
Your layout for the brand new island in your kitchen should also include the ideal overhead lighting for your new surface. Ambient lighting, for example, might add drama to the kitchen, but it is not enough for cooking, stirring, or chopping.
It's essential to weigh all of these variables before you hire a company to design and build a kitchen island for you. If you are ready to start this process, Kresge Contracting, Inc., of Columbus, Ohio, will be happy to assist you in all your construction needs. |
0.999634 | Propose a Project for a new Agile & Lean software development course at The University of Auckland.
Do you have a neat project idea that requires programming but lack the time or expertise to develop it yourself? The University of Auckland Software Engineering students can help!
I’m pleased to announce the launch of a new 700-level course in Software Engineering at The University of Auckland.
SoftEng761 is a project-based course focusing on teamwork, customer collaboration, and core software engineering practices. It is designed to allow students to gain practical experience in using Agile and Lean software development methods to develop software prototypes in collaboration with customers (industry representatives). Typical students are expected to be final-year Bachelor of Engineering students and Master of Engineering Studies students. This provides an excellent opportunity for you to explore a prototype of a new software application or extend/refresh an existing system. Working with our final-year BE and ME Studies students will also help raise your visibility as a potential employer of our future graduates.
Please feel free to forward this post on to friends and colleagues who may be interested. |
0.97011 | The goal is to reach the summit of a formation or the endpoint of a pre-defined route without falling.
Rock climbing competitions have objectives of completing the route in the quickest possible time or the farthest along an increasingly difficult route.
Scrambling, another activity involving the scaling of hills and similar formations, is similar to rock climbing.
However, rock climbing is generally differentiated by its sustained use of hands to support the climber's weight as well as to provide balance.
Rock climbing is a physically and mentally demanding sport, one that often tests a climber's strength, endurance, agility and balance along with mental control. |
If we are going to model God's love and commitment in marriage, we must ask ourselves: Are we truly willing to be committed to our..
"How am I glutted with conceit of this! The "Prologue" in "Act 1" provides the background for. In the passage we learn that his time has..
of Verdun, so the main attackers were British. The British troops on the Somme comprised a mixture of the remains of the pre-war regular army; the Territorial Force; and Kitchener's Army, a force of volunteer recruits including many Pals' Battalions, recruited from the same places and occupations. The battle was intended to hasten a victory for the Allies and was the largest battle of the First World War on the Western Front. Many casualties were inflicted on the Germans but the French made slower progress. When relieved the brigade had lost 2,536 men, similar to the casualties of many brigades on 1 July. Tactical developments: The original British Expeditionary Force (BEF) of six divisions and the Cavalry Division had lost most of the army's pre-war regular soldiers in the battles of 1914 and 1915. The bulk of the army was made up of volunteers of the Territorial Force and Lord Kitchener's New Army.
Success there and at Mouquet Farm allowed Gough to threaten the German fortress at Thiepval. Subsequent operations: Ancre, January-March 1917 (Main article: Operations on the Ancre, January-March 1917). After the Battle of the Ancre (13-18 November 1916), British attacks on the Somme front were stopped by the weather and military operations by both sides were mostly restricted. In the phase dubbed the Battle of Albert, Haig persisted in pushing forward over the next several days. There was a lot of disease in the trenches. The original Allied estimate of casualties on the Somme, made at the Chantilly Conference on 15 November 1916, was 485,000 British and French casualties and 630,000 German. |
We found additional records for Nora Landman.
Nora Landman is buried in the Saint Tudor Church cemetery at the location shown on the map below. This GPS information is available ONLY on BillionGraves. Our technology can help you find the grave location and also other family members buried nearby.
Nora Landman was 14 years old when World War II: German forces in the west agree to an unconditional surrender. The German Instrument of Surrender ended World War II in Europe. The definitive text was signed in Karlshorst, Berlin, on the night of 8 May 1945 by representatives of the three armed services of the Oberkommando der Wehrmacht (OKW) and the Allied Expeditionary Force together with the Supreme High Command of the Red Army, with further French and US representatives signing as witnesses. The signing took place 9 May 1945 at 00:16 local time.
Nora Landman was 22 years old when Jonas Salk announced the successful test of his polio vaccine on a small group of adults and children (vaccination pictured). Jonas Edward Salk was an American medical researcher and virologist. He discovered and developed one of the first successful polio vaccines. Born in New York City, he attended New York University School of Medicine, later choosing to do medical research instead of becoming a practicing physician. In 1939, after earning his medical degree, Salk began an internship as a physician scientist at Mount Sinai Hospital. Two years later he was granted a fellowship at the University of Michigan, where he would study flu viruses with his mentor Thomas Francis, Jr.
Nora Landman was 32 years old when John F. Kennedy was assassinated by Lee Harvey Oswald in Dallas, Texas; hours later, Lyndon B. Johnson was sworn in aboard Air Force One as the 36th President of the United States. John Fitzgerald Kennedy, commonly referred to by his initials JFK, was an American politician who served as the 35th President of the United States from January 1961 until his assassination in November 1963. He served at the height of the Cold War, and the majority of his presidency dealt with managing relations with the Soviet Union. As a member of the Democratic Party, Kennedy represented the state of Massachusetts in the United States House of Representatives and the U.S. Senate prior to becoming president.
Nora Landman was 46 years old when Star Wars is released in theaters. Star Wars is a 1977 American epic space opera film written and directed by George Lucas. It is the first film in the original Star Wars trilogy and the beginning of the Star Wars franchise. Starring Mark Hamill, Harrison Ford, Carrie Fisher, Peter Cushing, Alec Guinness, David Prowse, James Earl Jones, Anthony Daniels, Kenny Baker, and Peter Mayhew, the film focuses on the Rebel Alliance, led by Princess Leia (Fisher), and its attempt to destroy the Galactic Empire's space station, the Death Star.
Nora Landman was 55 years old when Space Shuttle program: STS-51-L mission: Space Shuttle Challenger disintegrates after liftoff, killing all seven astronauts on board. The Space Shuttle program was the fourth human spaceflight program carried out by the National Aeronautics and Space Administration (NASA), which accomplished routine transportation for Earth-to-orbit crew and cargo from 1981 to 2011. Its official name, Space Transportation System (STS), was taken from a 1969 plan for a system of reusable spacecraft of which it was the only item funded for development. |
0.999886 | I'm an artist. How do I get a gallery?
Aunt Bessie died and left me a painting. What now?
There are several new sites that do appraisals for a modest fee. If you cannot read the artist's signature on the work and have absolutely no idea what you have, these sites can help. They are: www.collectingchannel.com, www.appraiseitnet.com, www.evalueit.com, www.auctionwatch.com.
Do your research. Who did the painting? Are they listed? Take a photo of the painting and take it to your local museum or art history department. Can they help you with the artist's name? If you get this information, look the painter up in books in the local library and make a photocopy of the artist's biography. Perhaps a curator from your local museum knows the name of a dealer who specializes in work from the region and period of the painting. Get an appraisal from that dealer. Send a copy of the biography and photograph to Sotheby's or Christie's or a local auction house for an appraisal. When you have this information, either put the painting up at auction, give it to the dealer for resale, give it to a museum for a tax deduction, or enjoy it yourself.
Aunt Bessie was an artist. She died and left me with 100 paintings. Now what?
I always recommend that you visit your local university art history department and try to hook up with an art history student who might be interested in researching Bessie's career. These budding art historians are always looking for an undiscovered, deceased artist to bring to the market in an effort to make their name. Lastly, contact a lawyer involved with the art market to appraise the estate.
I saw a painting I liked in a local gallery. Should I ask for a discount?
Sure, everyone else does - why not you? The Art Lady's law of the discount: "Prices of art never go down. Discounts on art go up." Whether or not you'll get one is another question. Good situations for discounts include: if the artist is not selling well (obviously), if the dealer feels you might be an ongoing client he wants to cultivate, or if the dealer has had the piece in inventory for a while and his cost was low. There are situations where the dealer cannot consider a discount, such as if he's gotten the piece on consignment from a collector and has little negotiating room, or if the artist is selling out. It never hurts to ask. |
0.999998 | My grandmother is on hospice and has only a few months to live. What are some things that I should have in order?
I am so sorry to hear about your grandmother, but from a legal standpoint, you would want to find out if your grandma has an Advance Health Care Directive (AHCD), which is a legal document that names an agent to make medical decisions on her behalf. You would want to discuss her wishes regarding life support and CPR. The documents that would memorialize her wishes regarding end-of-life decisions would be a DNR (now called POLST) and a Living Will.
On the financial side, I'd consider whether she has a durable power of attorney nominating someone to manage her finances. However, because a durable power of attorney is null and void once someone passes away, she would also need to consider having a Will or a Trust. Depending on state laws, a Will may not be sufficient to avoid a court process when she passes, so you might want to consider speaking to an attorney near you about that. |
0.984925 | Sichuan cuisine, Szechwan cuisine, or Szechuan cuisine (/ˈsɛʃwɒn/ or /ˈsɛtʃwɒn/; Chinese: 四川菜; pinyin: Sìchuān cài or Chinese: 川菜; pinyin: Chuān cài) is a style of Chinese cuisine originating from Sichuan province in southwestern China. It has bold flavours, particularly the pungency and spiciness resulting from liberal use of garlic and chili peppers, as well as the unique flavor of the Sichuan pepper. There are many local variations within Sichuan province and the Chongqing municipality, which was part of Sichuan until 1997. Four sub-styles include Chongqing, Chengdu, Zigong, and Buddhist vegetarian style.
UNESCO declared Chengdu to be a city of gastronomy in 2011 in order to recognize the sophistication of its cooking.
Sichuan in the Middle Ages welcomed Near Eastern crops, such as broad beans, sesame, and walnuts, and starting in the 16th century its list of major crops was lengthened by New World newcomers. The characteristic chili pepper came from Mexico, but probably overland from India or by river from Macao, replacing the spicy peppers of ancient times and complementing the Sichuan pepper (huajiao). Other newcomers from the New World included maize (corn), which largely replaced millet; white potatoes, introduced by Catholic missions; and sweet potatoes. The population was cut by perhaps three quarters in the wars from the Ming to the Qing dynasty, and settlers from nearby Hunan province brought their cooking styles with them.
Sichuan is colloquially known as the "heavenly country" due to its abundance of food and natural resources. One ancient Chinese account declared that the "people of Sichuan uphold good flavor, and they are fond of hot and spicy taste." Most Szechuan dishes are spicy, although a typical meal includes non-spicy dishes to cool the palate. Szechuan cuisine is composed of seven basic flavours: sour, pungent, hot, sweet, bitter, aromatic, and salty. Szechuan food is divided into five different types: sumptuous banquet, ordinary banquet, popularised food, household-style food, and food snacks. Milder versions of Sichuan dishes remain a staple of American Chinese cuisine.
Sichuan's geography of mountains and plains and location in the western part of the country has shaped food customs. The Sichuan Basin is a fertile producer of rice and vegetables, while a wide variety of plants and herbs prosper in the upland regions, as well as mushrooms and other fungi. Yoghurt, which probably spread from India through Tibet in medieval times, is consumed among the Han Chinese, a custom which is unusual in other parts of the country. Unlike sea salt, the salt produced from Sichuan salt springs and wells does not contain iodine, leading to goiter problems before the 20th century.
Szechuan cuisine often contains food preserved through pickling, salting, and drying and is generally spicy owing to heavy application of chili oil. The Sichuan pepper (Chinese: 花椒; pinyin: huājiāo; literally: "flower pepper") is commonly used. Sichuan pepper has an intensely fragrant, citrus-like flavour and produces a "tingly-numbing" (Chinese: 麻; pinyin: má) sensation in the mouth. Also common are garlic, chili peppers, ginger, star anise, and other spicy herbs, plants and spices. Broad bean chili paste (simplified Chinese: 豆瓣酱; traditional Chinese: 豆瓣醬; pinyin: dòubànjiàng) is also a staple seasoning. The region's cuisine has also been the source of several prominent sauces widely used in Chinese cuisine in general today, including yuxiang (魚香), mala (麻辣), and guaiwei (怪味).
Common preparation techniques in Szechuan cuisine include stir frying, steaming and braising, but a complete list would include more than 20 distinct techniques.
Pork is overwhelmingly the major meat. Beef is somewhat more common in Szechuan cuisine than it is in other Chinese cuisines, perhaps due to the prevalence of oxen in the region. Stir-fried beef is often cooked until chewy, while steamed beef is sometimes coated with rice flour to produce a very rich gravy. Szechuan cuisine also utilizes various bovine and porcine organs as ingredients, such as intestine, arteries, the head, tongue, skin, and liver, in addition to other commonly utilized portions of the meat.
Rabbit meat is also much more popular in Sichuan than elsewhere in China, with the Sichuan Basin and Chongqing estimated to account for some 70 percent of China's rabbit meat consumption.
Although many dishes live up to their spicy reputation, a large percentage of recipes use little or no hot spice at all, including dishes such as tea-smoked duck.
↑ "Szechuan." at Merriam-Webster Online.
↑ Fuchsia Dunlop. Land of Plenty: A Treasury of Authentic Sichuan Cooking. (New York: W.W. Norton, 2003; ISBN 0393051773).
↑ UNESCO (2011). "Chengdu: UNESCO City of Gastronomy". UNESCO. Retrieved May 26, 2011.
↑ 5.0 5.1 傅培梅 (2005). Mei Pei Featured Dishes (CH: 培梅名菜精選: 川浙菜專輯). 橘子文化事業有限公司. p. 9.
↑ Tropp, Barbara (1982). The Modern Art of Chinese Cooking. New York: Hearst Books. p. 183. ISBN 0-688-14611-2.
Fuchsia Dunlop. Land of Plenty : A Treasury of Authentic Sichuan Cooking. New York: W.W. Norton, 2003. ISBN 0393051773.
Fuchsia Dunlop. Shark's Fin and Sichuan Pepper: A Sweet-Sour Memoir of Eating in China. (New York: Norton, 2008). ISBN 9780393066579. The author's experience and observations, especially in Sichuan.
Jung-Feng Chiang, Ellen Schrecker and John E. Schrecker. Mrs. Chiang's Szechwan Cookbook : Szechwan Home Cooking. New York: Harper & Row, 1987. ISBN 006015828X.
Eugene Anderson. "Sichuan (Szechuan) Cuisine," in Solomon H. Katz and William Woys Weaver, eds., Encyclopedia of Food and Culture. (New York: Scribner, 2003; ISBN 0684805685). Vol I pp. 393–395. |
Hey, can you design an identity system for some popcorn that conveys the spirit of Brazil? Absolutely! A small popcorn shop in Sao Paulo contacted us and wanted us to create some lively packaging that would make customers want to interact with the product. The execution is a full-bleed illustration of Rio de Janeiro (one of the main cities in Brazil) in colors that "pop". The idea was to add more city illustrations as the brand grows to give the consumer a greater variety. The different colors distinguish the unique flavors they have to offer. |
0.999997 | This article is about the upcoming 2020 film. For the 1962 film, see King Kong vs. Godzilla.
Godzilla vs. Kong is an upcoming American monster film directed by Adam Wingard. It is a sequel to Godzilla: King of the Monsters (2019) and Kong: Skull Island (2017), and will be the fourth film in Legendary's MonsterVerse. The film will also be the 36th film in the Godzilla franchise, the ninth film in the King Kong franchise, and the fourth Godzilla film to be completely produced by a Hollywood studio.[Note 1] The film stars Alexander Skarsgård, Millie Bobby Brown, Rebecca Hall, Brian Tyree Henry, Shun Oguri, Eiza González, Jessica Henwick, Julian Dennison, Kyle Chandler, and Demián Bichir.
The project was announced in October 2015 when Legendary announced plans for a shared cinematic universe between Godzilla and King Kong. The film's writers room was assembled in March 2017 and Wingard was announced as the director in May 2017. Principal photography began in November 2018 in Hawaii and Australia and wrapped in April 2019. Godzilla vs. Kong is scheduled to be released on March 13, 2020, in 2D, 3D, and IMAX.
Zhang Ziyi reprises her role as Dr. Chen, with Van Marten cast as her assistant. Lance Reddick has also been cast in an undisclosed role.
In September 2015, Legendary moved Kong: Skull Island from Universal to Warner Bros., which sparked media speculation that Godzilla and King Kong would appear in a film together. In October 2015, Legendary confirmed that they would unite Godzilla and King Kong in Godzilla vs. Kong, at the time targeted for a May 29, 2020, release. Legendary plans to create a shared cinematic franchise "centered around Monarch" that "brings together Godzilla and Legendary’s King Kong in an ecosystem of other giant super-species, both classic and new."
Producer Alex Garcia confirmed that the film will not be a remake of the Toho version, stating, "the idea is not to remake that movie." In May 2016, Warner Bros. announced that the film would be released on May 29, 2020. In May 2017, Warner Bros. bumped the film's original release date to a week earlier, from May 29 to May 22, for a Memorial Day weekend release. That same month, Adam Wingard was announced as the director for Godzilla vs. Kong.
"I really want you to take those characters seriously. I want you to be emotionally invested, not just in the human characters, but actually in the monsters. It’s a massive monster brawl movie. There’s lots of monsters going crazy on each other, but at the end of the day I want there to be an emotional drive to it. I want you to be emotionally invested in them. I think that’s what’s going to make it really cool."
Wingard also confirmed that the film will tie in with Godzilla: King of the Monsters, be set in modern times, and feature a "more rugged, a bit more aged Kong."
"Godzilla vs. Kong was my first experience running a writer's room, and it was fantastic. It was a blast reading samples, meeting different writers, and crafting a story in a group setting. It felt similar to animation, where the film is happening up on the walls, and the end result is better than any one person could accomplish on their own."
Michael Dougherty and Zach Shields, the director and co-writers of Godzilla: King of the Monsters, did rewrites to ensure that certain themes from King of the Monsters were carried over to the film and that some characters were properly developed.
In June 2017, it was announced that Ziyi Zhang had joined Legendary's Monsterverse, having a reportedly "pivotal" role in both Godzilla: King of the Monsters and Godzilla vs. Kong. In June 2018, Julian Dennison was cast alongside Van Marten, Millie Bobby Brown, and Kyle Chandler, who would reprise their roles from Godzilla: King of the Monsters. Legendary also sent an offer to Frances McDormand for a role. In July 2018, it was revealed that Danai Gurira was in early talks to join the film. In October 2018, Brian Tyree Henry, Demián Bichir, Alexander Skarsgård, Eiza González, and Rebecca Hall were added to the cast. In November 2018, Jessica Henwick, Shun Oguri, and Lance Reddick joined the cast of the film.
Principal photography began on November 12, 2018, in Hawaii and Australia and was expected to end in February 2019 under the working title Apex. Production was initially slated to begin on October 1, 2018. For the Hawaii shoot, the crew filmed on the USS Missouri, at Manoa Falls, and in downtown Honolulu. The crew established a base on Kalanianaole Highway, which was closed until November 21. Local crews and extras were used for the film. In January 2019, filming resumed in Gold Coast, Queensland, at Village Roadshow Studios for an additional 26 weeks. In April 2019, Wingard confirmed via Instagram that principal photography had wrapped.
Godzilla vs. Kong is scheduled to be released on March 13, 2020 in 2D, 3D, and IMAX by Warner Bros. Pictures, except in Japan where it will be distributed by Toho. The film was previously scheduled to be released on May 29 and May 22, 2020.
^ a b c d "Warner Bros. Pictures' and Legendary Entertainment's Monsterverse Shifts into Overdrive as Cameras Roll on the Next Big-Screen Adventure "Godzilla Vs. Kong"". Business Wire. Archived from the original on January 19, 2019. Retrieved November 12, 2018.
^ Fleming Jr., Mike (September 10, 2015). "King Kong On Move To Warner Bros, Presaging Godzilla Monster Matchup". Deadline Hollywood. Archived from the original on September 11, 2015. Retrieved September 10, 2015.
^ Masters, Kim (September 16, 2015). "Hollywood Gorilla Warfare: It's Universal vs. Legendary Over 'Kong: Skull Island' (and Who Says "Thank You")". The Hollywood Reporter. Archived from the original on September 17, 2015. Retrieved September 17, 2015.
^ "Legendary and Warner Bros. Pictures Announce Cinematic Franchise Uniting Godzilla, King Kong and Other Iconic Giant Monsters" (Press release). Legendary Pictures. October 14, 2015. Archived from the original on November 5, 2015. Retrieved October 14, 2015.
^ Mirjahangir, Chris (December 2, 2015). "Interview: Alex Garcia – Roundtable (2015)". Toho Kingdom. Archived from the original on June 19, 2017. Retrieved July 14, 2017.
^ a b Rahman, Abid (May 10, 2016). "Warner Bros. Moves Dates For 'Godzilla 2,' 'Godzilla vs Kong'". The Hollywood Reporter. Archived from the original on March 30, 2017. Retrieved May 10, 2016.
^ Busch, Jenna (May 3, 2017). "Godzilla vs. Kong and More Release Date Changes From Warner Bros". Coming Soon. Archived from the original on May 4, 2017. Retrieved May 3, 2017.
^ Kit, Borys (May 30, 2017). "'Godzilla vs. Kong' Finds Its Director With Adam Wingard (Exclusive)". The Hollywood Reporter. Archived from the original on May 31, 2017. Retrieved May 30, 2017.
^ Gingold, Michael (July 20, 2017). "Adam Wingard Talks Godzilla vs. Kong And Directorial Freedom". Birth.Movies.Death. Archived from the original on July 23, 2017. Retrieved July 20, 2017.
^ Whitney, E. Oliver (August 18, 2017). "Adam Wingard Wants 'Godzilla vs. Kong' to Make You Cry". Screen Crush. Archived from the original on August 19, 2017. Retrieved August 19, 2017.
^ Nordine, Michael (August 20, 2017). "'Godzilla vs. Kong': Adam Wingard Says the Epic Battle Will Have a Definitive Winner". IndieWire. Archived from the original on January 9, 2018. Retrieved January 8, 2018.
^ Mithaiwala, Mansoor (August 22, 2017). "Godzilla vs. Kong Set in Modern Day, Ties to Godzilla 2". Screen Rant. Archived from the original on December 24, 2017. Retrieved August 24, 2017.
^ Aiken, Keith (May 10, 2015). "Godzilla Unmade: The History of Jan De Bont's Unproduced TriStar Film – Part 1 of 4". Scifi Japan. Archived from the original on September 15, 2017. Retrieved May 23, 2017.
^ Kit, Borys (March 10, 2017). "'Godzilla vs. Kong' Film Sets Writers Room (Exclusive)". The Hollywood Reporter. Archived from the original on March 10, 2017. Retrieved March 10, 2017.
^ Schoellkopf, Christina (May 26, 2017). "Original 'Pirates of the Caribbean' Screenwriter on How a Budget Crisis Changed the Villains". The Hollywood Reporter. Archived from the original on May 27, 2017. Retrieved May 27, 2017.
^ Tyler, Jacob (March 5, 2019). "Godzilla vs. Kong Got Rewrites From Mike Dougherty & Zach Shields". Omega Underground. Archived from the original on March 13, 2019. Retrieved March 13, 2019.
^ Hipes, Patrick (June 8, 2017). "Zhang Ziyi Comes Aboard 'Godzilla' And Beyond". Deadline.com. Archived from the original on June 11, 2017. Retrieved June 8, 2017.
^ Perez, Lexy (June 2, 2018). "Deadpool 2 Star Julian Dennison Joins Godzilla vs. Kong". The Hollywood Reporter. Archived from the original on June 3, 2018. Retrieved June 3, 2018.
^ Murphy, Charles (June 1, 2018). "Exclusive: 'Deadpool 2's' Julian Dennison Joins 'Godzilla vs. Kong'". That Hashtag Show. Archived from the original on June 12, 2018. Retrieved June 1, 2018.
^ Fleming Jr., Mike (July 12, 2018). "Danai Gurira In Early 'Godzilla Vs. Kong' Talks As 'Star Trek' Also Looms For 'Walking Dead' & 'Black Panther' Star". Deadline. Archived from the original on July 13, 2018. Retrieved July 12, 2018.
^ Kroll, Justin (October 10, 2018). "Brian Tyree Henry to Co-Star With Millie Bobby Brown in 'Godzilla vs. Kong' (Exclusive)". Variety. Archived from the original on October 11, 2018. Retrieved October 10, 2018.
^ Kit, Borys (October 17, 2018). "Demián Bichir Joining Millie Bobby Brown in Godzilla vs. Kong". The Hollywood Reporter. Valence Media. Retrieved October 17, 2018.
^ Fleming Jr, Mike (October 25, 2018). "Alexander Skarsgård To Star In 'Godzilla Vs. Kong'". Deadline. Archived from the original on October 26, 2018. Retrieved October 25, 2018.
^ Fleming Jr, Mike (October 30, 2018). "Eiza Gonzalez Joins 'Godzilla Vs. King Kong'". Deadline Hollywood. Archived from the original on October 31, 2018. Retrieved October 30, 2018.
^ Kroll, Justin (October 30, 2018). "Rebecca Hall to Star Opposite Millie Bobby Brown in 'Godzilla vs. Kong' (EXCLUSIVE)". Variety. Archived from the original on October 31, 2018. Retrieved October 30, 2018.
^ Kroll, Justin (November 8, 2018). "'Game of Thrones' Actress Jessica Henwick Joins 'Godzilla vs. Kong' (Exclusive)". Variety. Retrieved November 8, 2018.
^ Fleming Jr, Mike (November 11, 2018). "Japanese Star Shun Oguri Makes Hollywood Debut In 'Godzilla Vs. Kong'". Deadline. Archived from the original on November 11, 2018. Retrieved November 11, 2018.
^ N'Duka, Amanda (November 14, 2018). "'Bosch' Actor Lance Reddick Cast in 'Godzilla vs Kong'". Deadline. Retrieved November 14, 2018.
^ Prasad, R.A. (May 3, 2018). "Warner Bros. And Legendary Pictures' Godzilla Vs. Kong Working Title Revealed". PureNews. Archived from the original on May 4, 2018. Retrieved May 3, 2018.
^ Marc, Christopher (July 11, 2018). "'Godzilla vs Kong' Heading Back To Australia and Hawaii - GWW". GWW. Archived from the original on October 11, 2018. Retrieved July 18, 2018.
^ Wu, Nina (November 17, 2018). "'Godzilla vs. Kong' filming in full swing with upcoming closures on Oahu". Star Advertiser. Retrieved November 17, 2018.
^ Caldwell, Felicity (January 18, 2019). "Godzilla vs. Kong begins filming on the Gold Coast". Brisbane Times. Archived from the original on January 19, 2019. Retrieved January 19, 2019.
^ "Godzilla sighting down under!". Moviehole. Editorial Staff. Retrieved 24 January 2019.
^ Libbey, Dirk (April 9, 2019). "Godzilla Vs. Kong Has Wrapped In Australia". Cinema Blend. Archived from the original on April 9, 2019. Retrieved April 9, 2019. |
Nokia has a lot riding on the success of the Nokia Lumia line. Nothing short of the survival of the firm is at stake. But only a tad more than a month since its U.S. launch, you can find Nokia Lumia models discounted heavily, with some carriers offering certain models for free with a signed two-year pact. Last month, the top-shelf Nokia Lumia 920 launched in the States as an AT&T exclusive for $99 on contract, while the Nokia Lumia 822, an exclusive to Verizon, was launched for $99 with a signed two-year contract. The Nokia Lumia 920 is still $99 at AT&T, but can be found for $49.99 at Amazon. The Nokia Lumia 822 is now free at Big Red, with a signed two-year pact. For what it's worth, the Nokia Lumia 920 is free from China Unicom with a 2-year handcuff, and the version of the phone designed for China Mobile's proprietary TD-SCDMA network is only 1 Yuan on contract.
For the most part, the Nokia Lumia 920 has received good reviews and there are fans of the whole line. Just the other day, billionaire Mark Cuban said that he replaced his Apple iPhone 5 with an unnamed Nokia Lumia model, saying that it "crushed" the Apple iPhone 5. Most likely, the outspoken Dallas Mavs owner sports the Nokia Lumia 920.
Other carriers are offering Nokia Lumia models for free, including T-Mobile, which is giving away the Nokia Lumia 810 to those who sign a two-year contract. The carrier said its deal was part of a limited-time offering, while Verizon wouldn't comment on its Nokia Lumia 822 pricing. Nokia really doesn't have much say in the pricing decisions. A spokesman for the Finland-based manufacturer, Doug Dawson, noted that "pricing is always a carrier decision, but holiday season promotions are fairly standard at this time of year." As if to prove that it isn't just Nokia phones being offered for free this holiday season, Dawson pointed out that some Samsung-branded phones are free in certain markets.
Nokia hasn't released sales figures for the line, and the fact that there are so many discounts on the manufacturer's Lumia models so soon after launch (despite the holiday season) has some analysts worried that sales figures are not as good as Wall Street expected.
What is the point of this article from the WSJ? Are they bored?
Is this article about Nokia phones being discounted (not in Nokia's hands) or about how you think they aren't selling many phones? Phone arena is good for adding drama to the article to make sure they get their link clicks. That last paragraph turned the article from informative to speculative.
@freebee269 2 thumbs up! When it comes to MS, most tech sites do like to stir the pot. My assumption is because of MS's radical departure from the mainstream, the modern UI. Society now lives in a collective. If it doesn't do what everyone else does, they reject it.
I like the Nokias but they sure are a heavy bunch of phones -- and yet look like toys. As a graphic designer I appreciate the colors and the fact that two, at least, are on AT&T's 4G LTE network. Great data speeds there. But that weight... Ugh! |
0.99999 | It's true that social interactions can be smoothed if people follow the same rules.
It's also true that social interactions can be smoothed if people assume good will on the part of other people they're interacting with, rather than making up other kinds of stories about them, such as that they are trying to be insulting or superior.
For example, a person can assume that someone means well but came from another culture where the politeness rules differ. A person can educate themself about other cultures' politeness rules and then use that knowledge to refine the stories that they make up about other people's behavior.
I think it's usually easier for a person to change the stories they make up about other people than to change other people's behavior. So if a person is getting upset partly because they are making assumptions that someone else is being rude or arrogant or self-important, changing the story they're making up might help them feel less upset.
In other cases, the behavior might bother them even if they know there are possibly good-will or legitimate reasons for it. Changing the stories might not help with that.
And sometimes the evidence becomes overwhelming that a person does intend to be insulting or does feel superior, in which case assuming good will might be counterproductive.
1) When a person doesn't say "Thank you" to a compliment, they might come from a culture with different rules about compliments or might be uncomfortable about what they were complimented on. It might not be because they are feigning humility.
5) If a person corrects another person, they might come from a culture where correcting a person is a sign of respect for that person. Maybe they are not trying to show the person up as stupid.
8) If a person shares their medical diagnosis, this might be an act of trust on their part, rather than an attempt to excuse themselves from following the rules. It might be part of an apology. Some people, when they apologize, start by explaining what led to their actions, and don't mean by the explanation that they should therefore be let off the hook for bad behavior.
9) If someone makes plans and doesn't show up, there might have been an emergency that prevented them from showing up. If someone is late, they might not be very good at estimating how much time it takes them to get somewhere.
15) If someone is sitting in the corner, maybe it's because they are disabled and that's where the host put a chair for them. Maybe it's because they are temporarily taking a break from the conversation. It's not necessarily because they think they're too important to make a social move.
18) If someone uses a calculator to figure the tip, maybe they find arithmetic difficult, or maybe they are from a culture that doesn't include tipping so they aren't used to it. It doesn't necessarily mean they are cheap.
20) If someone replies tersely to an electronic communication, they might be trying to show respect for another person's time (assuming that the person gets lots of e-mail and trying to minimize the amount of effort required to process the e-mail). They aren't necessarily being hostile.
plymouth just posted a fascinating metaphor in one of my friends' friends-locked posts.
Social groups are hollow spheres - everyone's on the edge and no one is in the middle. That's the theory I came up with a few years back to explain the fact that all my friends seem to think they're on the fringes somehow. I guess you could say there are different shells to the spheres and some people are in the inner shells and some in the outer shells. Kinda like atoms. People are electrons. Nobody is at the nucleus.
( ) Are you generally happy?
( ) Do you “enjoy” your job?
( ) Do you have time for hobbies?
Also, many people who scored "low" on the meme's "fortunate" scale said they were quite satisfied with their lives and thought they were very fortunate, thank you.
"This is the Google side of your brain"
I find it interesting that the USA Today article doesn't make the connection between the usefulness of search engines and the aging of the population. More often than before, words and facts I used to know temporarily go missing. If I'm at my computer, I can look 'em up again. |
0.965739 | The mission of the Dharma Academy of North America (DANAM) is to identify strategies for, and undertake, the recovery, reclamation and reconstitution of Dharma traditions for the contemporary global era, with initial focus on Hindu Dharma and subsequently on other Dharma traditions. It seeks to define the unifying vision that underlies all Dharma traditions as well as to communicate the rich diversity of Dharma philosophy and theology, by providing bridges between, and networks among, the practicing Dharma scholars and the Diaspora Dharma communities in North America. It aims to devise methods for the study and resolution of the problems caused by the juxtaposition of religious and national identities within a given cultural context by using the category of Dharma, in contrast to that of ‘Religion’, as the lens through which to view faith and belief systems.
The word religion is often employed to refer to the four ‘religions’ of Indian origin: Hinduism, Buddhism, Jainism and Sikhism, which consider themselves as dharma-s, or systems belonging to what might be called the network of Dharma traditions.
The use of the word religion, which arose in the context of Christianity and was subsequently secularized into global use, to denote the religions of Indian origin is, however, not unproblematic. Three features are closely associated with the concept and, therefore, the definition of religion in a Western context: (1) that it is 'conclusive', which is to say that it is the final religion; (2) that it is 'exclusionary', which is to say that those who do not belong to it are excluded from salvation; and (3) that it is 'separative', which is to say that one who belongs to it, separates oneself from allegiance to other religions.
The religions of Indian origin - the Dharma-s - do not share these features. They are non-conclusive, in the sense that they are not the only path to salvation; they are non-exclusionary, in the sense that their membership is a sufficient but not a necessary condition for salvation; and they are non-separative, in the sense that one need not necessarily negate one’s previous identity to join them, or to disown one’s culture, ancestry, or name.
It, therefore, makes more sense to refer to these ‘religions’ by the term 'Dharma' than by the term religion. At this point, the question might arise: what is the need to draw this distinction now?
Prior to the emergence of the academic study of religion in the 1860s, most of the communication taking place in religious studies, broadly speaking, followed an “insider-to-insider” pattern, namely, most Hindus wrote for an audience of other Hindus, Christians, for other Christians, and so on. However, as the West expanded imperially during the 17th and 18th centuries, Westerners began to write about the various religious traditions they encountered for the benefit of other Westerners, so that “outsider-to-outsider” also became a major mode of communication. With the establishment and spread of imperial educational systems in the colonized world, the colonized peoples themselves increasingly began to acquire knowledge about their own religious traditions through the works of Western scholars, so that communication also took place in an “outsider-to-insider” mode.
After the 1960s, with the end of the colonial era, the followers of the religious traditions in the formerly colonized nations began to react to the often unfavorable depiction of their religious traditions by outsiders, causing the emergence of an “insider-to-outsider” mode of communication. The growing sentiment in favor of using the proper term Dharma rather than “Hinduism” (or some other term or terms) to describe the ‘Hindu’ religious reality reflects this development.
The term ‘Religion’ is derived from the Latin word ‘religio,’ meaning ‘to bind again,’ which came to be interpreted as being bound again to sets of doctrines (or laws) and their respective founders, as opposed to God alone or the individual’s inner self. Thus, each religion holds that adherence to its doctrines and its founder is the only path to attain salvation, as mentioned earlier.
The term ‘Dharma’, like many other Sanskrit words, has no exact equivalent in English, so its exact translation is rather difficult. It has been variously translated as ‘religion’ (which strictly is incorrect, as described earlier in this section), ‘law,’ ‘duty,’ ‘religious rite,’ ‘code of conduct,’ etc. It can mean one or more or all of the latter, depending upon the context. The reason seems to be that the word itself has been used in various senses throughout the ages, and its meaning, as well as scope, has been expanded.
From the perspective of the Hindu tradition(s), Dharma is none other than the Supreme Being or Godhead (Brahman, Ishvara, or Paramaatma), or what the Upanishads describe as sat or tat, the very essence of one’s being. In addition, whatever conduct or way of life helps us to reveal this fundamental principle (that is, our inherent essence or nature) in us, can also be called dharma, though in a secondary sense. Hence, ‘religious’ rites, ceremonies and observances; fixed principles of conduct, privileges, duties and obligations of a person depending upon one's stage of life and status in society; and even rules of law, customs and manners of society — every one of these (categories) can be included under the term Dharma.
It may be instructive to note two more ancient words, rta and satya (truth), that are closely connected with, if not forms of, dharma. The word rta, used profusely in the Vedas, especially the Rgveda and Krishna Yajurveda, in its simplest form seems to indicate ‘a straight or direct line,’ and hence ‘universal laws of nature, an impersonal order.’ When extended to the ‘moral’ world, rta denotes ‘straight conduct’ based on truth, which itself is also dharma. Used in the sense of an inner awareness of what is true, as expressed through words and actions based on the scriptural teachings and the duties at hand, rta becomes satya (Truth). Thus, the meanings of all three words, rta, satya, and dharma, more or less coalesce.
The Sanskrit word for world is ‘jagat,’ literally meaning that which is continuously changing. If the world changes in a periodic (cyclic or pulsating) or phase-changing manner, without beginning (creation) and without end (destruction), then the question arises: what is the foundation on which this jagat is being continuously sustained? According to Chandogya Upanishad 6.2.1-3, ‘in the beginning sat alone existed, the One without a Second. It (sat) reflected, “May I become many! May I be born!”’ The ‘many’ that emerged needed a central integrating principle, or law; otherwise, chaos would result. This law or principle is ‘Dharma,’ which emerged from Godhead itself; per Shukla Yajur Veda, Brhadaranyaka Upanishad 1.4.14, Godhead ‘specially created that dharma, in the form of the highest good — therefore, there is nothing higher than dharma — verily, that which is dharma is satya’. This dharma is the firm foundation upon which the entire universe stands (‘dharmo visvasya jagatah pratistha,’ Mahanarayana Upanisad 79.7). Obviously here, dharma means righteous conduct based on truth (satya) and knowledge of the unity in spite of the diversity, capable of bringing the highest good to the whole of the cosmos (jagat). All other meanings, senses and derivations of Dharma in later literature are corollaries of this central idea.
From the perspective of the Buddhist tradition, the use of the term Dharma is instructive in that, at a basic level, Dharma is taken to mean “the teachings of the Buddha.” But these teachings are seen as embodying Truth itself. Thus, the deeper understanding of Dharma (in Pali, Dhamma) is linked to the foundation of Reality. The Buddhist Abhidhamma literature, for example, does a thoroughgoing enumeration and classification of what it calls the ‘dhamma-s’ (in Sanskrit, dharma-s). These are the fundamental patterns — including certain groups of spiritual qualities — which comprise the underlying networks that generate the processes of psychological and physical phenomena. The word “dhamma” is used to convey both the ongoing process of the constant arising and passing of events of fleeting duration and the events themselves. The Abhidhamma breaks down forms and phenomena into component dhamma-s that form the fundamental patterns nesting within the Greater Network that is the very nature of Ultimate Reality (Dhamma).
The concept of pratitya samutpada, Conditioned Arising (also referred to as dependent origination, and co-dependent co-arising), is fundamental to the Buddhist understanding of the nature of Ultimate Reality (Dhamma). The Majjhima Nikaya (1.191) states: “Whoever sees Conditioned Arising, sees Dhamma; whoever sees Dhamma, sees Conditioned Arising.” The experience of Conditioned Arising, engendered by meditative disciplines, is key to developing an appreciation for the interdependence (or, as Buddhist teacher Thich Nhat Hanh suggests, “interbeing”) of all component functions of physical reality. Thus the term Dharma-kaya (Dharma-body) of the tri-kaya doctrine of Mahayana Buddhism has a two-fold meaning, whereby it refers both to the ultimate “body” or form of Gautama Buddha (and, indeed, all Buddhas) and to the self-existent form (svabhavika-kaya) of tathata (things-as-they-are, thus-ness, such-ness), of sunyata (emptiness), the non-essential nature that is the true nature of all dhamma-s. As the Astasahasrika Prajna-paramita Sutra (307) proclaims, the such-ness of the Tathagata (Buddha) and the such-ness of all dharma-s are not two separate things but an undivided reality.
We believe, and the history of the past two millennia demonstrates, that Dharma, in contradistinction to Religion, provides an appropriate methodological and experiential lens by which to view and appreciate diversity. It is, therefore, worth examining whether the concept could be expanded beyond India to serve as a model for interfaith interactions, in general. |
0.999823 | Lurching from one weather extreme to another seems to have become routine across the Northern Hemisphere. Parts of the United States may be shivering in March, but Scotland is setting heat records. Across Europe, people died by the hundreds during a severe cold wave in the first half of February, but a week later revelers in Paris were strolling down the Champs-Élysées in their shirt-sleeves.
Does science have a clue what is going on? The short answer appears to be: not quite. The longer answer is that researchers are developing theories that, should they withstand critical scrutiny, may tie at least some of the erratic weather to global warming. Specifically, suspicion is focused these days on the drastic decline of sea ice in the Arctic, which is believed to be a direct consequence of the human release of greenhouse gases. |
0.958744 | It's time for our electrical grid system to get smart. More precisely, it's time for a smart grid.
Our electrical system consists of power-producing plants (coal, natural gas, oil, solar, wind, nuclear and hydro) and consumers of electricity (houses, schools, commercial buildings and industrial plants).
The producing plants generate electricity based upon anticipated demand, always ensuring that they produce more than the expected need.
In the traditional grid, we simply produce power at power generators and consume it at customers' locations. We measure how much electricity is used by each customer and that's the extent of our data collection.
Smart grids allow information to be generated at the customers' locations (such as current usage and historical consumption by day of the week, day of the month, month of the year, etc.). This information allows us to alter electricity production based on immediate changes (as well as more detailed forecasting) to react to demand more effectively.
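As a minimal illustration of the kind of data involved, the sketch below models per-customer meter readings and a naive hour-of-day demand forecast. All names here are hypothetical, invented for illustration, and not any vendor's actual API:

    from dataclasses import dataclass
    from datetime import datetime
    from statistics import mean

    @dataclass
    class MeterReading:
        customer_id: str
        timestamp: datetime
        kilowatts: float  # demand reported by the smart meter

    def forecast_demand(history: list[MeterReading], hour: int) -> float:
        """Naive forecast: average historical demand for the same hour of day."""
        same_hour = [r.kilowatts for r in history if r.timestamp.hour == hour]
        return mean(same_hour) if same_hour else 0.0

A real utility would use far richer models (weather, day of week, seasonality), but even this crude per-hour average is information a traditional grid never collects.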
In addition to improved information from other sources on the grid, a smart grid allows us to add more and more energy production points. These could include individual customers choosing to use solar or wind generation and industrial sites using excess heat from their processes to generate electricity.
Use of a smart grid will better enable producers to provide power to customers in an open market, and give power generating options directly to customers.
The risks associated with smart grids boil down to the same issues we face with all computer systems: security and stability.
With a traditional grid, so long as the power plant is working and the power lines and infrastructure are physically intact, the users have power. Adding computers to the system might mean that the power could be disabled due to a computer system interruption.
The benefits are enormous, so they're certainly worth the risk. However, those risks need to be managed. For example, we need to ensure that any computer systems tied to the smart grid, whether at a customer's location or a power generating point, meet certain security requirements and can be easily updated to protect against future security flaws. Due to the interconnected nature of the smart grid, any point of entry (a small producing station, for instance) could impact the entire grid. |
0.944776 | The Persian Gulf War, in which a coalition led by the United States drove Iraqi forces out of Kuwait in early 1991, was one of the most successful campaigns in history. At a cost of less than 300 Allied lives, coalition troops, whose military actions were largely funded by Saudi Arabia, drove out Saddam Hussein's forces. Thousands of Iraqi lives were lost in the process, however. In their victory, the coalition depended in large part on advances in military technology by the United States, whose arsenal included tools ranging from the F-117A stealth fighter to the M1A1 Abrams tank, and from the Global Positioning System (GPS) to unmanned drones and Patriot missiles. Less clearly successful was U.S. intelligence, which had failed to predict the war. Equally questionable was the ultimate outcome of the war, whose scores would not fully be settled until 12 years later.
The Persian Gulf War is sometimes called simply the Gulf War or Operation Desert Storm, after the U.S.-led campaign that comprised the bulk of the fighting. It may ultimately come to be known as "Gulf War II," or "Persian Gulf War II," with the 2003 operation in Iraq becoming the third in this series. The first, also known as the Iran-Iraq War, lasted from 1980 to 1988, and pitted the dictatorship of Saddam Hussein against the Islamic theocracy in Iran.
Both regimes had taken power in 1979, but the conflict concerned long-standing disputes involving lands on the borders between the two nations. In the ensuing hostilities, most nations—including much of the Arab world, the United States, western Europe, and the Soviet bloc—supported Iraq, generally regarded as the lesser of two evils. (Both the Americans and the Soviets also gave covert support to the Iranians.) The war, which cost some 850,000 lives, resulted in a stalemate, and both nations built monuments to their alleged victories.
In the aftermath of the first Gulf War, analysts working for the U.S. Central Intelligence Agency (CIA) prepared a report on the likelihood of Iraqi aggression in the near future. According to the now-infamous study, Saddam had so overextended his resources in the war with Iran that he would not take any major aggressive action for at least three years. In this instance, the CIA underestimated Saddam's penchant for military adventurism.
On August 2, 1990, without advance warning, Iraqi tanks and troops rolled into neighboring Kuwait. Both nations possessed considerable oil wealth, but Kuwait was by far the richer of the two, and Iraq—particularly under Saddam's regime—had long had designs on Kuwait. Given the importance of oil from the Persian Gulf region, which at that time fueled a great part of the world, neither the United States nor the United Nations (UN) Security Council was inclined to ignore Hussein's aggressive action.
The Security Council on August 3 called for an Iraqi withdrawal, and on August 6 it imposed a worldwide ban on trade with Iraq. On August 5, President George H. W. Bush declared that the invasion "will not stand," and a day later, King Fahd of Saudi Arabia met with U.S. Defense Secretary Richard Cheney to request military assistance. Saudi Arabia, Japan, and other wealthy allies would underwrite most of the $60 billion associated with the resulting military effort. By August 8, U.S. Air Force fighters were in Saudi Arabia.
Numerous countries were involved in the military buildup during late 1990, a program known as Operation Desert Shield. By January 1991, the United States alone had some 540,000 troops, along with another 160,000 from the United Kingdom, France, Egypt, Saudi Arabia, Syria, Kuwait, and other nations. On November 29, 1990, the Security Council authorized use of force against Iraq unless it withdrew its troops by January 15. Saddam's only response was to continue building his troop strength in Kuwait, such that by the time the Allies counterattacked, he had some 300,000 men on the ground.
On January 17, 1991, Operation Desert Shield became Operation Desert Storm, which consisted largely of bombing campaigns against Iraq's command and control, infrastructure, and military assets. In retaliation, Iraq attacked Israel with Scud missiles on January 18. A great portion of the Allied losses occurred in this initial phase, when the Iraqis shot down several low-flying U.S. and British planes.
After thus severing the tail of the invading force, the Allies in February began concentrating on Iraqi positions in Kuwait. Having initially planned an amphibious landing, Allied commander General H. Norman Schwarzkopf instead opted for an armored assault. On February 24, in a campaign phase named Operation Desert Sabre, Allied troops moved northward from Saudi Arabia and into Kuwait. By February 27, they had taken Kuwait City.
At the same time, operations in Iraq itself continued. In the only major bombing run on the capital city of Baghdad, Stealth fighters struck Iraqi intelligence headquarters, while U.S. Army Special Forces teams inserted themselves deep in Iraq. In the southern part of the country, U.S. tanks pounded Iraqi armored reserve forces, while Allied ground forces neutralized Hussein's "elite" Republican Guard south of Basra. President Bush declared a cease-fire on February 28.
Image: A line of captured Iraqi soldiers are marched through the desert in Kuwait past a group of U.S. Marine vehicles during the 1991 Persian Gulf War.
The war had lasted 42 days, and the final ground campaign took just over 100 hours, while the bombing that preceded it had begun in mid-January. Credit for this extraordinary success goes to a number of factors, not least of which was strong leadership. On the military side, there was Schwarzkopf on the ground, and in Washington, General Colin Powell, Chairman of the Joint Chiefs of Staff, who served as the principal military spokesman during the war. In this, the first major U.S. action since the end of fighting in Vietnam nearly two decades earlier, the performance of both leaders and troops showed that military capabilities had improved extraordinarily since then.
Among the civilian leaders were Cheney, Secretary of State James Baker, National Security Advisor Brent Scowcroft, and President Bush. The president, sometimes criticized for a failure to communicate his aims to his subordinates or the public as a whole, was quite clear in his objectives for the Persian Gulf War. On January 15, 1991, Bush sent his principal security advisors a memorandum which outlined four major aims: to force an Iraqi withdrawal from Kuwait, to restore Kuwait's government, to protect American lives, and to promote stability and security in the Gulf region.
Another factor in the success—and another point of comparison with Vietnam—was the near-unanimous support for the action. Whereas American allies and foes alike questioned the value of the action in Vietnam, virtually no one other than Saddam's regime (along with a handful of antiwar protestors at home) opposed the U.S. effort to liberate an invaded nation. This support was helped rather than hurt by an unprecedented level of television coverage. While Vietnam became known as "the first televised war," TV reporting in the 1960s and 1970s was minimal compared to the round-the-clock reportage offered by cable outlets, most notably the Cable News Network (CNN), in 1990 and 1991.
The U.S. arsenal. While human factors deserve a great deal of credit for the success of Allied operations in the Persian Gulf War, the war would not have been won as efficiently without the technological superiority offered by modern weaponry. Among the tools in the U.S. arsenal were a variety of aircraft, including the AH-64 Apache helicopter, the leading anti-armor attack chopper. Introduced in 1984, the Apache could operate in conditions of darkness or low visibility, and was made to sustain heavy pounding from antiaircraft guns.
The E-3 Sentry AWACS (airborne warning and control system) was a masterpiece of modern technology. Packed with electronics, the aircraft—based on the Boeing 707 and introduced in 1977—was made to identify enemy aircraft, jam enemy radar, guide bombers to their targets, and manage the flow of friendly aircraft. Even more cutting-edge were the Pointer and Pioneer drones, or remotely piloted vehicles (RPVs).
Based on Israeli designs and first used by the United States during the war, the RPVs served as airborne spy platforms. The Pioneer, with a range of about 100 miles (161km) and a flight duration of five hours, could take high-definition pictures from 2,000 feet (610 meters) and transmit them to a processing center. In addition to its video cameras, it was equipped with infrared heat sensors, and provided a wealth of intelligence on everything from enemy troop movements to the recommended path for Tomahawk cruise missiles.
Other aircraft included the B-52 Stratofortress bomber, the F-117A Stealth fighter, and the E-8G JSTARS surveillance aircraft. Among the other notable weapons used in the Persian Gulf War were the M1A1 Abrams tank, the Bradley Fighting Vehicle, the MIM-104 Patriot missile defense system, and the Tomahawk cruise missile. High above the ground was the GPS, whose 24 satellites helped soldiers find their bearings in the desert, and assisted artillery in targeting.
Controversies. More controversial than the role of weapons systems was that of intelligence in the Persian Gulf War. The CIA did not inspire a great deal of confidence, either with its initial estimate of Iraqi intentions or with its August 1996 "Final Report on Intelligence Related to Gulf War Illnesses." In the wake of illnesses that broke out among returning personnel, the CIA sought to investigate the connection between these conditions and Iraqi use of chemical or biological agents. The CIA report found no evidence that Iraq had intentionally used such weapons against the United States, even though Saddam used chemical weapons against rebellious Kurds in the north.
More successful was the performance of Defense Department intelligence and related activities, both on the part of the Defense Intelligence Agency (DIA) and various military intelligence and psychological warfare units. DIA began operations in Iraq long before the war, and regularly gathered intelligence reports that proved invaluable to military leadership. The same was true of military intelligence units, while psychological operations had an immeasurable impact by coercing Iraqis to provide the Allies with intelligence on their forces' activities and capabilities.
In addition to controversies over the success of intelligence, there remained questions concerning the success of the war as a whole. This fact was symbolized by the failure of Bush—who, after the war, had the highest poll numbers of any U.S. President since scientific polling began—to gain reelection in 1992. Ironically, Saddam Hussein, who many U.S. leaders had expected to be toppled in the unrest that followed the war, remained in power despite UN sanctions and the imposition of a no-fly zone over the northern and southern portions of the country. Among the factors cited for Bush's sudden loss of popularity from mid-1991 onward (in addition to an economic slowdown and clever campaigning by challenger William J. Clinton) was his failure to remove Saddam Hussein. However, as Bush rightly noted, such action was not within his mandate from the UN.
In 1993, the CIA uncovered evidence that Saddam Hussein had attempted to assassinate Bush, in response to which U.S. warships fired 23 cruise missiles at Iraqi secret service headquarters. The years that followed saw a lengthy process of UN and U.S. attempts to find weapons of mass destruction thought to be hidden in Iraq continually thwarted by Saddam Hussein. When he evicted UN inspectors in 1998, the United States and United Kingdom launched a four-day bombing campaign, Desert Fox, against Iraq.
Although overt evidence was lacking, some in the U.S. intelligence and defense communities suspected Iraqi ties to the 1993 World Trade Center bombing, and after the 2001 destruction of those buildings, President George W. Bush indicated that the attacks had been sponsored or at least abetted by Iraq. In March 2003, the United States launched Operation Iraqi Freedom, a land invasion of Iraq. Though many putative experts claimed that the campaign would not be as successful as the Persian Gulf War, this one—while much less popular globally—was actually shorter, and achieved something the earlier war did not: the removal of Saddam Hussein from his position of leadership. Assisting the younger Bush were several figures from the Persian Gulf War, including Cheney and Powell, now vice president and secretary of state respectively. |
0.985254 | Beijing-Shanghai high-speed rail will be listed, and it is said to be the world's most profitable high-speed rail. China Railway has confirmed that it has selected brokers.
On November 8, according to Caixin.com, Mao Bingren, director of the operation and development department of China Railway Corporation, revealed that brokers have now been selected for the listing of Beijing-Shanghai High-speed Rail Co., Ltd. (the Beijing-Shanghai High-speed Rail). However, it is not yet certain whether it will list in Shanghai or Shenzhen, and there is no clear timetable for the listing.
At today's "2018 China International Railway and Urban Rail Transit Conference," Mao Bingren also said that the Beijing-Shanghai high-speed rail achieved a profit of about 10 billion yuan in 2017 and is currently undergoing shareholding reform ahead of its listing.
For China Railway Corporation, the Beijing-Shanghai high-speed rail is one of its most valuable assets. The project was approved by the State Council with a planned investment of 220.9 billion yuan; Beijing-Shanghai High-speed Railway Co., Ltd. was established in 2007, the line opened in 2011, and it began to make a profit in 2014.
The Beijing-Shanghai high-speed rail has been called the "world's most profitable high-speed rail." Its financial position is reflected in a bond statement disclosed by Tianjin Tietou in 2016: in 2015, the company's operating income reached 23.424 billion yuan and its net profit was 6.581 billion yuan, while its total assets were 181.539 billion yuan, with an asset-liability ratio of 27.74%. |
0.982644 | Network Time Protocol (NTP) is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks.
In operation since before 1985, NTP is one of the oldest Internet protocols in use. NTP was originally designed by David L. Mills of the University of Delaware, who still develops and maintains it with a team of volunteers.
NTP is intended to synchronize all participating computers to within a few milliseconds of Coordinated Universal Time (UTC). It uses a modified version of Marzullo's algorithm to select accurate time servers and is designed to mitigate the effects of variable network latency. NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions. Asymmetric routes and network congestion can cause errors of 100 ms or more.
The protocol is usually described in terms of a client-server model, but it can as easily be used in peer-to-peer relationships where both peers consider the other to be a potential time source. Implementations send and receive timestamps using the User Datagram Protocol (UDP) on port number 123. They can also use broadcasting or multicasting, where clients passively listen to time updates after an initial round-trip calibrating exchange. NTP supplies a warning of any impending leap second adjustment, but no information about local time zones or daylight saving time is transmitted.
As of June 2010, the current protocol is version 4 (NTPv4), which is a proposed standard as documented in RFC 5905. It is backwards compatible with version 3, specified in RFC 1305.
NTP uses a hierarchical, semi-layered system of time sources. Each level of this hierarchy is termed a "stratum" and is assigned a number starting with zero at the top. The number represents the distance from the reference clock and is used to prevent cyclical dependencies in the hierarchy. Stratum is not always an indication of quality or reliability; it is common to find stratum 3 time sources that are higher quality than other stratum 2 time sources. Telecommunication systems use a different definition for clock strata.
Stratum 0: These are devices such as atomic (cesium, rubidium) clocks, GPS clocks or other radio clocks. They generate a very accurate pulse-per-second signal that triggers an interrupt and timestamp on a connected computer. Stratum 0 devices are also known as reference clocks.
Stratum 1: These are computers whose system clocks are synchronized to within a few microseconds of their attached stratum 0 devices. Stratum 1 servers may peer with other stratum 1 servers for sanity checking and backup. They are also referred to as primary time servers.
Stratum 2: These are computers that are synchronized over a network to stratum 1 servers. Often a stratum 2 computer will query several stratum 1 servers. Stratum 2 computers may also peer with other stratum 2 computers to provide more stable and robust time for all devices in the peer group.
Stratum 3: These are computers that are synchronized to stratum 2 servers. They employ exactly the same algorithms for peering and data sampling as stratum 2, and can themselves act as servers for stratum 4 computers, and so on.
The 64-bit timestamps used by NTP consist of a 32-bit part for seconds and a 32-bit part for the fractional second, giving a time scale that rolls over every 2^32 seconds (136 years) and a theoretical resolution of 2^-32 seconds (233 picoseconds). NTP uses an epoch of January 1, 1900. The first rollover occurs in 2036, prior to the UNIX year 2038 problem.
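As a worked example of that arithmetic (a sketch, not text from the NTP specification), converting an era-0 NTP timestamp to Unix time only requires subtracting the 2,208,988,800 seconds between the 1900 and 1970 epochs:

    NTP_TO_UNIX_OFFSET = 2_208_988_800  # seconds from 1900-01-01 to 1970-01-01

    def ntp_to_unix(seconds: int, fraction: int) -> float:
        """Convert a 64-bit era-0 NTP timestamp (32-bit seconds plus
        32-bit fraction of a second) to Unix time as a float."""
        return seconds - NTP_TO_UNIX_OFFSET + fraction / 2**32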
Future versions of NTP may extend the time representation to 128 bits: 64 bits for seconds and 64 bits for the fractional second. The current NTPv4 format has support for an Era Number and Era Offset which, when used properly, should help fix date rollover issues. According to Mills, "the 64 bit value for the fraction is enough to resolve the amount of time it takes a photon to pass an electron at the speed of light. The 64 bit second value is enough to provide unambiguous time representation until the universe goes dim."
A less complex implementation of NTP, using the same protocol but without requiring the storage of state over extended periods of time, is known as the Simple Network Time Protocol (SNTP). It is used in some embedded devices and in applications where high accuracy timing is not required.
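To make the wire format concrete, here is a minimal SNTP client sketch in Python. It assumes a reachable server (pool.ntp.org is used purely as an example) and, unlike full NTP, makes no attempt to correct for round-trip delay:

    import socket
    import struct
    import time

    NTP_TO_UNIX_OFFSET = 2_208_988_800  # seconds between the 1900 and 1970 epochs

    def sntp_query(server: str = "pool.ntp.org", timeout: float = 5.0) -> float:
        """Send a 48-byte SNTP request on UDP port 123 and decode the
        server's transmit timestamp from the reply."""
        # First byte: LI = 0, VN = 4 (NTPv4), Mode = 3 (client).
        packet = b"\x23" + 47 * b"\x00"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, 123))
            data, _ = sock.recvfrom(48)
        # The transmit timestamp is the last 8 bytes of the 48-byte packet.
        seconds, fraction = struct.unpack("!II", data[40:48])
        return seconds - NTP_TO_UNIX_OFFSET + fraction / 2**32

    print(time.ctime(sntp_query()))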
All Microsoft Windows versions since Windows 2000 and Windows XP include the Windows Time Service ("w32time"), which has the ability to sync the computer clock to an NTP server. The version in Windows 2000 and Windows XP only implements Simple NTP, and violates several aspects of the NTP version 3 standard. Beginning with Windows Server 2003 and Windows Vista, a compliant implementation of full NTP is included.
On the day of a leap second event, ntpd receives notification from either a configuration file, an attached reference clock or a remote server. Because of the requirement that time must appear to be monotonically increasing, a leap second is inserted with the sequence 23:59:59, 23:59:60, 00:00:00. Although the clock is actually halted during the event, any processes that query the system time cause it to increase by a tiny amount, preserving the order of events. If it should ever become necessary, a leap second would be deleted by skipping 23:59:59.
NTP servers are susceptible to man-in-the-middle attacks unless packets are cryptographically signed for authentication. The computational overhead involved can make this impractical on busy servers, particularly during denial of service attacks.
Only a few security problems have been identified in the reference implementation of the NTP codebase in its 25+ year history. The protocol has been undergoing revision and review over its entire history. As of January 2011, there are no security revisions in the NTP specification and no reports at CERT. The current codebase for the reference implementation has been undergoing security audits from several sources for several years now, and there are no known high-risk vulnerabilities in the current released software.
Several NTP server misuse and abuse practices exist which cause damage or degradation to a Network Time Protocol (NTP) server. |
0.97374 | Why are drone flights near wildfires illegal?
As a drone pilot, capturing a unique aerial shot like a burning forest fire is naturally tempting, but the presence of UAVs while firefighters are battling wildfires has resulted in grounded helicopters because of safety concerns. Due to protocol, firefighters are forced to cease potentially life-saving operations because of unauthorized drone flights. Firefighters can lose minutes or even hours over drone flights that first need to be sorted out before operations can resume.
Screenshot from the FAA’s post on Twitter. |
0.973362 | Tomorrow never comes Until It’s Too Late - Power of Now!
And how do we ‘live in the moment’?
This book has proved to be a manual for me; by the first chapter alone I was already more conscious of how one's mindset, emotions and thoughts hinder our ability to live in solitude.
To summarize my main takeaway from the book in 1 sentence would be: every minute we spend worrying about what is yet to come (the future) and what has gone by (the past)- is a minute lost from what is now (the present).
Every person experiences the 'enough is enough' moment in his or her life, and this book is primarily about that. Eckhart teaches his readers the true meaning of peace, tranquility and spirituality, and how significant it is to cherish the present moment.
The book itself includes various exercises for anchoring our minds in the present moment, and it goes beyond a superficial explanation of positive thinking.
We let our chaotic, troubled and egotistical minds destroy our lives and therefore, there is a dire need for us to condition ourselves to look past the pain, agony and anxiety. We must balance our lives by living each moment as it comes.
I feel anyone who feels stuck in the past, has trouble controlling their thoughts and quite simply put- anyone who feels they are struggling to feel true happiness, should read this book.
Our generation is flooded with a million thoughts: we have worried minds, a strong tendency to feel stressed about every petty issue and a flair for complaining a lot!
Eckhart reinforces zen teachings through his narration to make the readers consciously aware of how meditation can aid one’s pursuit of solitude.
Some people may not believe in the merits of meditation and may undermine the value of philosophical lessons, but believe it or not, it helps improve focus and regulate the pace of our day-to-day life.
You will learn how to remain consciously present in the current moment every day of your life! Be more aware of your surroundings and environment.
Most importantly, you will recognize how our mind creates 'barriers' on its own, for example emotions of pain, suffering and longing, as a means of identifying one's self with the past or yearning for the future rather than just living in the moment.
As individuals we must train ourselves to disassociate from our problems and ego, thereby preventing chaotic, negative emotions from bombarding our mindset.
Similarly, this book teaches how one can engage in active listening to focus on what someone is saying rather than getting swept away by the thoughts in our own head.
It’s essential that we start accepting the ups and downs and flaws of life- they will always be there so we must develop an ability to look beyond them and tie our happiness to the present moment.
As humans it's in our nature to dwell on the past and look to the future to save us.
For example, a person may feel scarred because of a past trauma, a bad breakup, or the loss of a job or a loved one; this is how the vicious cycle begins, and then he or she waits for the future to come to the rescue in the form of a savior, a better opportunity or something else that will save them.
Getting tangled in the past and waiting for the future does nothing but compromise on the present.
As Tolle suggests, the only way to put an end to one’s misery is by focusing on ‘now’. Take charge of the present moment and redirect your thoughts and energy into focusing on what can be done now to improve the situation.
Drop the weight of the past and worry of the future and re-center your emotions towards feeling what is happening in this very moment.
If you get caught in the vicious cycle, it is nothing but a bottomless pit which will only leave you with more stress and anxiety. Use lessons from the past to help you take better decisions in the present moment.
Being ambitious and hopeful for a better future is no sin, but don’t let this turn you into a daydreamer of what ‘may’ happen so much so that it takes over your life!
It’s good to keep a check on what has happened and prepare for a stronger tomorrow but this should not come at a cost of your present.
It’s imperative that one learns to switch off, unwind and ‘feel’ true happiness. Breakaway from the constant stream of regret, anxiety and stress fueled by the preoccupation with the past and the future.
Take ownership of what you have ‘now’ and do things that matter and make a difference.
Don’t over plan and fret about things that need to be done- just get down towards accomplishing them.
Don’t create self-inflicting pain by resisting change – take each day as it comes and keep yourself grounded to your mantra of staying true to yourself and your happiness.
0.954025 | What's the best way to handle a matching donation?
We often receive annual appeal donations that include a matching gift from the donor's employer. Is there a better way to handle these other than creating a record for the employer, and soft-crediting the match to the donor?
This is highly dependent on your development department's workflow - but I'd say what you described is the most popular approach.
If you do go this route, you may want to consider installing an extension I wrote called Auto Matching Gift. It's mostly a labor-saver: When you enter the original gift, you select the Matching Gift organization from a list. A pending contribution will be created on their record, and it will be soft-credited to the original donor.
Note that this was developed for an organization that pays for a service listing all organizations that match gifts - so if you have to create the organization first, it's not much of a time-saver. Hopefully one day I (or someone else) will improve it to allow adding organizations on the fly. |
0.989999 | What is a standard homeowners policy?
A standard homeowners policy (HO-3 policy) will protect you in the event of fires, theft, accidents, or other disasters (flood and earthquake coverage requires additional coverage). Homeowners are often required by their lender to carry a standard policy to protect the mortgage company’s investment. It’s important to note that a standard policy is not a blank check—there’s a limit to how much you’ll be compensated. |
0.971464 | How is solid manure applied to cropland?
The most common equipment for applying solids to the land is a rear-discharge, box-type spreader equipped with beaters that broadcast the manure over a width of several feet (see Image 1).
Usually, the manure is conveyed to the beaters at the rear by slats attached at each end to a sprocket-driven chain. Some use a powered front end-gate to push the material to the beaters at the rear. To handle semisolid manure, a tight-fitting, closable rear end-gate is required.
Some spreaders have a side discharge; most of these have V-shaped hoppers and feed the material to the discharge with augers. A rotating expeller slings the material out of the discharge port. The application rate is varied by an adjustable gate opening, usually operated by a hydraulic cylinder.
Flail-type spreaders have a semicircular hopper bottom and a rotating shaft with chain-suspended hammers to fling the material from the hopper. The flail-type and the side-discharge spreaders are adapted to both semisolid and solid manure.
Image 1: Broadcasting manure on cropland.
Manure spreaders may be tractor-drawn models or they may be mounted on a truck. Most tractor-drawn spreaders are PTO operated, but some are driven from the ground wheels. Some are hydraulically powered for greater speed variation, especially for the apron drive, to vary the application rate. In the past, spreader capacities varied from about 30 to 400 cubic feet with tractor horsepower requirements ranging from 10 to more than 120.
Authors: Jon Rausch, Ohio State University and Ted Tyson, Auburn University. |
0.988939 | What happens when a child is given access and time to pursue his fascination with painting flowers?
In Spring, as blossoms began to appear on bushes and trees, both children and teachers would sometimes bring in a blooming branch or stem. We often placed these natural artifacts next to one of the easels (there are usually two double-sided easels in use) as an invitation. Some children painted representations of the blossoms, and others ignored them. But one child came back over and over again throughout his play to look closely and paint several times. Seeing this, we provided magnifying glasses, and he noticed tiny details. Other children noticed his ongoing work; they watched and often commented, supporting his enthusiasm.
A portfolio of images began to emerge as the child's confidence built, and studio materials began to be seen for all the possibilities that they might offer, a validation of our belief in the environment as the third teacher, and in the possibilities of materials. |
0.999998 | My sister and I put this outfit together for a hipster boat cruise she was going on.
This skirt from Forever 21 is an absolute knock-out choice for virtually any occasion involving alcohol. It makes me want to say ... "Oh, na na. What's my name." - If you catch my drift.
We paired it with a simple, racerback silk tank and a casual jean jacket for when the sun set on the water.
The punch of colour added by the adorable Marc By Marc Jacobs cross-body bag, paired with the heavy dose of necklace artillery, adds just enough quirky polish to the otherwise-casual outfit.
Personally, I can't WAIT to re-wear this skirt with a white silk tank, black blazer and platform pumps for a downtown-for-drinks kind of night.
Now, take me to my yacht!
I apologize in advance, but I have to.
This skirt screams one thing and one thing only. TRIBAL. I think the last sentence should read: Now, get me some turquoise jewelry! |
0.98042 | Accused of cheating in college? Top 3 tips from an attorney for students.
Students who are accused of cheating in college may feel pressured to try to resolve the issue quickly. Before doing that, read a student defense attorney's top 3 tips for anyone accused of cheating.
Be careful about what you say or write to university staff - think before you speak. Does your defense make sense? Is there any other information you could use to support your position?
Read your student handbook - look for the honor code or academic integrity section. This will outline the process you may face if a professor thinks you cheated.
The university is not always your friend - anything you say or write in an attempt to make the situation go away may be used against you in an academic committee hearing. |
0.995138 | Summary In Wireshark 2.4.0 to 2.4.1, 2.2.0 to 2.2.9, and 2.0.0 to 2.0.15, the DMP dissector could crash. This was addressed in epan/dissectors/packet-dmp.c by validating a string length.
description New version 2.4.2, fixes CVE-2017-15189, CVE-2017-15190, CVE-2017-15191, CVE-2017-15192, CVE-2017-15193, CVE-2017-13764, CVE-2017-13765, CVE-2017-13766, CVE-2017-13767 Note that Tenable Network Security has extracted the preceding description block directly from the Fedora update system website. Tenable has attempted to automatically clean and format it as much as possible without introducing additional issues.
description The version of Wireshark installed on the remote MacOS(X) host is 2.2.x prior to 2.2.10. It is, therefore, affected by multiple denial of service vulnerabilities in the DMP, BT ATT and MBIM dissectors. An unauthenticated, remote attacker can exploit this by injecting a malformed packet onto the wire or by convincing someone to read a malformed packet trace file. Note that Nessus has not tested for these issues but has instead relied only on the application's self-reported version number.
description The version of Wireshark installed on the remote MacOS(X) host is 2.0.x prior to 2.0.16. It is, therefore, affected by a denial of service vulnerability in the DMP dissector. An unauthenticated, remote attacker can exploit this by injecting a malformed packet onto the wire or by convincing someone to read a malformed packet trace file. Note that Nessus has not tested for these issues but has instead relied only on the application's self-reported version number.
description This update for wireshark fixes the following issues: Wireshark was updated to 2.2.10, fixing security issues and bugs : - CVE-2017-15191: DMP dissector crash (wnpa-sec-2017-44) - CVE-2017-15192: BT ATT dissector crash (wnpa-sec-2017-42) - CVE-2017-15193: MBIM dissector crash (wnpa-sec-2017-43) Note that Tenable Network Security has extracted the preceding description block directly from the SUSE security advisory. Tenable has attempted to automatically clean and format it as much as possible without introducing additional issues.
description This update for wireshark to version 2.2.11 fixes several issues. These security issues were fixed : - CVE-2017-13767: The MSDP dissector could have gone into an infinite loop. This was addressed by adding length validation (bsc#1056248) - CVE-2017-13766: The Profinet I/O dissector could have crash with an out-of-bounds write. This was addressed by adding string validation (bsc#1056249) - CVE-2017-13765: The IrCOMM dissector had a buffer over-read and application crash. This was addressed by adding length validation (bsc#1056251) - CVE-2017-9766: PROFINET IO data with a high recursion depth allowed remote attackers to cause a denial of service (stack exhaustion) in the dissect_IODWriteReq function (bsc#1045341) - CVE-2017-9617: Deeply nested DAAP data may have cause stack exhaustion (uncontrolled recursion) in the dissect_daap_one_tag function in the DAAP dissector (bsc#1044417) - CVE-2017-15192: The BT ATT dissector could crash. This was addressed in epan/dissectors/packet-btatt.c by considering a case where not all of the BTATT packets have the same encapsulation level. (bsc#1062645) - CVE-2017-15193: The MBIM dissector could crash or exhaust system memory. This was addressed in epan/dissectors/packet-mbim.c by changing the memory-allocation approach. (bsc#1062645) - CVE-2017-15191: The DMP dissector could crash. This was addressed in epan/dissectors/packet-dmp.c by validating a string length. (bsc#1062645) - CVE-2017-17083: NetBIOS dissector could crash. This was addressed in epan/dissectors/packet-netbios.c by ensuring that write operations are bounded by the beginning of a buffer. (bsc#1070727) - CVE-2017-17084: IWARP_MPA dissector could crash. This was addressed in epan/dissectors/packet-iwarp-mpa.c by validating a ULPDU length. (bsc#1070727) - CVE-2017-17085: the CIP Safety dissector could crash. This was addressed in epan/dissectors/packet-cipsafety.c by validating the packet length. (bsc#1070727) Note that Tenable Network Security has extracted the preceding description block directly from the SUSE security advisory. Tenable has attempted to automatically clean and format it as much as possible without introducing additional issues.
description The version of Wireshark installed on the remote Windows host is 2.2.x prior to 2.2.10. It is, therefore, affected by multiple denial of service vulnerabilities in the DMP, BT ATT and MBIM dissectors. An unauthenticated, remote attacker can exploit this by injecting a malformed packet onto the wire or by convincing someone to read a malformed packet trace file. Note that Nessus has not tested for these issues but has instead relied only on the application's self-reported version number.
description The version of Wireshark installed on the remote Windows host is 2.0.x prior to 2.0.16. It is, therefore, affected by a denial of service vulnerability in the DMP dissector. An unauthenticated, remote attacker can exploit this by injecting a malformed packet onto the wire or by convincing someone to read a malformed packet trace file. Note that Nessus has not tested for these issues but has instead relied only on the application's self-reported version number.
description The version of Wireshark installed on the remote Windows host is 2.4.x prior to 2.4.2. It is, therefore, affected by multiple denial of service vulnerabilities in the DOCSIS, RTSP, DMP, BT ATT and MBIM dissectors. An unauthenticated, remote attacker can exploit this by injecting a malformed packet onto the wire or by convincing someone to read a malformed packet trace file. Note that Nessus has not tested for these issues but has instead relied only on the application's self-reported version number.
description wireshark developers reports : In Wireshark 2.4.0 to 2.4.1, the DOCSIS dissector could go into an infinite loop. This was addressed in plugins/docsis/packet-docsis.c by adding decrements. In Wireshark 2.4.0 to 2.4.1, the RTSP dissector could crash. This was addressed in epan/dissectors/packet-rtsp.c by correcting the scope of a variable. In Wireshark 2.4.0 to 2.4.1, 2.2.0 to 2.2.9, and 2.0.0 to 2.0.15, the DMP dissector could crash. This was addressed in epan/dissectors/packet-dmp.c by validating a string length. In Wireshark 2.4.0 to 2.4.1 and 2.2.0 to 2.2.9, the BT ATT dissector could crash. This was addressed in epan/dissectors/packet-btatt.c by considering a case where not all of the BTATT packets have the same encapsulation level. In Wireshark 2.4.0 to 2.4.1 and 2.2.0 to 2.2.9, the MBIM dissector could crash or exhaust system memory. This was addressed in epan/dissectors/packet-mbim.c by changing the memory-allocation approach.
description The version of Wireshark installed on the remote MacOS/MacOSX host is 2.4.x prior to 2.4.2. It is, therefore, affected by multiple denial of service vulnerabilities in the DOCSIS, RTSP, DMP, BT ATT and MBIM dissectors. An unauthenticated, remote attacker can exploit this by injecting a malformed packet onto the wire or by convincing someone to read a malformed packet trace file. Note that Nessus has not tested for these issues but has instead relied only on the application's self-reported version number. |
0.999999 | A: I got locked out. B: Oh, how did it happen?
A: I locked myself out. |
0.9995 | Pre-eclampsia and risk of cardiovascular disease and cancer in later life: systematic review and meta-analysis.
Objective: To quantify the risk of future cardiovascular diseases, cancer, and mortality after pre-eclampsia. Design: Systematic review and meta-analysis. Data sources: Embase and Medline without language restrictions, including papers published between 1960 and December 2006, and hand searching of reference lists of relevant articles and reviews for additional reports. Studies reviewed: Prospective and retrospective cohort studies were included, providing a dataset of 3,488,160 women, with 198,252 affected by pre-eclampsia (exposure group) and 29,495 episodes of cardiovascular disease and cancer (study outcomes). Results: After pre-eclampsia women have an increased risk of vascular disease. The relative risks (95% confidence intervals) for hypertension were 3.70 (2.70 to 5.05) after 14.1 years weighted mean follow-up, for ischaemic heart disease 2.16 (1.86 to 2.52) after 11.7 years, for stroke 1.81 (1.45 to 2.27) after 10.4 years, and for venous thromboembolism 1.79 (1.37 to 2.33) after 4.7 years. No increase in risk of any cancer was found (0.96, 0.73 to 1.27), including breast cancer (1.04, 0.78 to 1.39) 17 years after pre-eclampsia. Overall mortality after pre-eclampsia was increased: 1.49 (1.05 to 2.14) after 14.5 years. Conclusions: A history of pre-eclampsia should be considered when evaluating risk of cardiovascular disease in women. This association might reflect a common cause for pre-eclampsia and cardiovascular disease, or an effect of pre-eclampsia on disease development, or both. No association was found between pre-eclampsia and future cancer. |
0.999978 | Gold continued its dizzying decline. Today's drop in gold landed right at the boundary of the flag (D1) that I have written about more than once. A further tempting fall could take gold down toward the 1600-1575 levels, although of course these are only rough estimates for now. However, a reversal is still quite possible, and a double bottom could well form on the same D1 chart, with similarly lofty targets around 1850.
But let's not get that far ahead of ourselves. Over the past day gold literally collapsed to my last sell target at the 1694.70 level, where it got stuck until the end of the day, setting new key boundaries. The key resistance now becomes the 1701.94 level and the key support 1689.84; the buy targets become the levels 1704.74, 1708.72, 1710.81, 1714.08, 1717.72 and 1724.21, and the further sell targets 1679.95, 1674.80, 1669.46 and 1664.80.
It is also worth noting that this week's agenda still includes unresolved problems in the EU (apart, of course, from the Greek and Spanish questions), the publication of decisions on further monetary policy in the EU and Britain, and the release of US unemployment data. All of these releases carry considerable weight in the forex market.
The auction, to be held in October, will take place in Suffolk, England. In addition to Mourinho, many of the biggest names in football have also been involved in the Robson Foundation's fundraising activities. Manchester United's Sir Alex Ferguson will offer VIP tickets for the auction, and has also given an autographed Pele autobiography. With the creation of the virtual planet of the world wide web, it is fairly easy to get the very best deals on this kind of trendy apparel. All that you ought to do is pay a visit to the web site of the Moncler on-line store. It permits you to examine the competitive costs and review the caliber of the clothing. Search forward for a discount Moncler women's jacket and get a wonderful Moncler look this winter; I strongly recommend you choose Moncler. This is a well-known brand; most people love it. Moncler has many different styles and designs. Of course, if it is designed in France with Italian processes, American marketing and Japanese fabrics, it must be the best. If not, at least you can take into account those antique brands and learn from the stars. It's certainly proper to refer to the public's election. |
0.999962 | Water vapour is the most dominant greenhouse gas. The greenhouse effect or radiative flux for water is around 75 W/m2 while carbon dioxide contributes 32 W/m2 (Kiehl 1997). These proportions are confirmed by measurements of infrared radiation returning to the Earth's surface (Evans 2006). Water vapour is also the dominant positive feedback in our climate system and a major reason why temperature is so sensitive to changes in CO2.
Unlike external forcings such as CO2, which can be added to the atmosphere, the level of water vapour in the atmosphere is a function of temperature. Water vapour is brought into the atmosphere via evaporation - the rate depends on the temperature of the ocean and air, being governed by the Clausius-Clapeyron relation. If extra water is added to the atmosphere, it condenses and falls as rain or snow within a week or two. Similarly, if somehow moisture was sucked out of the atmosphere, evaporation would restore water vapour levels to 'normal levels' in a short time.
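To illustrate the Clausius-Clapeyron relation numerically, the sketch below uses the August-Roche-Magnus approximation for saturation vapour pressure; the constants are one common published fit and should be treated as illustrative:

    import math

    def saturation_vapour_pressure(t_celsius: float) -> float:
        """August-Roche-Magnus approximation to Clausius-Clapeyron:
        saturation vapour pressure in hPa at air temperature t (deg C)."""
        return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

    # Fractional increase in water-holding capacity per degree of warming,
    # evaluated near 15 deg C -- roughly 7% per degree.
    e0 = saturation_vapour_pressure(15.0)
    e1 = saturation_vapour_pressure(16.0)
    print(f"{100 * (e1 / e0 - 1):.1f}% more water vapour per deg C")

This agrees with the 6 to 7.5% per degree Celsius figure quoted further below.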
As water vapour is directly related to temperature, it's also a positive feedback - in fact, the largest positive feedback in the climate system (Soden 2005). As temperature rises, evaporation increases and more water vapour accumulates in the atmosphere. As a greenhouse gas, the water absorbs more heat, further warming the air and causing more evaporation. When CO2 is added to the atmosphere, as a greenhouse gas it has a warming effect. This causes more water to evaporate and warm the air to a higher, stabilized level. So the warming from CO2 has an amplified effect.
How much does water vapour amplify CO2 warming? Without any feedbacks, a doubling of CO2 would warm the globe around 1°C. Taken on its own, water vapour feedback roughly doubles the amount of CO2 warming. When other feedbacks are included (eg - loss of albedo due to melting ice), the total warming from a doubling of CO2 is around 3°C (Held 2000).
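The amplification can be written as the standard feedback formula, total warming = no-feedback warming / (1 - f). The feedback fractions below are back-calculated from the figures in the text rather than taken from any particular model:

    def amplified_warming(delta_t_no_feedback: float, f: float) -> float:
        """Feedback amplification: total warming = dT0 / (1 - f),
        valid for a net feedback fraction f < 1."""
        return delta_t_no_feedback / (1.0 - f)

    # ~1 C for doubled CO2 with no feedbacks; water vapour alone roughly
    # doubles that (f ~ 0.5); all feedbacks together give ~3 C (f ~ 2/3).
    print(amplified_warming(1.0, 0.5))      # -> 2.0
    print(amplified_warming(1.0, 2.0 / 3))  # -> ~3.0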
The amplifying effect of water vapor has been observed in the global cooling after the eruption of Mount Pinatubo (Soden 2001). The cooling led to atmospheric drying which amplified the temperature drop. A climate sensitivity of around 3°C is also confirmed by numerous empirical studies examining how climate has responded to various forcings in the past (Knutti & Hegerl 2008).
Satellites have observed an increase in atmospheric water vapour by about 0.41 kg/m² per decade since 1988. A detection and attribution study, otherwise known as "fingerprinting", was employed to identify the cause of the rising water vapour levels (Santer 2007). Fingerprinting involves rigorous statistical tests of the different possible explanations for a change in some property of the climate system. Results from 22 different climate models (virtually all of the world's major climate models) were pooled and found the recent increase in moisture content over the bulk of the world's oceans is not due to solar forcing or gradual recovery from the 1991 eruption of Mount Pinatubo. The primary driver of 'atmospheric moistening' was found to be the increase in CO2 caused by the burning of fossil fuels.
Theory, observations and climate models all show the increase in water vapor is around 6 to 7.5% per degree Celsius warming of the lower atmosphere. The observed changes in temperature, moisture, and atmospheric circulation fit together in an internally and physically consistent way. When skeptics cite water vapour as the most dominant greenhouse gas, they are actually invoking the positive feedback that makes our climate so sensitive to CO2 as well as another line of evidence for anthropogenic global warming. |
0.999947 | What is health? It is important to first define health because headache is a symptom of some underlying cause, especially chronic headache. Headache is an indication of lack of health.
Health is the optimal functioning of the unity of body, mind and soul. It is not just absence of disease.
Physical health is just the outward representation of the health of the mind and soul.
And then there are subcategories, such as migraine with aura and migraine without aura.
Headaches are diagnosed according to the characteristics of the pain.
Note the location of the pain, the type of pain, any exacerbating or relieving factors, what triggers the headache, the presence or absence of auras, and family history.
The factors that make you who you are: personality, likes/dislikes, the type of work you do, the level of stress you face on a daily basis and your ability to cope with it, and most importantly, your purpose in life.
The first job is to determine if your headache is primary or secondary.
Secondary does not mean that it is not as important as primary.
Secondary means that the headache is due to some underlying medical problem.
All headaches are either primary or secondary. |
0.998361 | How to make: You can begin by preheating the oven to 325 degrees. Take a small bowl and add in the pecans along with the 2 tbsp of oil and the salt. Scatter the pecans on a baking sheet and bake them until they turn golden and fragrant, about 15 minutes. Turn the nuts halfway through the baking process, after which you can set them aside for cooling. Meanwhile, take a pot of water and bring it to a boil. Fill a bowl with ice water and blanch the peas in the boiling water for one minute. Take the peas out with a slotted spoon and transfer them to the ice water. Drain off the peas and put them in a serving bowl. Take another small bowl and add in the vinegar, oil, garlic and basil. Season the mixture with salt and pepper. Pour this mixture over the peas and coat. Top the salad with the pecans to complete the dish. It is a unique and healthy dish to serve, and if you are on a diet or are trying to detox, this kind of dish will help. It is important that one makes healthy food and cooking part of one's daily habit. |
0.999978 | I predict that the best temperature for the reaction to take place will be at around 40 degrees. I made this assumption on the basis that 40 degrees is the closest to body temperature, and so this would have to be the best temperature for the reaction to take place. It can also be said that below 40 degrees the enzymes will have less energy and hence will move around less.
Therefore there is less chance that they will collide with the photographic film, meaning the enzyme will take longer to fully react with the film. Also, above 40 degrees the enzyme will be affected by the high temperatures and will begin to denature. When an enzyme becomes denatured its active site changes shape and so it cannot break down any substances. Therefore above 40 degrees enzymes will be denatured, unable to break down the photographic film, and so the reaction will take longer.
However, it should be noted that this is merely a sketch and thus is not accurate in terms of scale or accuracy in comparison with the end results table.
* They reduce the amount of energy for molecules to react.
* They remain the same after chemical reaction.
* They are specific in the type of substrate molecule.
* They are affected by temperature, as is demonstrated by this experiment.
* They are affected by pH level.
Trypsin, the enzyme to be used in this experiment, is a protease and breaks down proteins and polypeptides to amino acids; it is found in pancreatic juice and is produced in the pancreas.
1. Prepare 15 wooden splints.
2. Prepare 15 test tubes in a test tube rack.
3. Measure 3cm³ of trypsin using a measuring cylinder, ensuring the measurement is taken from the bottom of the meniscus.
4. Place this volume into each test tube.
5. Repeat for other test tubes.
6. Cut a notch on the end of one side of all the splints.
7. Carefully place a 1cm piece of photographic film in each notch on the splints.
8. Place three test tubes in the water bath.
9. Set the thermostat on the water bath to 10C.
10. Wait 10-12 minutes for the water to settle at 10C.
11. Place a splint with photographic film into each of the test tubes.
12. Start timing using the stopwatch.
13. Wait for the film to dissolve away.
14. When the film has eventually dissolved away, stop the timer and note down the time (see the sketch after this method for converting these times into reaction rates).
15. Clean the test tubes and apparatus thoroughly.
16. Repeat this process from step 1 to 15 for temperatures 20C, 40C, 60C and 80C.
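The following sketch shows how the recorded times could be converted into mean reaction rates (rate = 1/time) to locate the optimum temperature. The times below are made up purely for illustration; the actual stopwatch readings would be substituted:

    # Three repeat times (seconds) at each temperature (deg C) -- hypothetical.
    times = {
        10: [410, 395, 420],
        20: [250, 240, 260],
        40: [95, 100, 90],
        60: [300, 320, 310],
        80: [900, 950, 880],
    }

    # Mean rate at each temperature: average of 1/time over the repeats.
    rates = {t: sum(1 / s for s in reps) / len(reps) for t, reps in times.items()}
    optimum = max(rates, key=rates.get)
    print(f"Optimum temperature ~{optimum} C")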
* Ensure the same amount of trypsin in used throughout the experiment.
* Ensure the same size of photographic film is used throughout the experiment.
* Carry out the experiment three times to ensure the results are valid and are reasonable.
* Ensure the trypsin is acclimatised before the experiment.
* Ensure the same water bath is used.
* Ensure the same test tubes are used.
* Ensure the apparatus is all clean.
* Use eye goggles to prevent the risk of enzymes getting into the eye and digesting eye tissues.
* Use an apron and gloves to reduce the risk of contact between human skin and enzymes.
* Use a safety mat in the case of any trypsin being spilt.
* It is not certain that the gelatine layer of the photographic film was completely dissolved at the time each result was recorded; this was judged by eye, which is inaccurate since microscopic traces of gelatine may have been left.
* The gelatine was not completely submerged; the splint notch partially covered the gelatine, and thus that part was not exposed to the trypsin.
* The water bath thermostat was not entirely correct since thermometer readings proved that the temperature of the water was not always consistent with the temperature on the thermostat.
* The stopwatch reads to only a limited number of decimal places, and combined with human reaction time this adds to the overall inaccuracy of the recorded times.
* The optimum temperature, or the peak of the graph, is not where it should be. The actual optimum temperature should be at 37C; however, this one is at 42C. Again, this is a result of not letting the enzymes acclimatise.
* If the water bath thermostat had been more accurate, the results would in turn have been more accurate.
Although many possible inaccuracies may have occurred, my main hypothesis was proven correct, as the final experiment produced the same curve predicted in my plan. In terms of accuracy, the results are reliable enough to conclude whether the predictions were correct. The anomalies were not too extreme, so the results can still be taken into account.
I could have made many improvements to make the experiment fairer and more accurate. An example is the range of temperatures: if the experiment had been carried out at 10C intervals, the graph would have been more defined and accurate.
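To support this evaluation, the timings can also be converted into rates of reaction (rate = 1/time), which makes fast and slow temperatures directly comparable when plotted. Below is a minimal Python sketch of that calculation; the times in it are illustrative placeholders, not the actual recorded results:

```python
# Minimal sketch: turn repeat timings (in seconds) into mean times and
# rates of reaction (1/s). The times below are illustrative, not real data.
results = {
    10: [820, 840, 860],
    20: [400, 420, 410],
    40: [150, 160, 155],
    60: [300, 320, 310],
    80: [900, 950, 940],
}

for temp_c, times in sorted(results.items()):
    mean_time = sum(times) / len(times)
    rate = 1 / mean_time  # a higher rate means faster breakdown of the film
    print(f"{temp_c}C: mean time {mean_time:.0f}s, rate {rate:.4f} per second")
```

Plotting the rate against temperature gives the same curve shape described above, with a clear peak near the optimum.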
By looking at my results I can conclude that the optimum temperature is near the average body temperature of 37C. This is because the enzymes are designed to work in the body, where the temperature is around 37C.
We can also tell from the graph that the enzymes stop working at low temperatures and denature at high temperatures. There were no noticeable anomalies outside the expected range of results. |
0.999391 | To test the Blackmagicdesign Decklink 4k Pro card that I want to test my open source SDI implementation photonSDI against, I connected a cheap HDMI to 3G SDI converter to an HDMI port on the computer, configured the HDMI output to 1080p60 mode (60 full frames per second with 1920x1080 pixels each) and connected the converter's SDI output to the input of the Decklink card. To transfer the 1080p60 video stream, the link has to run at 3G SDI line rate, but since both the converter and the capture card support 3G SDI, this should work out of the box. At least that’s what I thought. It turns out that it didn’t: the capture card detected the SDI format as 1080p30, only managed to capture a frame every 5 to 6 seconds, and even those frames were garbled.
When connecting the SDI output to a cheap 3G SDI to HDMI converter instead of the Decklink card, I got the expected 1080p60 signal on the HDMI output.
I still wonder if the problem is on the Decklink side or on the converter side, but I will probably find that out when I have photonSDI in a working state.
After some hours of not very successful debugging (but luckily before I started trying to build an FFmpeg version with Decklink support), Kjetil in the IRC channel #photonsdi on freenode suggested that the devices might be using incompatible channel mappings.
Dual-link HD-SDI: The video stream is transferred as two streams that each use a separate HD-SDI (1.5G SDI) link.
3G SDI with channel mapping A (SMPTE 425 Level A): The video stream is transferred as one stream using the full bandwidth of the link.
3G SDI with channel mapping B (Level B-DL): The video stream is split into two streams with HD-SDI data rate that are multiplexed over the single 3G SDI link.
3G SDI dual-stream mapping (Level B-DS), which I include for completeness: This channel mapping allows two independent HD-SDI streams to be transferred over one 3G SDI link; this can be used to transfer stereoscopic material.
The Decklink card only supports the channel mapping B for 3G SDI video streams while the cheap HDMI to 3G SDI converter outputs 3G SDI with channel mapping A.
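To illustrate why a mapping mismatch garbles the picture, here is a conceptual Python sketch. It is not the actual SMPTE 425 bit layout (line numbering, ancillary data and the real substream split are ignored, and the half-frame split below is purely an illustrative assumption); it only shows that mappings A and B put the same payload on the wire in different orders, so a receiver expecting the other mapping misinterprets the stream:

```python
# Conceptual sketch only -- not the real SMPTE 425 mapping.
# Mapping A sends the 1080p60 frame as one full-rate stream;
# mapping B splits it into two HD-rate substreams and multiplexes them.

def mapping_a(frame_lines):
    """One full-bandwidth stream: lines leave in picture order."""
    return list(frame_lines)

def mapping_b(frame_lines):
    """Two virtual HD-SDI substreams interleaved onto one 3G link.
    The split into frame halves is an illustrative assumption."""
    half = len(frame_lines) // 2
    sub1, sub2 = frame_lines[:half], frame_lines[half:]
    link = []
    for a, b in zip(sub1, sub2):
        link.extend([a, b])  # words from both substreams alternate
    return link

frame = [f"line{i}" for i in range(6)]
print(mapping_a(frame))  # ['line0', 'line1', 'line2', ...]
print(mapping_b(frame))  # ['line0', 'line3', 'line1', ...] -- different order
```

A receiver that assumes mapping A would reassemble mapping B's word order into a scrambled frame, which is consistent with the garbled captures described above.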
When configuring the HDMI output of the computer to 1080p30, the converter outputs a HD-SDI video stream that the Decklink card can capture.
TL;DR: Beware that there are different incompatible 3G SDI channel mappings and not every device supports all of them. |
0.963947 | The Culture of Europe might better be described as a series of overlapping cultures of Europe. Whether it be a question of West as opposed to East, Catholicism and Protestantism as opposed to Eastern Orthodoxy, or Christianity as opposed to Islam, many have claimed to identify cultural faultlines across the continent.
Europe has been a cradle for many cultural innovations and movements, such as Humanism, that have consequently been spread across the globe. The Renaissance of classical ideas influenced the development of art and literature far beyond the confines of the continent.
One of the major problems in defining European culture is where Europe starts and where it ends. Most countries share common historical experiences, but several important faultlines appear. The first divides the lands that were at some point occupied by the Roman Empire from those that were not, along a line that runs through Hadrian’s Wall in the British Isles, along the Rhine and finally along the Danube. Another faultline is the Catholic-Orthodox divide caused by the Great Schism, which isolates Russia, Belarus, half of Ukraine (whether Uniate Ukraine is considered Orthodox or Catholic is a matter of debate) and Serbia.
Yet another faultline is the one that separates the lands once occupied by the Ottoman Empire from those that were not, which created the current Christian-Islam faultline that separates Albania, Bosnia and Turkey. Also notable is the faultline that separates the parts of Europe that went through industrialization in the 19th century, including Northern Italy. And finally, the most recent faultline is the infamous Iron Curtain. These faultlines are key to understanding the cultural similarities and differences in Europe.
They are also important for identifying which countries should be admitted into the European Union (as in the case of Turkey, or the 2004 separatist menace in Ukraine). Thus the question of “common culture” or “common values” is far more complex than it seems. |
0.947502 | Temporal and spatial separation of speakers will invariably introduce changes to the meanings of words, changes in pronunciation and eventually completely different languages. This means that within a splinter group an individual needs to introduce a subtle change to initiate the deviation process, and thus that the individual's conception of the semantics and pronunciation of a word is a (main) agent of change for the common language.
Through the process of maturation an individual will frequently need to change their understanding of common language words. Not only the semantic content, but also emotional connotations and personal preferences are attached to concepts within a person's mind. When a concept is invoked in common language interaction, the entire individual history of contact with that concept is recalled. These histories are, per concept, unique to individuals.
The implication is that common language is the consensual 'overlap' of individual private languages. This would seem to fall prey to Wittgenstein's stipulation that a private language should in principle not be translatable. However, while we may use wildly different mental faculties to 'process' a certain concept, when we even as much as think about communicating it we immediately invoke translation to common language. Therefore access to a private language is in principle impossible.
Are there any philosophers who have argued against the impossibility of a private language, and what are their arguments?
Sapir-Whorf has basically no academic credibility.
Wittgenstein argued that in so far as a system of symbols is private, it is not part of a language. Language is public, community use of symbols. This is built up from a picture theory of language, from an attempt to share mental models. For a mutation of a language to 'take', it is not the first initiation of that change that matters, but its being taken up by the community.
Qualia are pretty suspect in this view. Synaesthesia is an interesting case: high-functioning mathematical savantism, for instance, seems to be related to a kind of applied synaesthesia for memorising number properties and connections. In music or poetry, what is heard, or the message taken away, often has very little to do with the artist's intentions or mind during the creative process, yet things are still communicated, both intentionally and unintentionally - great songs frequently allow a multitude of readings to be projected.
Wittgenstein saw language as developing from a process of game playing, and held that languages aren't fixed sets of symbols, but emergent sets of language-games. Each participant offers up behaviours, and people either engage back or not, iterate, alter. And various language-games are more or less formal. This is a far more flexible and versatile model, able to accommodate, for instance, people from entirely different cultures, or species never in contact, attempting to begin communication using gestures and body language.
Owen Roger Jones, The Private Language Argument (Controversies in Philosophy), ISBN-10: 0333105109 / ISBN-13: 9780333105108, published by Macmillan / St Martin's Press, 1971.
Warren B. Smerud, Can There be a Private Language?: An Examination of Some Principal Arguments, published by Mouton & Co., The Hague, The Netherlands, 1970. |
0.999927 | In the simple past tense, sentences have the following structure: affirmative statements use subject + past tense form of the verb; negatives use subject + did not + base form of the verb; questions use did + subject + base form of the verb.
I saw him. I did not see him. He did not go to the market.
Did he go to the market?
1. I ----------------------- (see) your father yesterday.
2. He ----------------------- (say) nothing about your plans.
3. She ---------------------- (not move).
4. We ------------------------- (start) in the morning.
5. I ----------------------- (wait) for him for two hours.
6. She --------------------- (start) teaching at 19.
7. I ----------------------- (not say) anything to offend him.
8. The allegations ---------------------- (force) her to quit her job.
9. The man ---------------------- (leave) in a hurry.
10. She ---------------------- (want) to leave.
11. I ----------------------- (not understand) a word.
12. Susie ----------------------- (go) to the movies with her friends.
13. She -------------------------- (order) a pizza.
14. He ----------------------- (eat) nothing.
1. I saw your father yesterday.
2. He said nothing about your plans.
3. She did not move.
4. We started in the morning.
5. I waited for him for two hours.
6. She started teaching at 19.
7. I did not say anything to offend him.
8. The allegations forced her to quit her job.
9. The man left in a hurry.
10. She wanted to leave.
11. I did not understand a word.
12. Susie went to the movies with her friends.
13. She ordered a pizza.
14. He ate nothing. |
0.999998 | This is a member-owned resource, and with proper training can be used with permission.
Preparing a design for printing - "Slicers"
Training for each printer will provide specifics on how to prepare a file for printing, but all leverage software that controls how the printer creates layers or "slices" that make up the object. The general term for this type of software is a "slicer".
Simplify3D - A commercial slicer that works with many printers.
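To make the slicing idea concrete, here is a toy Python sketch of the core step: intersecting the model with horizontal planes every layer height. This is a deliberately simplified assumption about how slicers work internally, not how Simplify3D or any particular slicer is implemented; real slicers also generate perimeters, infill, supports and travel moves:

```python
# Toy sketch: find which mesh triangles cross each horizontal layer plane.
# Real slicers then turn those crossings into toolpaths (G-code).

def slice_mesh(triangles, layer_height=0.2):
    """triangles: list of ((x, y, z), (x, y, z), (x, y, z)) tuples, in mm."""
    z_max = max(z for tri in triangles for (_, _, z) in tri)
    layers = []
    z = 0.0
    while z <= z_max:
        crossing = [tri for tri in triangles
                    if min(p[2] for p in tri) <= z <= max(p[2] for p in tri)]
        layers.append((z, crossing))
        z += layer_height
    return layers

# a single toy triangle standing 1 mm tall
tri = ((0, 0, 0), (10, 0, 0), (5, 5, 1))
for z, tris in slice_mesh([tri], layer_height=0.5):
    print(f"z = {z:.1f} mm: {len(tris)} triangle(s) crossed")
```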
1) How long does training take?
The Basic Training for the 3D Touch printer takes less than 30 minutes. At this point you'll be able to print most items which do not have any special considerations.
2) What do you mean "special considerations"?
3) What does it cost to print something?
For the extrusion-based printers, nothing; but if you're using a lot of filament you should donate to the space to help cover filament costs so we don't run out. Remember that many other spaces charge by the hour or the gram; we don't want to have to do that.
As a cost guideline, during training you'll see how the software automatically calculates both the build time and the materials cost. This can be used to give you an idea of how much your print is costing the space.
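If you want a rough manual estimate of material cost before you even open the slicer, it is simple arithmetic. The figures in this sketch are assumptions, not the space's actual rates:

```python
# Rough material-cost estimate for an extrusion print.
# The spool price here is an example figure -- substitute the real one.

def filament_cost(print_grams, spool_price=25.0, spool_grams=1000):
    """Approximate cost in dollars of the filament one print uses."""
    return print_grams * spool_price / spool_grams

# a 40 g print from a $25-per-kilogram spool:
print(f"${filament_cost(40):.2f}")  # -> $1.00
```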
4) How long does it take to print something?
It's not quick. Large prints can take hours. So don't use the 3D printer to make something that could easily be cut on the Laser... it's not an efficient use of your time. Build time estimates are provided by the software, as you'll see in class, so you'll know ahead of time how long it will take.
5) Do I have to reserve the printer?
There is currently no reservation system for the 3D Touch printer. If you have a specific need or availability, contact the resource managers and something may be possible.
6) Can I print multiple colors or materials?
The 3D Touch is a dual-extruder model, and can print two different materials (or different colors of the same material). If you have not taken Advanced training for the 3D Touch, you are not authorized to change the filament or filament settings; contact a resource manager for help.
The Zprinter 450 prints in full color, but only one material (plaster).
7) I want to print in (odd color here) - Do we have that?
Assume the answer is "no". No special consideration for material color is available unless material has been donated, so if you have a specific need your best bet is to contact the Resource Managers and discuss the need. They will then help you find compatible material that you can purchase and use on the printer, with the help of a qualified individual to change the filament and settings.
8) I want to print in (random filament type I read about on the interwebz). Can I do that?
Contact the resource manager(s). If it's compatible and you supply the material, it may be possible... but don't buy any special filament before talking to the resource manager(s)!
9) Do I need to be physically in the space when printing?
Yes, or have someone watching the print for you. It does not need to be watched closely - just check at reasonable intervals to make sure nothing's going wrong. That way if something is wrong you can stop the job before either filament is wasted or the printer is damaged.
The Zprinter 450 can be run unattended at your own risk. You will still be charged for failed prints, unless the failure is due to an equipment issue (ex: machine breaks down mid print). |
0.963366 | Does your sushi contain parasites?
The increase in the popularity of eating raw fish in sushi and sashimi dishes has been blamed for a parallel increase in cases of anisakiasis. This painful stomach infection is caused by a group of parasitic roundworms belonging to the genus Anisakis.
Eating raw or undercooked flesh can be hazardous as this is the route by which several groups of parasites gain access to humans. These include tapeworms and roundworms that take up residence in our guts or burrow through into our blood system and are carried round to invade other organs of our body.
Many people are aware that beef and pork can be contaminated with the larvae of tapeworms, especially in countries where sanitation standards are poor, but fish are less often considered a source of parasitic infection. However, there are examples of both tapeworms and roundworms whose larvae live in the muscles of fish and develop into mature worms in animals that feed on infected fish, including humans. One such group of parasites is the anisakid nematodes.
These nematodes have a complex life cycle with several hosts.
Adult worms live in the intestine of marine mammals. Here they feed, grow and mate to produce eggs. The eggs pass out with the faeces, and when they reach saltwater, larvae develop whilst still inside the egg. Here they moult into second-stage larvae, and then the eggs hatch. If the free-swimming larvae are then eaten by their first hosts, which are crustaceans, they moult again into third-stage larvae.
They move up the food chain if their crustacean hosts are predated by squid or marine fish and here they migrate through the gut wall and form a cyst on the wall of organs or in the muscle. At this stage they are coiled in appearance and about 2cm long. Their final journey occurs if these hosts are then eaten by marine mammals such as dolphins or whales and the cycle is completed. The larval stage infective to marine mammals can also infect humans.
Infective larvae have been found in wild salmon, mackerel, herring and halibut as well as squid. In addition to eating raw or undercooked fish in dishes such as sushi or sashimi, people can be infected by eating lightly pickled or salted fish such as fermented herring in the Netherlands, marinated anchovies in Spain or cod livers in Scandinavia.
To avoid infection, fish need to have been frozen to below -20°C for several days or cooked at a temperature of at least 60°C. Many countries require fish to be frozen before sale to avoid the danger of infection, especially as most domestic freezers do not reach these low temperatures.
Anisakiasis, also known as herring worm disease, is caused by Anisakis simplex. It was first recognised in the 1960s and predominantly occurs in countries where raw or undercooked fish forms a large part of the diet. More than 1,000 cases are reported annually in Japan.
The Anisakis larvae try to burrow into the wall of the stomach or intestine but the thick wall prevents complete penetration and the larvae usually die there causing an immune response that produces a mass of tissue that could block the intestine. Symptoms include acute abdominal pain, nausea, vomiting, and mild fever. Allergic reaction such as rash and itching can occur and infrequently the response can cause anaphylaxis.
A recently published case study in the British Medical Journal reported the removal of a roundworm from the stomach of a man who had been suffering with mild fever, vomiting and acute stomach pain.
The worm was discovered during an endoscopy examination attached to a swollen area of the stomach lining and removed with a Roth net. These nets are usually used to remove polyps or foreign bodies from the stomach. The worm was identified as a member of the genus Anisakis.
Once the worm had been removed the patient’s symptoms disappeared. This patient was fortunate, if the worm is not removed symptoms such as peritonitis can occur or an immune response could have caused anaphylaxis.
Is the risk of infection really increasing?
An excellent study published in 2005, which assessed the risk of acquiring fish- or wild-meat-borne infections in Asia, concluded that the apparent increase in infection may be due to better diagnosis.
The authors found that the types of fish used in the preparation of dishes in Japanese restaurants and sushi bars were usually parasite free, but warned that fish, wild meat and squid sold in markets in rural areas for home consumption, or for local restaurants or street food shops, were heavily contaminated. They suggested travellers to Asia should be made aware of the risk of eating raw or undercooked dishes.
In contrast, recommendations by European Union regulatory bodies and the US Food and Drug Administration concerning the freezing of fish prior to consumption should, in general, protect diners in Japanese restaurants and sushi bars in these areas. So perhaps lovers of sushi in the West can relax.
I’m trying to find out if I am infested (or infected) with helminth(s).
I swallowed a morsel of Atlantic salmon flesh whilst slicing a whole fish ready for freezing. After removing a fish steak, I discovered a helminth in the flesh. It was, in fact, two worms attached to each other; a plump one and a slender individual. They had crawled out of a tunnel they had dug in the fish’s flesh whilst it was being frozen. I am scared that I swallowed eggs, if there were any. I don’t think I ate a worm. I could not identify the worms. I don’t know if they are nematodes or cestodes. One of them appeared to have a scolex or similar appendage. The only symptom I observed was a change in the texture of my excrement. I went to my family doctor (GP), but she didn’t prescribe anthelmintic drugs. Can anyone help? I worry. |
0.999997 | It seems that the Alaskan Claptrap, Sarah Palin, has decided to chime in on the newest release of classified documents by a Sweden-based website run by Australian Julian Assange - the website goes by the name "Wikileaks."
Using the world's most sophisticated communication service, Twitter, Palin decided to send out the following message of disapproval to her fans: "Inexplicable: I recently won in court to stop my book 'America by Heart' from being leaked, but US Govt can't stop Wikileaks' treasonous act?"
Sarah Palin's dual claims are interesting. First, she claimed that she won a court battle involving the leak of her book, and second, she claimed that what Wikileaks did was "treasonous."
In regards to Palin's leaked book, HarperCollins, the publisher of Sarah Palin's book "America By Heart: Reflections on Family, Faith, and Flag," had reportedly settled with Gawker.com - the website that leaked several pages of Palin's book. While the terms of the settlement have not been disclosed at this time, Gawker editor Remy Stern commented on the settlement stating the leak probably bolstered Palin's book sales.
"[It] generated a good deal of press for Ms. Palin's book in advance of its publication . . . Now that the book is out and destined to appear on the best-seller list, we're pleased that HarperCollins proposed settling this case as is, thus avoiding lengthy litigation for both sides," Stern noted.
So, given the fact that HarperCollins, not Palin, settled with Gawker.com, Palin's claim that she single-handedly stopped her book from reaching the internet appears to be false.
Now onto Palin's second claim that Wikileaks committed a "treasonous act" by releasing the numerous documents.
In case you missed the first paragraph of this article, let me reiterate an important fact - Wikileaks is not American. The website is hosted in Sweden and it is run by an Australian. Unless Sweden and Australia are one of the 57 states Palin thanked in her retaliation against her North Korean gaffe, then it looks like Palin followed up her last gaffe with yet another bigger gaffe.
In addition to her twitter comments, Palin also took to her other preferred method of communication - Facebook. Palin released yet another note blasting the administration for failing to act after the first Wikileaks release and urging the government to pursue the Wikileaks founder as a terrorist - an action top Republicans also wish to take. Palin claims Assange is an "anti-American operative with blood on his hands," but neither she nor any other critic of Wikileaks has been able to prove that any recent event stemmed from the first leak of information.
While designating Wikileaks a terrorist organization is an interesting way to deal with the embarrassment of diplomatic cables being released to the public, it is a far more dangerous action than, say, anything Glenn Beck claimed "communist" Obama has done or plans to do. The GOP could potentially label any group they disagree with as a "terrorist organization" and act with virtual impunity.
Palin pondered just what exactly America did to prevent these leaks from happening, raising a couple questions of her own.
Does Palin even understand what she suggests?
NATO, or The North Atlantic Treaty Organization, is an intergovernmental military alliance (thank you Wikipedia) whose member states agree to mutual defense in response to an attack from an outside party. Did Wikileaks launch a military attack against America, and would Palin suggest NATO allies march troops into Sweden to take down Assange's servers?
Does Palin not realize that Sweden is militarily neutral? Would she suggest action similar to that taken by Adolf Hitler during World War II?
Let's go back to Palin's "treasonous acts" comment for a second.
Wouldn't a person have to be a citizen of a nation in order to commit a treasonous act against that nation, and if it was a treasonous act, then why would Palin want to involve foreign nations in dealing with a domestic problem? Why would NATO respond to an act of treason? If that was the case, would Palin support foreign troops on American soil?
Palin also asks if individuals working for Wikileaks were investigated? Investigated by who? I think Sweden may be outside United States jurisdiction, or maybe Palin is trying to prolong the spirit of the Bush Doctrine - you know, that thing that was the subject of that "gotcha" question asked by Charlie Gibson, which involves such concepts as preventative war and the right for America to secure itself against countries that harbor or give aid to terrorist groups. Being that Palin claims Julian Assange is an anti-American operative, would Palin argue America has the right to enter European countries that are complicit with Assange's actions, like say, hosting his websites or allowing Assange to take up residence?
Palin also wants the assets of anyone involved in this most recent leak frozen. Being that she believes Assange committed a treasonous act, would she then afford these individuals the right to due process - the same due process the right wingers felt the federal government deprived computer pirates and counterfeiters of when they seized thousands of piracy websites?
Though it remains to be seen whether the government will pursue legal action against WikiLeaks, precedent indicates it's unlikely.
In September, the Congressional Research Service released a report concluding that, "leaks of classified information to the press have only rarely been punished as crimes, and we are aware of no case in which a publisher of information obtained through unauthorized disclosure by a government employee has been prosecuted for publishing it."
From SP post on FB: The White House has now issued orders to federal departments... to take immediate steps to ensure that no more leaks like this happen again. ... But why did the White House not publish these orders after the first leak back in July? What explains this strange lack of urgency on their part?
This is clearly meant to imply that more urgent action on the part of the WH would have prevented this second leak. Problem? It appears that Manning leaked the documents all at once; WikiLeaks chose to stage the release.
July 2010: WikiLeaks publishes some 92,000 U.S. military documents termed the Afghan War Diaries.
October 2010: Some 400,000 documents on the Iraq War that the Pentagon called “the largest leak of classified documents in its history” are posted by WikiLeaks.
While I'll be the first person to admit Palin is incapable of doing the proper research to determine whether what she wants to say is applicable to the situation she is writing about, not ALL of her followers can be this gullible. How can some of them not see just how ignorant this woman is about how the justice system works?
Palin's solution to every situation she encounters is to react. Whether it is something that personally affects her, as in her North Korea gaffe, or something like the Wikileaks debacle, which she uses as a personal tool to attack our President again, makes no difference to her. She just has to react. It is a calling with her. She has this overwhelming desire and need to speak out in order to attract attention. In fact, she demands attention, craves attention, like an addict craves drugs. What is most important is that we don't have to react to her. Every time we do, we fulfill her need for attention. If we treated her in the manner she most deserves, we'd ignore everything she said. It's difficult because this woman's ego is so overwhelmingly large that her saturation of the media with all things Palin has become an assault on us whether we're reading the newspaper, watching TV or perusing the internet. We need to use a trait that she shows no sign of having: the self-control to ignore her blatant hypocrisy, lies and attempts at manipulating the media at every turn. Only in ignoring her will we win; in doing so, we will prove to the media that she is no longer a page turner, that nobody is clicking on links that highlight her latest mishap, and that we're not watching her program on TLC. In other words, let them know with our actions that we're not interested, because they are not paying attention to our words.
I agree that Palin needs to be ignored, but a problem with that is by ignoring people like Palin, it perpetuates the Spiral of Silence. By remaining quiet, it gives the perception that these people represent the majority opinion, which is definitely not true, and by ignoring them things will only get worse.
I thought your description of Palin was very interesting: "She has this overwhelming desire and need to speak out in order to attract attention. In fact, she demands attention, craves attention, like an addict craves drugs."
It made me think of the disorder Munchausen Syndrome. |
0.999937 | How many plants should we grow, to have enough food for our household, but not the whole neighbourhood?
It depends on many factors: how much space you have in your garden, etc.
As well, it depends on how much you like to eat fresh produce! Can you eat a pint of cherry tomatoes a day, in season? Or, you’re good with a pint a week?
Therefore, there isn’t a cut-and-dried answer that will suit every household, but there are some guidelines that can help you in your planning.
Here are some estimates for some of the most popular crops, and how many plants will be required for a household of 2-3 people.
Zucchini: 1-2 plants can provide an abundance of zucchini for a household; harvest every 2-3 days for optimal-sized zucchini. If there’s one vegetable to not over-plant, it’s zucchini; most households have enough with 1-2 plants, unless they really love it, or want some to preserve for the winter (which is a good idea).
Spinach: 10 feet of spinach, with seeds spaced ~ 4-5 inches apart, can provide enough for fresh salads for a household; plant every 2 weeks in the summer, or every 4 weeks in the spring and late summer, for a continuous supply. It grows and germinates best in cool weather, so it may be best to plant it mostly in the spring and fall.
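To turn guidelines like these into numbers for your own household, a back-of-the-envelope calculation is enough. The per-plant yield in this Python sketch is an assumption; adjust it for your crops and climate:

```python
# Rough planning aid: plants needed from weekly appetite and per-plant yield.
import math

def plants_needed(weekly_servings, servings_per_plant_per_week):
    """Round up, since you can't grow a fraction of a plant."""
    return math.ceil(weekly_servings / servings_per_plant_per_week)

# e.g. a household eating 5 servings of zucchini a week, with one plant
# yielding roughly 4 servings a week in season:
print(plants_needed(5, 4))  # -> 2, in line with the 1-2 plant guideline
```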
*More details to come, regarding succession planting (aka: how often to plant crops for a continuous supply, and the earliest and latest dates each crop can be planted). |
0.989691 | While many of the projects we’ve covered build on the web as we know it or operate like the browsers we’re familiar with, the Aragon project has a broader vision: Give people the tools to build their own autonomous organizations with social mores codified in smart contracts. I hope you enjoy this introduction to Aragon from project co-founder Luis Cuende.
I’m Luis. I cofounded Aragon, which allows for the creation of decentralized organizations. The principles of Aragon are embodied in the Aragon Manifesto, and its format was inspired by the Mozilla Manifesto!
We are in a key moment in history: Technology either oppresses or liberates us.
That outcome will depend on common goods being governed by the community, and not just nation states or corporate conglomerates.
For that to happen, we need technology that allows for decentralized governance.
Thanks to crypto, decentralized governance can provide new means of organization that don’t entail violence or surveillance, therefore providing more freedom to the individual and increasing fairness.
With Aragon, developers can create new apps, such as voting mechanisms, that use smart contracts to leverage decentralized governance and allow peers to control resources like funds, membership, and code repos.
Aragon is built on Ethereum, which is a blockchain for smart contracts. Smart contracts are software that is executed in a trust-less and transparent way, without having to rely on a third-party server or any single point of failure.
Aragon is at the intersection of social, app platform, and blockchain.
The Aragon app is one of few truly decentralized apps. Its smart contracts and front end are upgradeable thanks to aragonOS and the Aragon Package Manager (APM). You can think of APM as a fully decentralized and community-governed NPM. The smart contracts live on the Ethereum blockchain, and APM keeps a log of their versions. APM also keeps a record of arbitrary data blobs hosted on decentralized storage platforms like IPFS, which in our case we use for storing the front ends for the apps.
The Aragon app allows users to install new apps into their organization, and those apps are embedded using sandboxed iframes. All the apps use Aragon UI, therefore users don’t even know they are interacting with apps made by different developers. Aragon has a very rich permission system that allows users to set what each app can do inside their organization. An example would be: Up to $1 can be withdrawn from the funds if there’s a vote with 51% support.
To create an Aragon app, you can go to the Aragon Developer portal. Getting started is very easy.
First, install IPFS if you don’t have it already installed.
Here we will show a basic counter app, which allows members of an organization to count up or down if a democratic vote happens, for example.
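The contract code itself did not survive in this post, so here is a minimal sketch of what such a counter app could look like, loosely modelled on the aragonOS app pattern. Treat it as an illustration under assumptions: the role names, pragma version and exact import path may differ from the official tutorial:

```solidity
pragma solidity ^0.4.24;

// Minimal sketch of an Aragon counter app; names are illustrative.
import "@aragon/os/contracts/apps/AragonApp.sol";

contract CounterApp is AragonApp {
    // Roles let the organization's permission system decide who may act,
    // e.g. "only if a 51% vote passes".
    bytes32 public constant INCREMENT_ROLE = keccak256("INCREMENT_ROLE");
    bytes32 public constant DECREMENT_ROLE = keccak256("DECREMENT_ROLE");

    uint256 public value;

    event Increment(address indexed entity);
    event Decrement(address indexed entity);

    function initialize() public onlyInit {
        initialized();
    }

    function increment() external auth(INCREMENT_ROLE) {
        value += 1;
        emit Increment(msg.sender);
    }

    function decrement() external auth(DECREMENT_ROLE) {
        value -= 1;
        emit Decrement(msg.sender);
    }
}
```

Once published through APM, who may call increment() or decrement() is governed entirely by the permissions the organization sets, as described above.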
aragon run takes care of updating your app on APM and uploading your local webapp to IPFS, so you don’t need to worry about it!
You can go to Aragon’s website or the Developer Portal to learn more about Aragon. If you are interested in decentralized governance, you can also check out our research forum.
If you would like to contribute, you can look at our good first issues.
If you have any questions, please join the Aragon community chat!
Luis is CEO of Aragon One, one of the teams working on the Aragon project. Luis was awarded as the best underage European programmer at the age of 15 and has been listed in Forbes 30 Under 30 and MIT TR35. He cofounded the blockchain startup Stampery and has been into crypto since 2011. His first open source project was a Linux distribution focused on UX. |
0.967007 | The American Football League (AFL) was a major professional American football league that operated for ten seasons from 1960 until 1969, when it merged with the older National Football League (NFL). The upstart AFL operated in direct competition with the more established NFL throughout its existence. It was more successful than the earlier rivals to the NFL that shared its name - the American Football League (1926), the American Football League (1936) and the American Football League (1940) - and than the later All-America Football Conference (1944–1950, played 1946–1949).
This fourth version of the AFL was the most successful, created by a number of owners who had been refused NFL expansion franchises or had minor shares of NFL franchises. The AFL's original lineup consisted of an Eastern division of the New York Titans, Boston Patriots, Buffalo Bills, and the Houston Oilers, and a Western division of the Los Angeles Chargers, Denver Broncos, Oakland Raiders, and Dallas Texans. The league first gained attention by signing 75% of the NFL's first-round draft choices in 1960, including Houston's successful signing of college star and Heisman Trophy winner Billy Cannon.
While the first years of the AFL saw uneven competition and low attendance, the league was buttressed by a generous television contract with the American Broadcasting Company (ABC) (followed by a contract with the competing National Broadcasting Company (NBC) for games starting with the 1965 season) that broadcast the more offense-oriented football league nationwide. Continuing to attract top talent from colleges and the NFL by the mid-1960s, as well as successful franchise shifts of the Chargers from L.A. south to San Diego and the Texans north to Kansas City (becoming the Kansas City Chiefs), the AFL established a dedicated following. The transformation of the struggling Titans into the New York Jets under new ownership further solidified the league's reputation among the major media.
As fierce competition made player salaries skyrocket in both leagues, especially after a series of "raids", the leagues agreed to a merger in 1966. Among the conditions were a common draft and a championship game between the two league champions, first played in early 1967, which would eventually become known as the Super Bowl.
The AFL and NFL operated as separate leagues until 1970, with separate regular season and playoff schedules except for the championship game. NFL Commissioner Pete Rozelle also became chief executive of the AFL from July 26, 1966, through the completion of the merger. During this time the AFL expanded, adding the Miami Dolphins and Cincinnati Bengals. After losses by Kansas City and Oakland in the first two AFL-NFL World Championship Games to the Green Bay Packers (1967/1968), the New York Jets and Kansas City Chiefs won Super Bowls III and IV (1969/1970) respectively, cementing the league's claim to being an equal to the NFL.
In 1970, the AFL was absorbed into the NFL and the league reorganized with the ten AFL franchises along with the previous NFL teams Baltimore Colts, Cleveland Browns, and Pittsburgh Steelers becoming part of the newly-formed American Football Conference.
During the 1950s, the National Football League had grown to rival Major League Baseball as one of the most popular professional sports leagues in the United States. One franchise that did not share in this newfound success of the league was the Chicago Cardinals — owned by the Bidwill family — who had become overshadowed by the more popular Chicago Bears. The Bidwills hoped to relocate their franchise, preferably to St. Louis, but could not come to terms with the league on a relocation fee. Needing cash, the Bidwills began entertaining offers from would-be investors, and one of the men who approached the Bidwills was Lamar Hunt, son and heir of millionaire oilman H. L. Hunt. Hunt offered to buy the Cardinals and move them to Dallas, where he had grown up. However, these negotiations came to nothing, since the Bidwills insisted on retaining a controlling interest in the franchise and were unwilling to move their team to a city where a previous NFL franchise had failed in 1952. While Hunt negotiated with the Bidwills, similar offers were made by Bud Adams, Bob Howsam, and Max Winter.
When Hunt, Adams, and Howsam were unable to secure a controlling interest in the Cardinals, they approached NFL commissioner Bert Bell and proposed the addition of expansion teams. Bell, wary of expanding the 12-team league and risking its newfound success, rejected the offer. On his return flight to Dallas, Hunt conceived the idea of an entirely new league and decided to contact the others who had shown interest in purchasing the Cardinals. He contacted Adams, Howsam, and Winter (as well as Winter's business partner, Bill Boyer) to gauge their interest in starting a new league. Hunt's first meeting with Adams was held in March 1959. Hunt, who felt a regional rivalry would be critical for the success of the new league, convinced Adams to join and found his team in Houston. Hunt next secured an agreement from Howsam to bring a team to Denver.
After Winter and Boyer agreed to start a team in Minneapolis-Saint Paul, the new league had its first four teams. Hunt then approached Willard Rhodes, who hoped to bring pro football to Seattle. However, the University of Washington was unwilling to let the fledgling league use Husky Stadium, probably due to the excessive wear and tear that would have been caused to the facility's grass surface (the stadium now has an artificial surface, and Seattle would gain entry into the NFL in 1976 with the Seattle Seahawks). With no place for his team to play, Rhodes' effort came to nothing. Hunt also sought franchises in Los Angeles, Buffalo and New York City. During the summer of 1959, he sought the blessings of the NFL for his nascent league, as he did not seek a potentially costly rivalry. Within weeks of the July 1959 announcement of the league's formation, Hunt received commitments from Barron Hilton and Harry Wismer to bring teams to Los Angeles and New York, respectively. His initial efforts for Buffalo, however, were rebuffed, when Hunt's first choice of owner, Pat McGroder, declined to take part; McGroder had hoped that the threat of the AFL would be enough to prompt the NFL to expand to Buffalo.
On August 14, 1959, the first league meeting was held in Chicago, and charter memberships were given to Dallas, New York, Houston, Denver, Los Angeles, and Minneapolis-Saint Paul. On August 22 the league was officially named the American Football League at a meeting in Dallas. The NFL's initial reaction was not as openly hostile as it had been with the earlier All-America Football Conference (Bell had even given his public approval), yet individual NFL owners soon began a campaign to undermine the new league. AFL owners were approached with promises of new NFL franchises or ownership stakes in existing ones. Only the party from Minneapolis-Saint Paul accepted, and the Minnesota group joined the NFL the next year in 1961; the Minneapolis-Saint Paul group were joined by Ole Haugsrud and Bernie Ridder in the new NFL team's ownership group, which was named the Minnesota Vikings. The older league also announced on August 29 that it had conveniently reversed its position against expansion, and planned to bring NFL expansion teams to Houston and Dallas, to start play in 1961. (The NFL did not expand to Houston at that time; the promised Dallas team – the Dallas Cowboys – actually started play in 1960, and the Vikings began play in 1961.) Finally, the NFL quickly came to terms with the Bidwills and allowed them to relocate the struggling Cardinals to St. Louis, eliminating that city as a potential AFL market.
Ralph Wilson, who owned a minority interest in the NFL's Detroit Lions at the time, initially announced he was placing a team in Miami, but like the Seattle situation, was also rebuffed by local ownership; given five other choices, Wilson negotiated with McGroder and brought the team that would become the Bills to Buffalo. Buffalo was officially awarded its franchise on October 28. During a league meeting on November 22, a 10-man ownership group from Boston (led by Billy Sullivan) was awarded the AFL's eighth team. On November 30, 1959, Joe Foss, a World War II Marine fighter ace and former governor of South Dakota, was named the AFL's first commissioner. Foss commissioned a friend of Harry Wismer's to develop the AFL's eagle-on-football logo. Hunt was elected President of the AFL on January 26, 1960.
The AFL's first draft took place the same day Boston was awarded its franchise, and lasted 33 rounds. The league held a second draft on December 2, which lasted for 20 rounds. Because the Raiders joined after the AFL draft, they inherited Minnesota's selections. A special allocation draft was held in January 1960, to allow the Raiders to stock their team, as some of the other AFL teams had already signed some of Minneapolis' original draft choices.
In November 1959, Minneapolis-Saint Paul owner Max Winter announced his intent to leave the AFL to accept a franchise offer from the NFL. In 1961, his team began play in the NFL as the Minnesota Vikings. Los Angeles Chargers owner Barron Hilton demanded that a replacement for Minnesota be placed in California, to reduce his team's operating costs and to create a rivalry. After a brief search, Oakland was chosen and an ownership group led by F. Wayne Valley and local real estate developer Chet Soda was formed. After initially being called the Oakland "Señores", the Oakland Raiders officially joined the AFL on January 30, 1960.
On June 9, 1960, the league signed a five-year television contract with ABC, which brought in revenues of approximately US$2,125,000 per year for the entire league. On June 17, the AFL filed an antitrust lawsuit against the NFL, which was dismissed in 1962 after a two-month trial. The AFL began regular-season play (a night game on Friday, September 9, 1960) with eight teams in the league — the Boston Patriots, Buffalo Bills, Dallas Texans, Denver Broncos, Houston Oilers, Los Angeles Chargers, New York Titans, and Oakland Raiders. Raiders' co-owner Wayne Valley dubbed the AFL ownership "The Foolish Club", a term Lamar Hunt subsequently used on team photographs he sent as Christmas gifts.
The Oilers became the first-ever league champions by defeating the Chargers, 24–16, in the AFL Championship on January 1, 1961. Attendance for the 1960 season was respectable for a new league, but not nearly that of the NFL. In 1960, the NFL averaged more than 40,000 fans per game, and its more popular teams regularly saw attendance figures in excess of 50,000 per game, while CFL attendances averaged approximately 20,000 per game. By comparison, AFL attendance averaged about 16,500 per game and generally hovered between 10,000-20,000 per game. Professional football was still primarily a gate-driven business in 1960, so low attendance meant financial losses. The Raiders, with a league-worst average attendance of just 9,612, lost $500,000 in their first year and only survived after receiving a $400,000 loan from Bills owner Ralph Wilson. In an early sign of stability, however, the AFL did not lose any teams after its first year of operation. In fact, the only major change was the relocation of the Chargers from Los Angeles to nearby San Diego (they would return to Los Angeles in 2017).
On August 8, 1961, the AFL challenged the Canadian Football League to an exhibition game that would feature the Hamilton Tiger-Cats and the Buffalo Bills, which was attended by 24,376 spectators. Playing at Civic Stadium in Hamilton, Ontario, the Tiger-Cats defeated the Bills 38–21 playing a mix of AFL and CFL rules.
While the Oilers found instant success in the AFL, other teams did not fare as well. The Oakland Raiders and New York Titans struggled on and off the field during their first few seasons in the league. Oakland's eight-man ownership group was reduced to just three in 1961, after heavy financial losses in their first season. Attendance for home games was poor, partly due to the team playing in the San Francisco Bay Area—which already had an established NFL team (the San Francisco 49ers)—but the product on the field was also to blame. After winning six games in their debut season, the Raiders won a total of three times in the 1961 and 1962 seasons. Oakland took part in a 1961 supplemental draft meant to boost the weaker teams in the league, but it did little good. They participated in another such draft in 1962.
The Raiders and Titans both finished last in their respective divisions in the 1962 season. The Texans and Oilers, winners of their divisions, faced each other for the 1962 AFL Championship on December 23. The Texans dethroned the two-time champion Oilers, 20–17, in a double-overtime contest that was, at the time, professional football's longest-ever game.
In 1963, the Texans became the second AFL team to relocate. Lamar Hunt felt that despite winning the league championship in 1962, the Texans could not succeed financially competing in the same market as the Dallas Cowboys, which entered the NFL as an expansion franchise in 1960. After meetings with New Orleans, Atlanta, and Miami, Hunt announced on May 22 that the Texans' new home would be Kansas City, Missouri. Kansas City mayor Harold Roe Bartle (nicknamed "Chief") was instrumental in his city's success in attracting the team. Partly to honor Bartle, the franchise officially became the Kansas City Chiefs on May 26.
The San Diego Chargers, under head coach Sid Gillman, won a decisive 51–10 victory over the Boston Patriots for the 1963 AFL Championship. Confident that his team was capable of beating the NFL-champion Chicago Bears (he had the Chargers' rings inscribed with the phrase "World Champions"), Gillman approached NFL Commissioner Pete Rozelle and proposed a final championship game between the two teams. Rozelle declined the offer; however, the game would be instituted three seasons later.
A series of events throughout the next few years demonstrated the AFL's ability to achieve a greater level of equality with the NFL. On January 29, 1964, the AFL signed a lucrative $36 million television contract with NBC (beginning in the 1965 season), which gave the league money it needed to compete with the NFL for players. Pittsburgh Steelers owner Art Rooney was quoted as saying to NFL Commissioner Pete Rozelle that "They don't have to call us 'Mister' anymore". A single-game attendance record was set on November 8, 1964, when 61,929 fans packed Shea Stadium to watch the New York Jets and Buffalo Bills.
The bidding war for players between the AFL and NFL escalated in 1965. The Chiefs drafted University of Kansas star Gale Sayers in the first round of the 1965 AFL draft (held November 28, 1964), while the Chicago Bears did the same in the NFL draft. Sayers eventually signed with the Bears. A similar situation occurred when the New York Jets and the NFL's St. Louis Cardinals both drafted University of Alabama quarterback Joe Namath. In what was viewed as a key victory for the AFL, Namath signed a $427,000 contract with the Jets on January 2, 1965 (the deal included a new car). It was the highest amount of money ever paid to a collegiate football player, and is cited as the strongest contributing factor to the eventual merger between the two leagues.
After the 1963 season, the Newark Bears of the Atlantic Coast Football League expressed interest in joining the AFL; concerns over having to split the New York metro area with the still-uncertain Jets were a factor in the Bears bid being rejected. In 1965, Milwaukee officials tried to lure an expansion team to play at Milwaukee County Stadium where the Green Bay Packers had played parts of their home schedule after an unsuccessful attempt to lure the Packers there full-time, but Packers head coach Vince Lombardi invoked the team's exclusive lease as well as sign an extension to keep some home games in Milwaukee until 1976. In early 1965, the AFL awarded its first expansion team to Rankin Smith of Atlanta. The NFL quickly counteroffered Smith a franchise, which Smith accepted; the Atlanta Falcons began play as an NFL franchise. In March 1965, Joe Robbie had met with Commissioner Foss to inquire about an expansion franchise for Miami. On May 6, after Atlanta's exit, Robbie secured an agreement with Miami mayor Robert King High to bring a team to Miami. League expansion was approved at a meeting held on June 7, and on August 16 the AFL's ninth franchise was officially awarded to Robbie and television star Danny Thomas. The Miami Dolphins joined the league for a fee of $7.5 million and started play in the AFL's Eastern Division in 1966. The AFL also planned to add two more teams by 1967.
In 1966, the rivalry between the AFL and NFL reached an all-time peak. On April 7, Joe Foss resigned as AFL commissioner. His successor was Oakland Raiders head coach and general manager Al Davis, who had been instrumental in turning around the fortunes of that franchise. No longer content with trying to outbid the NFL for college talent, the AFL under Davis started to recruit players already on NFL squads. Davis's strategy focused on quarterbacks in particular, and in two months he persuaded seven NFL quarterbacks to sign with the AFL. Although Davis's intention was to help the AFL win the bidding war, some AFL and NFL owners saw the escalation as detrimental to both leagues. Alarmed with the rate of spending in the league, Hilton Hotels forced Barron Hilton to relinquish his stake in the Chargers as a condition of maintaining his leadership role with the hotel chain.
The same month Davis was named commissioner, several NFL owners, along with Dallas Cowboys general manager Tex Schramm, secretly approached Lamar Hunt and other AFL owners and asked the AFL to merge. They held a series of secret meetings in Dallas to discuss their concerns over rapidly increasing player salaries, as well as the practice of player poaching. Hunt and Schramm completed the basic groundwork for a merger of the two leagues by the end of May, and on June 8, 1966, the merger was officially announced. Under the terms of the agreement, the two leagues would hold a common player draft. The agreement also called for a title game to be played between the champions of the respective leagues. The two leagues would be fully merged by 1970, NFL commissioner Pete Rozelle would remain as commissioner of the merged league, which would be named the NFL. Additional expansion teams would eventually be awarded by 1970 or soon thereafter to bring it to a 28-team league. The AFL also agreed to pay indemnities of $18 million to the NFL over 20 years. In protest, Davis resigned as AFL commissioner on July 25 rather than remain until the completion of the merger, and Milt Woodard was named president of the AFL, with the "commissioner" title vacated because of Rozelle's expanded role.
On January 15, 1967, the first-ever World Championship Game between the champions of the two separate professional football leagues, the AFL-NFL Championship Game (retroactively referred to as Super Bowl I), was played in Los Angeles. After a close first half, the NFL champion Green Bay Packers overwhelmed the AFL champion Kansas City Chiefs, 35–10. The loss reinforced for many the notion that the AFL was an inferior league. Packers head coach Vince Lombardi stated after the game, "I do not think they are as good as the top teams in the National Football League."
The second AFL-NFL Championship (Super Bowl II) yielded a similar result. The Oakland Raiders—who had easily beaten the Houston Oilers to win their first AFL championship—were overmatched by the Packers, 33–14. The more experienced Packers capitalized on a number of Raiders miscues and never trailed. Green Bay defensive tackle Henry Jordan offered a compliment to Oakland and the AFL, when he said, "... the AFL is becoming much more sophisticated on offense. I think the league has always had good personnel, but the blocks were subtler and better conceived in this game."
The AFL added its tenth and final team on May 24, 1967, when it awarded the league's second expansion franchise to an ownership group from Cincinnati, Ohio, headed by NFL legend Paul Brown. Although Brown had intended to join the NFL, he agreed to join the AFL when he learned that his team would be included in the NFL once the merger was completed. The Cincinnati Bengals began play in the 1968 season, finishing last in the Western Division.
Before Super Bowl III in January 1969, Jets quarterback Joe Namath publicly guaranteed a victory over the heavily favored NFL champion Baltimore Colts. Namath and the Jets made good on his guarantee as they held the Colts scoreless until late in the fourth quarter. The Jets won, 16–7, in what is considered one of the greatest upsets in American sports history. With the win, the AFL finally achieved parity with the NFL and legitimized the merger of the two leagues. That notion was reinforced one year later in Super Bowl IV, when the AFL champion Kansas City Chiefs upset the NFL champion Minnesota Vikings, 23–7, in the last championship game to be played between the two leagues. The Vikings, favored by 12½ points, were held to just 67 rushing yards.
The last game in AFL history was the AFL All-Star Game, held in Houston's Astrodome on January 17, 1970. The Western All-Stars, led by Chargers quarterback John Hadl, defeated the Eastern All-Stars, 26–3. Buffalo rookie back O.J. Simpson carried the ball for the last play in AFL history. Hadl was named the game's Most Valuable Player.
Prior to the start of the 1970 NFL season, the merged league was organized into two conferences of three divisions each. All ten AFL teams made up the bulk of the new American Football Conference. To avoid having an inequitable number of teams in each conference, the leagues voted to move three NFL teams to the AFC. Motivated by the prospect of an intrastate rivalry with the Bengals as well as by personal animosity toward Paul Brown, Cleveland Browns owner Art Modell quickly offered to include his team in the AFC. He helped persuade the Pittsburgh Steelers (the Browns' archrivals) and Baltimore Colts (who shared the Baltimore/Washington, D.C. market with the Washington Redskins) to follow suit, and each team received US $3 million to make the switch. All the other NFL squads became part of the National Football Conference.
Pro Football Hall of Fame receiver Charlie Joiner, who started his career with the Houston Oilers (1969), was the last AFL player active in professional football, retiring after the 1986 season, when he played for the San Diego Chargers.
The American Football League stands as the only professional football league to successfully compete against the NFL. When the two leagues merged in 1970, all ten AFL franchises and their statistics became part of the new NFL. Every other professional league that had competed against the NFL before the AFL–NFL merger had folded completely: the three previous leagues named "American Football League" and the All-America Football Conference. From an earlier AFL (1936–1937), only the Cleveland Rams (now the Los Angeles Rams) joined the NFL and are currently operating, as are the Cleveland Browns and the San Francisco 49ers from the AAFC. A third AAFC team, the Baltimore Colts (not related to the 1953–1983 Baltimore Colts or to the current Indianapolis Colts franchise), played only one year in the NFL, disbanding at the end of the 1950 season. The league resulting from the merger was a 26-team juggernaut (since expanded to 32) with television rights covering all of the Big Three television networks and teams in close proximity to almost all of the top 40 metropolitan areas, a fact that has precluded any other competing league from gaining traction since the merger; failed attempts to mimic the AFL's success included the World Football League (1974–75), United States Football League (1983–85), XFL (2001) and United Football League (2009–2012).
The AFL was also the most successful of numerous upstart leagues of the 1960s and 1970s that attempted to challenge a major professional league's dominance. All nine teams that were in the AFL at the time the merger was agreed upon were accepted into the league intact (as was the tenth team added between the time of the merger's agreement and finalization), and none of the AFL's teams have ever folded. For comparison, the World Hockey Association (1972–79) managed to have four of its six remaining teams merged into the National Hockey League, which actually caused the older league to contract a franchise, but WHA teams were forced to disperse the majority of their rosters and restart as expansion teams. The merged WHA teams were also not financially sound (in large part from the hefty expansion fees the NHL imposed on them), and three of the four were forced to relocate within 20 years. The American Basketball Association (1967–76) managed to have only four of its teams merged into the National Basketball Association, and the rest of the league was forced to fold. Both the WHA and ABA lost several teams to financial insolvency over the course of their existences. The Continental League, a proposed third league for Major League Baseball that was to begin play in 1961, never played a single game, largely because MLB responded to the proposal by expanding to four of that league's proposed cities. Historically, the only other professional sports league in the United States to exhibit a comparable level of franchise stability from its inception was the American League of Major League Baseball.
The NFL adopted some of the innovations introduced by the AFL immediately and a few others in the years following the merger. One was including the names on player jerseys. The older league also adopted the practice of using the stadium scoreboard clocks to keep track of the official game time, instead of just having a stopwatch used by the referee. The AFL played a 14-game schedule for its entire existence, starting in 1960. The NFL, which had played a 12-game schedule since 1947, changed to a 14-game schedule in 1961, a year after the American Football League instituted it. The AFL also introduced the two-point conversion to professional football thirty-four years before the NFL instituted it in 1994 (college football had adopted the two-point conversion in the late 1950s). All of these innovations pioneered by the AFL, including its more exciting style of play and colorful uniforms, have essentially made today's professional football more like the AFL than like the old-line NFL. The AFL's challenge to the NFL also laid the groundwork for the Super Bowl, which has become the standard for championship contests in the United States of America.
The NFL also adapted how the AFL used the growing power of televised football games, which were bolstered with the help of major network contracts (first with ABC and later with NBC). With that first contract with ABC, the AFL adopted the first-ever cooperative television plan for professional football, in which the proceeds were divided equally among member clubs. It featured many outstanding games, such as the classic 1962 double-overtime American Football League championship game between the Dallas Texans and the defending champion Houston Oilers. At the time it was the longest professional football championship game ever played. The AFL also appealed to fans by offering a flashier style of play (just like the ABA in basketball), compared to the more conservative game of the NFL. Long passes ("bombs") were commonplace in AFL offenses, led by such talented quarterbacks as John Hadl, Daryle Lamonica and Len Dawson.
Despite having a national television contract, the AFL often found itself trying to gain a foothold, only to come up against roadblocks. For example, CBS-TV, which broadcast NFL games, ignored and did not report scores from the innovative AFL, on orders from the NFL. It was only after the merger agreement was announced that CBS began to give AFL scores.
The AFL took advantage of the burgeoning popularity of football by locating teams in major cities that lacked NFL franchises. Hunt's vision not only brought a new professional football league to California and New York, but introduced the sport to Colorado, restored it to Texas and later to fast-growing Florida, as well as bringing it to New England for the first time in 12 years. Buffalo, having lost its original NFL franchise in 1929 and turned down by the NFL at least twice (1940 and 1950) for a replacement, returned to the NFL with the merger. The return of football to Kansas City was the first time that city had seen professional football since the NFL's Kansas City Blues/Cowboys of the 1920s; the arrival of the Chiefs, and the contemporary arrival of the St. Louis Football Cardinals, brought professional football back to Missouri for the first time since the temporary St. Louis Gunners of 1934.
If not for the AFL, at least 17 of today's NFL teams would probably never have existed: the ten teams from the AFL, and seven clubs that were instigated by the AFL's presence to some degree. Three NFL franchises were awarded as a direct result of the AFL's competition with the older league: the Minnesota Vikings, who were awarded to Max Winter in exchange for dropping his bid to join the AFL; the Atlanta Falcons, whose franchise went to Rankin Smith to dissuade him from purchasing the AFL's Miami Dolphins; and the New Orleans Saints, because of successful anti-trust legislation which let the two leagues merge, and was supported by several Louisiana politicians.
In the case of the Dallas Cowboys, the NFL had long sought to return to the Dallas area after the Dallas Texans folded in 1952, but was originally met with strong opposition from Washington Redskins owner George Preston Marshall, who had enjoyed a monopoly as the only NFL team to represent the American South. Marshall later changed his position after future Cowboys owner Clint Murchison bought the rights to Washington's fight song "Hail to the Redskins" and threatened to prevent Marshall from playing it at games. By then, the NFL wanted to quickly award the new Dallas franchise to Murchison so the team could immediately begin play and compete with the AFL's Texans. As a result, the Cowboys played their inaugural season in 1960 without the benefit of the NFL draft.
As part of the merger agreement, additional expansion teams would be awarded by 1970 or soon thereafter to bring the league to 28 franchises; this requirement was fulfilled when the Seattle Seahawks and the Tampa Bay Buccaneers began play in 1976. In addition, had it not been for the existence of the Oilers from 1960 to 1996, the Houston Texans also would likely not exist today; the 2002 expansion team restored professional football in Houston after the original charter AFL member Oilers relocated to become the Tennessee Titans.
Kevin Sherrington of The Dallas Morning News has argued that the presence of the AFL and the subsequent merger radically altered the fortunes of the Pittsburgh Steelers, saving the team "from stinking". Before the merger, the Steelers had long been one of the NFL's worst teams. Constantly lacking the money to build a quality team, the Steelers had posted only eight winning seasons, and just one playoff appearance, from their first year of existence in 1933 through the end of the 1969 season. They also finished with a 1–13 record in 1969, tied with the Chicago Bears for the worst record in the NFL. The $3 million indemnity that the Steelers received for joining the AFC with the rest of the former AFL teams after the merger helped them rebuild into a contender, drafting eventual Pro Football Hall of Famers like Terry Bradshaw and Joe Greene, and ultimately winning four Super Bowls in the 1970s. Since the 1970 merger, the Steelers have the NFL's highest winning percentage, the most total victories, the most trips to either conference championship game, are tied for the second-most trips to the Super Bowl (with the Dallas Cowboys and Denver Broncos, trailing only the New England Patriots), and have won an NFL-record six Super Bowl championships.
The AFL's free agents came from several sources. Some were players who could not find success playing in the NFL, while another source was the Canadian Football League. In the late 1950s, many players released by the NFL, or un-drafted and unsigned out of college by the NFL, went North to try their luck with the CFL, and later returned to the states to play in the AFL.
In the league's first years, players such as Oilers' George Blanda, Chargers/Bills' Jack Kemp, Texans' Len Dawson, the NY Titans' Don Maynard, Raiders/Patriots/Jets' Babe Parilli, Pats' Bob Dee proved to be AFL standouts. Other players such as the Broncos' Frank Tripucka, the Pats' Gino Cappelletti, the Bills' Cookie Gilchrist and the Chargers' Tobin Rote, Sam DeLuca and Dave Kocourek also made their mark to give the fledgling league badly needed credibility. Rounding out this mix of potential talent were the true "free agents", the walk-ons and the "wanna-be's", who tried out in droves for the chance to play professional American football.
After the AFL–NFL merger agreement in 1966, and after the AFL's Jets defeated the "best team in the history of the NFL", the Colts, a popular misconception fostered by the NFL and spread by media reports was that the AFL defeated the NFL only because of the Common Draft instituted in 1967. This was apparently meant to assert that the AFL could not have achieved parity as long as it had to compete with the NFL in the draft. But the 1968 Jets had less than a handful of "common draftees". Their stars were honed in the AFL, many of them since the Titans days. As noted below, the AFL got its share of stars long before the "common draft".
Players who chose the AFL to develop their talent included Lance Alworth and Ron Mix of the Chargers, who had also been drafted by the NFL's San Francisco 49ers and Baltimore Colts respectively. Both eventually were elected to the Pro Football Hall of Fame after earning recognition during their careers as being among the best at their positions. Among specific teams, the 1964 Buffalo Bills stood out by holding their opponents to a pro football record 913 yards rushing on 300 attempts, while also recording fifty quarterback sacks in a 14-game schedule.
Another example is cited by the University of Kansas website, which describes the 1961 Bluebonnet Bowl, won by KU, and goes on to say "Two Kansas players, quarterback John Hadl and fullback Curtis McClinton, signed professional contracts on the field immediately after the conclusion of the game. Hadl inked a deal with the [AFL] San Diego Chargers, and McClinton went to the [AFL] Dallas Texans." Between them, in their careers Hadl and McClinton combined for an American Football League Rookie of the Year award, seven AFL All-Star selections, two Pro Bowl selections, a team MVP award, two AFL All-Star Game MVP awards, two AFL championships, and a World Championship. And these were players selected by the AFL long before the "Common Draft".
In 2009, a five-part documentary series on the Showtime Network refuted many of the long-held misconceptions about the AFL. In it, Abner Haynes tells of how his father forbade him to accept being drafted by the NFL, after drunken scouts from that league had visited the Haynes home; the NFL Cowboys' Tex Schramm is quoted as saying that if his team had ever agreed to play the AFL's Dallas Texans, they would very likely have lost; George Blanda makes a case for more AFL players being inducted into the Pro Football Hall of Fame by pointing out that Hall of Famer Willie Brown was cut by the Houston Oilers because he couldn't cover Oilers flanker Charlie Hennigan in practice. Later, when Brown was with the Broncos, Hennigan needed nine catches in one game against the Broncos to break Lionel Taylor's professional football record of 100 catches in one season. Hennigan caught the nine passes and broke the record, even though he was covered by Brown; Blanda's point being that if Hennigan could do so well against a Hall of Fame defensive back, Hennigan deserves induction as well.
The AFL also spawned coaches whose style and techniques have profoundly affected the play of professional football to this day. In addition to AFL greats like Hank Stram, Lou Saban, Sid Gillman and Al Davis were eventual hall of fame coaches such as Bill Walsh, a protégé of Davis with the AFL Oakland Raiders for one season; and Chuck Noll, who worked for Gillman and the AFL LA/San Diego Chargers from 1960 through 1965. Others include Buddy Ryan (AFL's New York Jets), Chuck Knox (Jets), Walt Michaels (Jets), and John Madden (AFL's Oakland Raiders). Additionally, many prominent coaches began their pro football careers as players in the AFL, including Sam Wyche (Cincinnati Bengals), Marty Schottenheimer (Buffalo Bills), Wayne Fontes (Jets), and two-time Super Bowl winner Tom Flores (Oakland Raiders). Flores also has a Super Bowl ring as a player (1969 Kansas City Chiefs).
See main article: 2009 NFL season. As the influence of the AFL continues through the present, the 50th anniversary of its launch was celebrated during 2009. The season-long celebration began in August with the 2009 Pro Football Hall of Fame Game in Canton, Ohio between two AFC teams (as opposed to the AFC-vs-NFC format the game first adopted in 1971). The opponents were two of the original AFL franchises, the Buffalo Bills and Tennessee Titans (the former Houston Oilers). Bills owner Ralph C. Wilson Jr. (a 2009 Hall of Fame inductee) and Titans owner Bud Adams were at the time the only surviving members of the "Foolish Club", the eight original owners of AFL franchises (both are now deceased).
The Hall of Fame Game was the first of several "Legacy Weekends", during which each of the "original eight" AFL teams sported uniforms from their AFL era. Each of the 8 teams took part in at least two such "legacy" games. On-field officials also wore red-and-white-striped AFL uniforms during these games.
In the fall of 2009, the Showtime pay-cable network premiered a five-part documentary series produced by NFL Films that features vintage game film and interviews as well as more recent interviews with those associated with the AFL.
The NFL sanctioned a variety of "Legacy" gear to celebrate the AFL anniversary, such as "throwback" jerseys, T-shirts, signs, pennants and banners, including items with the logos and colors of the Dallas Texans, Houston Oilers, and New York Titans, the three of the Original Eight AFL teams which have changed names or venues. A December 5, 2009 story by Ken Belson in The New York Times quotes league officials as stating that AFL "Legacy" gear made up twenty to thirty percent of the league's annual $3 billion merchandise income. Fan favorites were the Denver Broncos' vertically striped socks, which could not be re-stocked quickly enough.
Division | Team | First AFL season | Home stadium(s) | AFL record (W–L–T) | AFL titles | Status/notes
Eastern | Boston Patriots | 1960 | Nickerson Field (1960–1962), Fenway Park (1963–1968), Alumni Stadium (1969) | 64–69–9 | 0 | Still active in the Greater Boston area. Moved to Foxborough, Massachusetts as the New England Patriots in 1971.
Eastern | Buffalo Bills | 1960 | War Memorial Stadium (1960–1969) | 67–71–6 | 2 | Still active in the Buffalo–Niagara Falls metropolitan area. Moved to Orchard Park, New York in 1973.
Eastern | Houston Oilers | 1960 | Jeppesen Stadium (1960–1964), Rice Stadium (1965–1967), Houston Astrodome (1968–1969) | 72–69–4 | 2 | Relocated to Memphis, Tennessee as the Tennessee Oilers in 1997, moved to Nashville, Tennessee in 1998, and renamed the Tennessee Titans in 1999.
Eastern | Miami Dolphins | 1966 | Miami Orange Bowl (1966–1969) | 15–39–2 | 0 | Still active in the Miami metropolitan area. In 2003, their home stadium, which previously had a Miami address, became part of Miami Gardens, Florida.
Eastern | New York Titans/Jets | 1960 | Polo Grounds (1960–1963), Shea Stadium (1964–1969) | 71–67–6 | 1 | Still active in the New York metropolitan area. Moved to East Rutherford, New Jersey in 1984.
Western | Cincinnati Bengals | 1968 | Nippert Stadium (1968–1969) | 7–20–1 | 0 | Still active in Cincinnati.
Western | Dallas Texans/Kansas City Chiefs | 1960 | Cotton Bowl (1960–1962), Municipal Stadium (1963–1969) | 92–50–5 | 3 | Still active in Kansas City.
Western | Denver Broncos | 1960 | Bears Stadium/Mile High Stadium (1960–1969) | 39–97–4 | 0 | Still active in Denver.
Western | Los Angeles/San Diego Chargers | 1960 | Los Angeles Memorial Coliseum (1960), Balboa Stadium (1961–1966), San Diego Stadium (1967–1969) | 88–51–6 | 1 | Returned to Los Angeles in 2017.
Western | Oakland Raiders | 1960 | Kezar Stadium (1960), Candlestick Park (1961), Frank Youell Field (1962–1965), Oakland–Alameda County Coliseum (1966–1969) | 80–61–5 | 1 | Relocated to Los Angeles in 1982, then returned to Oakland in 1995. Planning to relocate to Las Vegas, Nevada in 2019 or 2020.
Today, two of the NFL's eight divisions are composed entirely of former AFL teams, the AFC West (Broncos, Chargers, Chiefs, and Raiders) and the AFC East (Bills, Dolphins, Jets, and Patriots). Additionally, the Bengals now play in the AFC North and the Tennessee Titans (formerly the Oilers) play in the AFC South.
As of the 2017 NFL season, the Oakland–Alameda County Coliseum and the Los Angeles Memorial Coliseum are the last remaining active NFL stadiums that had been used by the AFL; the others have been repurposed (the former San Diego Stadium, Fenway Park, Nickerson Field, Alumni Stadium, Nippert Stadium, the Cotton Bowl, Balboa Stadium and Kezar Stadium), are still standing but currently vacant (the Houston Astrodome), or have been demolished. By the 2020 NFL season, both stadiums will be retired as the Raiders move into a newly built stadium in Las Vegas and the Los Angeles Rams move into the all-new Los Angeles Stadium at Hollywood Park.
From 1960 to 1968, the AFL determined its champion via a single-elimination playoff game between the winners of its two divisions. The home team alternated each year by division, so in 1968 the Jets hosted the Raiders even though Oakland had a better record (this was changed in 1969). In 1963, the Buffalo Bills and Boston Patriots finished tied with identical records of 7–6–1 in the AFL East Division. There was no tie-breaker protocol in place, so a one-game playoff was held in War Memorial Stadium in December. The visiting Patriots defeated the host Bills 26–8, then traveled to San Diego, where the Chargers completed a three-game season sweep over the weary Patriots with a 51–10 victory. A similar situation occurred in the 1968 season, when the Oakland Raiders and the Kansas City Chiefs finished the regular season tied with identical records of 12–2 in the AFL West Division. The Raiders beat the Chiefs 41–6 in a division playoff to qualify for the AFL Championship Game. In 1969, the final year of the independent AFL, professional football's first "wild card" playoffs were conducted. A four-team playoff was held, with the second-place team in each division playing the winner of the other division. The Chiefs upset the Raiders in Oakland 17–7 in the final AFL Championship Game. The Kansas City Chiefs were the first Super Bowl champion to win two road playoff games and the first wild-card team to win the Super Bowl, although the term "wild card" was coined by the media and not used officially until several years later.
The AFL did not play an All-Star game after its first season in 1960, but did stage All-Star games for the 1961 through 1969 seasons. All-Star teams from the Eastern and Western divisions played each other after every season except 1965. That season, the league champion Buffalo Bills played all-stars from the other teams.
After the 1964 season, the AFL All-Star game had been scheduled for early 1965 in New Orleans' Tulane Stadium. After numerous black players were refused service by a number of area hotels and businesses, black and white players alike called for a boycott. Led by Bills players such as Cookie Gilchrist, the players successfully lobbied to have the game moved to Houston's Jeppesen Stadium.
The following is a sample of some records set during the existence of the league. The NFL considers AFL statistics and records equivalent to its own.
Brown, Paul, with Jack Clary. PB: The Paul Brown Story. New York: Atheneum, 1979. ISBN 0-689-10985-7.
Dickey, Glenn. Just Win, Baby: Al Davis & His Raiders. New York: Harcourt Brace Jovanovich, 1991. ISBN 0-15-146580-0.
Gruver, Ed. The American Football League: A Year-By-Year History, 1960–1969. Jefferson, North Carolina: McFarland & Company, 1997. ISBN 0-7864-0399-3.
"History: The AFL." Pro Football Hall of Fame.
Maiorana, Sal. Relentless: The Hard-Hitting History of Buffalo Bills Football. Lenexa, Kansas: Quality Sports Publications, 1994. ISBN 1-885758-00-6.
Miller, Jeff. Going Long: The Wild Ten-Year Saga of the Renegade American Football League in the Words of Those Who Lived It. McGraw-Hill, 2003. ISBN 0-07-141849-0.
Shamsky, Art, with Barry Zeman. The Magnificent Seasons: How the Jets, Mets, and Knicks Made Sports History and Uplifted a City and the Country. New York: Thomas Dunne Books, 2004. ISBN 0-312-33358-7.
Milligan, Lloyd. "New Pact to Last at Least 3 Years: Woodard's Position Unsure After 1970 Merger." The New York Times, July 26, 1966. Retrieved April 25, 2018.
Gruver, The American Football League, p. 9.
Gruver, The American Football League, p. 13.
Gruver, The American Football League, pp. 13–14.
Gruver, The American Football League, p. 14.
Gruver, The American Football League, pp. 15–16.
Miller, Going Long, pp. 3–4.
"Kansas City Chiefs History – AFL Origins." KCChiefs.com. Archived February 5, 2007: https://web.archive.org/web/20070205213037/http://www.kcchiefs.com/history/. Retrieved February 7, 2007.
Warren, Matt. "September 4, 1985 – McGroder Joins The Wall Of Fame." BuffaloRumblings.com. Retrieved March 26, 2014.
Gruver, The American Football League, pp. 22–23.
"NFL History, 1951–1960." NFL.com. Archived February 9, 2007: https://web.archive.org/web/20070209180120/http://www.nfl.com/history/chronology/1951-1960. Retrieved February 8, 2007.
Loup, Rich. "The AFL: A Football Legacy (Part One)." CNNSI.com, January 22, 2001. Retrieved February 8, 2007.
Carter, Al. "Oilers leave rich legacy of low-budget absurdity." The Dallas Morning News, June 30, 1997. Archived January 6, 2007: https://web.archive.org/web/20070106015329/http://texnews.com/texsports97/oilers063097.html. Retrieved February 8, 2007.
Herskowitz, Mickey. "The Foolish Club" (PDF). Pro Football Weekly, 1974. Archived June 5, 2007: https://web.archive.org/web/20070605071618/http://www.kcchiefs.com/media/misc/5_the_foolish_club.pdf. Retrieved February 8, 2007.
"Canadian Football League 1960 Attendance." CFLdb Statistics.
Sabol, Steve (executive producer). Raiders – The Complete History (DVD). NFL Productions LLC, 2004.
"Touch down in T.O." The Globe and Mail, January 19, 2017.
"NFL History, 1961–1970." NFL.com. Archived February 5, 2007: https://web.archive.org/web/20070205052436/http://www.nfl.com/history/chronology/1961-1970. Retrieved February 8, 2007.
"New York Jets history." Sports Encyclopedia. Archived February 10, 2007: https://web.archive.org/web/20070210123412/http://www.sportsecyclopedia.com/nfl/nyj/jets.html. Retrieved February 8, 2007.
"Jets history – 1962." NewYorkJets.com. Archived November 14, 2006: https://web.archive.org/web/20061114025135/http://www.newyorkjets.com/team/history?year=1962. Retrieved February 8, 2007.
"Jets history – 1963." NewYorkJets.com. Archived November 14, 2006: https://web.archive.org/web/20061114025303/http://www.newyorkjets.com/team/history?year=1963. Retrieved February 8, 2007.
"1962 standings." Pro-Football-Reference.com. Archived February 7, 2007: https://web.archive.org/web/20070207123656/http://www.pro-football-reference.com/years/1962.htm. Retrieved February 8, 2007.
"Chiefs timeline – 1960s." KCChiefs.com. Archived January 24, 2007: https://web.archive.org/web/20070124191953/http://www.kcchiefs.com/history/60s/. Retrieved February 8, 2007.
Barber, Phil. "Gillman laid foundation for all who followed." NFL.com. Archived November 8, 2005: https://web.archive.org/web/20051108064658/http://www.nfl.com/news/story/6101341. Retrieved February 8, 2007.
Silverman, Steve. "The 'Other' League" (PDF). Pro Football Weekly, November 7, 1994. Archived June 5, 2007: https://web.archive.org/web/20070605071617/http://www.kcchiefs.com/media/misc/11_the_other_league.pdf. Retrieved February 8, 2007.
"Bears Seek Data on AFL." Asbury Park Press (Associated Press), January 12, 1964.
"Miami Dolphins Historical Highlights." MiamiDolphins.com. Archived February 7, 2007: https://web.archive.org/web/20070207055603/http://www.miamidolphins.com/newsite/history/historicalhighlights/historicalhighlights.asp. Retrieved February 8, 2007.
Dwyre, Bill. "Barron Hilton's Chargers turned short stay into long-term success." Los Angeles Times, November 30, 2009.
"Woodard in, Davis out in AFL." Milwaukee Sentinel (UPI), July 26, 1966, p. 2, part 2.
Cross, B. Duane. "The AFL: A Football Legacy (Part Two)." CNNSI.com, January 22, 2001. Retrieved February 8, 2007.
Maule, Tex. "Green Bay, Handily." Sports Illustrated, January 22, 1968. Retrieved February 9, 2007.
"He guaranteed it." Pro Football Hall of Fame. Retrieved February 9, 2007.
"Baltimore Colts history." Sports Encyclopedia. Archived February 10, 2007: https://web.archive.org/web/20070210173055/http://www.sportsecyclopedia.com/nfl/balticolts/baltcolts.html. Retrieved February 9, 2007.
Jackman, Phil. "Lifetime guarantee; Jets-Colts." Baltimore Sun, January 12, 1999. Archived September 30, 2007: https://web.archive.org/web/20070930014536/http://www.baltimoresun.com/sports/football/bal-mackey011299%2C0%2C4077047.story?coll=bal-sports-football. Retrieved February 9, 2007.
"Page 2's List for top upset in sports history." ESPN Page 2. Archived February 21, 2007: https://web.archive.org/web/20070221035618/http://espn.go.com/page2/s/list/010523upset.html. Retrieved February 9, 2007.
Wankel, Bob. "Eagles can win with right strategy." The Courier-Post, February 1, 2005. Retrieved February 9, 2007.
Gooden, Kenneth. "Can Hornets match greatest all-time upsets?" The State Hornet, November 19, 2003. Archived September 27, 2007: https://web.archive.org/web/20070927074841/http://media.www.statehornet.com/media/storage/paper1146/news/2003/11/19/Sports/Can-Hornets.Match.Greatest.AllTime.Upsets-2422553.shtml?sourcedomain=www.statehornet.com&MIIHost=media.collegepublisher.com. Retrieved February 9, 2007.
Shamsky, The Magnificent Seasons, p. 5.
"Super Bowl IV box score." SuperBowl.com. Archived January 1, 2007: https://web.archive.org/web/20070101112306/http://www.superbowl.com/history/boxscores/game/sbiv. Retrieved February 9, 2007.
"1970 AFL All-Star Game recap." Retrieved February 9, 2007.
Forbes, Gordon. "This time, realignment will be cool breeze." USA Today, March 22, 2001. Archived August 29, 2004: https://web.archive.org/web/20040829161949/http://lists.rollanet.org/pipermail/rampage/Week-of-Mon-20010319/001092.html. Retrieved February 9, 2007.
"Moment 26: Enter Art." ClevelandBrowns.com. Archived October 10, 2007: https://web.archive.org/web/20071010070953/http://www.clevelandbrowns.com/article.php?id=6085. Retrieved February 9, 2007.
"NBC gains broadcast rights to American Football League." NBC Sports History Page.
Sherrington, Kevin. "Dallas meeting in '66 saved Steelers from stinking." The Dallas Morning News, February 1, 2011. Retrieved February 6, 2011.
Acho, Jim. The "Foolish Club". Gridiron Press, 1997. Foreword by Miller Farr. OCLC 38596883.
Ross, Charles K. Outside the Lines: African Americans and the Integration of the National Football League. New York University Press, 1999. ISBN 0-8147-7495-4.
"Black football players boycott AFL All-Star game." The African American Registry. Archived December 25, 2006: https://web.archive.org/web/20061225020026/http://www.aaregistry.com/african_american_history/1950/Black_football_players_boycott_AFL_AllStar_game. Retrieved February 9, 2007.
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "American Football League". |
0.901207 | Does meat from animals fed antibiotics contain antibiotics when we eat it?
In short, no. The use of veterinary medicines – including antibiotics – can sometimes result in low concentrations of the medicine being present within the animal's system for a period of time. This is usually at a low level – measured in parts per million. Strict withdrawal periods are stipulated for each licensed medicine. These are based on rigorous testing regimes and give time for medicines to be excreted from the animal or to fall to a level that will not cause any adverse reaction in man should the meat be eaten. This means medicines must have almost entirely left the animal's body by the time meat or milk can enter the food chain. In summary, the current debate is not about antibiotics found in food, but about whether resistant bacteria are found in food and whether they can be transmitted to man. |
0.897387 | How does alcohol fit into the low carb lifestyle?
If you are actively trying to lose weight, then avoid alcohol or at the very least cut down on it. The occasional glass once your weight has stabilised might, however, do more good than harm. Alcohol is not metabolised as carbohydrate and does not send blood sugar into orbit; however, alcohol will become your body's first energy source before carbohydrate or fat, hence it is not the best option when actively trying to lose weight. There are vastly differing carb counts in different types of alcohol. Wine, red and white, is a better option than beer (sorry guys, keep those beers to a minimum or consider low carb options), and if you drink spirits use diet options for mixers.
Do yourself a favour though, and keep alcohol consumption to a minimum. You are more likely to make the wrong food choices when drinking alcohol, and very likely to make the wrong food choices the following day after overindulging! |
0.968387 | Bipolar and narcissism: Is there a link?
Is there a link between bipolar and narcissism?
Bipolar disorders are mood disorders that cause extreme high and low moods. During a manic episode, symptoms of bipolar might be confused with narcissistic traits, such as a heightened sense of importance or lack of empathy.
Narcissism is not a symptom of bipolar, and most people with bipolar are not narcissistic. However, some people with bipolar may display narcissistic traits as a result of their other symptoms.
In this article, we take a look at the relationship between bipolar disorder and narcissism, including symptoms and treatment.
What are bipolar and narcissism?
Narcissism is characterized by feelings of grandiosity and self-importance.
Bipolar disorders are mood disorders that cause a person to cycle between extremely high moods, called mania, and in some cases, depression. A person may have bipolar I disorder or bipolar II disorder.
A related condition, called cyclothymic disorder, involves cycling between less intense manic and depressive episodes.
Narcissism is a personality trait that involves feelings of self-importance, grandiosity, and a need for validation. Narcissism can be a behavior that occurs in otherwise psychologically healthy people.
A person whose personality is characterized by narcissistic tendencies may have narcissistic personality disorder (NPD).
NPD is part of a group of personality disorders called cluster B disorders. These conditions are characterized by dramatic, emotional, or unpredictable thinking and behavior.
The Diagnostic and Statistical Manual of Mental Disorders 5 (DSM-5) does not list narcissism as a symptom of bipolar disorder. However, when a person with bipolar experiences an episode of mania, they may display some narcissistic behaviors, such as high levels of confidence, feelings of self-importance, elevated energy levels, and grandiose self-perceptions.
Because bipolar and NPD have some similar symptoms, the two conditions can be confused. This can result in people with bipolar being diagnosed with NPD and vice versa.
During periods of depression, a person with a bipolar disorder might also display narcissistic characteristics. For example, a person might neglect caring duties, avoid social contact, or appear insensitive to the needs of others.
This might seem to be narcissistic, but it is more likely that the person is so overwhelmed by their own negative emotions that they do not notice other people's feelings.
To diagnose someone with a personality disorder such as NPD, a doctor must be sure that another condition cannot better explain their symptoms. So, when narcissistic behavior is due to depression or mania, the DSM-5 argues that it is not appropriate to make a diagnosis of NPD.
People with bipolar disorders experience intense mood swings that last for a period of time. To qualify as mania, an episode must last at least 7 days, or less if the symptoms are so severe that hospitalization is required. To receive a diagnosis of a major depressive episode, a person must exhibit the symptoms of depression for at least 2 weeks.
A person with bipolar I disorder may only have manic symptoms.
These mood swings that people with bipolar experience occur independently of other life circumstances that can cause high and low moods. Also, these fluctuations are more pronounced than the mood swings most people experience.
Manic or hypomanic episodes: periods of a highly inflated mood that may include high self-esteem, increased sense of self-worth, high energy, little sleep, or aggression.
Depressive episodes: periods of a depressed mood that may cause intense sadness, guilt, shame, excessive sleep, low energy, and hopelessness.
To be diagnosed with narcissistic personality disorder, a person must display narcissism that significantly interferes with their relationships or functioning.
Managing extreme emotions may be helped by talking therapies.
Bipolar is a chronic condition. There is no cure, but it is treatable. Most people with bipolar can learn how to manage their symptoms to lead a happy, healthy life.
Medication. Mood medication can help people with bipolar have fewer and less severe mood swings. Lithium, a mood stabilizer, is one of the most popular bipolar treatments. Some people also take antidepressant drugs, antipsychotic drugs, or anti-anxiety medication.
Therapy. Talking therapy and behavioral therapy, such as cognitive behavioral therapy (CBT), can help people identify, understand, and better manage extreme emotions. It may also support people with bipolar to make healthy lifestyle changes.
Alternative medicine. Complementary remedies may help some people with bipolar, though research is mixed or inconclusive. Herbal supplements such as St. John's wort may not be safe to use with some bipolar medications, so it is important to discuss alternative medicine with a doctor. Some people with bipolar also find that acupuncture and lifestyle changes, such as exercise and diet changes, can help.
Electroconvulsive therapy (ECT). For people who do not see improvements in their symptoms with medication and treatment, electroconvulsive therapy (ECT) may help. ECT delivers a mild shock to the brain. Doctors are still not sure why or how it works, but it does reduce symptoms of bipolar and some other mental health conditions.
An accurate diagnosis is critical for managing bipolar, especially when it co-occurs with narcissistic personality traits. People who think they have a mental health condition should work with a skilled clinician and should not self-diagnose or self-medicate.
Narcissistic personality disorder and bipolar disorders can be frustrating both for the people they affect and for those who love them.
What looks like narcissism in a person with bipolar might be something else. Likewise, people with narcissistic personality disorder might be incorrectly diagnosed with bipolar.
Narcissistic traits that can come with bipolar disorders are not a choice. It does not mean someone is a bad person. Bipolar disorders are treatable medical conditions.
Using narcissism to label a person as bad can be harmful, may undermine the problematic reality many people with mental health problems face, and can even deter treatment. A 2014 report argues that stigma is a significant barrier to people accessing quality mental health care.
Quality treatment requires an accurate diagnosis. With proper treatment and a strong relationship with a skilled provider, people with narcissism and bipolar can heal, have good relationships with others, and live happy lives.
Villines, Zawn. "Is there a link between bipolar and narcissism?" Medical News Today. MediLexicon, Intl., 30 May 2018. Web. |
0.956862 | The Crown of Stephen Bocskai is a crown given by the Ottoman sultan to Stephen Bocskai, Prince of Hungary and Transylvania, in the early 17th century. It was produced from gold, rubies, spinels, emeralds, turquoises, pearls and silk (height 0.235 m, weight 1.88 kg).
To save the independence of Transylvania, Bocskay assisted the Turks. In 1605, as a reward for his part in driving Basta out of Transylvania, the Hungarian Diet assembled at Medgyes/Mediasch (Mediaş) elected him Prince of Transylvania; in response the Ottoman sultan Ahmed I sent a special envoy to greet Bocskay and presented him with a splendid jewelled crown made in Persia. Bocskay refused the royal dignity, but made skillful use of the Turkish alliance.
The crown is today displayed in the Kaiserliche Schatzkammer (Imperial Treasury) at the Hofburg in Vienna.
Sigismund Báthory was Prince of Transylvania several times between 1586 and 1602, and Duke of Racibórz and Opole in Silesia in 1598. His father, Christopher Báthory, ruled Transylvania as voivode of the absent prince, Stephen Báthory. Sigismund was still a child when the Diet of Transylvania elected him voivode at his dying father's request in 1581. Initially, regency councils administered Transylvania on his behalf, but Stephen Báthory made János Ghyczy the sole regent in 1585. Sigismund adopted the title of prince after Stephen Báthory died.
Gabriel Bethlen was Prince of Transylvania from 1613 to 1629 and Duke of Opole from 1622 to 1625. He was also King-elect of Hungary from 1620 to 1621, but he never took control of the whole kingdom. Bethlen, supported by the Ottomans, led his Calvinist principality against the Habsburgs and their Catholic allies.
The King of Hungary was the ruling head of state of the Kingdom of Hungary from 1000 to 1918. The style of title "Apostolic King of Hungary" was endorsed by Pope Clement XIII in 1758 and used afterwards by all Monarchs of Hungary.
John Sigismund Zápolya or Szapolyai was King of Hungary as John II from 1540 to 1551 and from 1556 to 1570, and the first Prince of Transylvania, from 1570 to his death. He was the only son of John I, King of Hungary, and Isabella of Poland. John I ruled parts of the Kingdom of Hungary, with the support of the Ottoman Sultan Suleiman; the remaining areas were ruled by Ferdinand I, who also claimed Hungary. The two kings concluded a peace treaty in 1538 acknowledging Ferdinand's right to reunite Hungary after John I's death, but shortly after John Sigismund's birth, and on his deathbed, John I bequeathed his realm to his son. The late king's staunchest supporters elected the infant John Sigismund king, but he was not crowned with the Holy Crown of Hungary.
Gabriel Báthory was Prince of Transylvania from 1608 to 1613. He was the nephew of Andrew Báthory, who was prince of Transylvania in 1599. After his father died in 1601, the wealthy Stephen Báthory became his guardian and converted him from Catholicism to Calvinism. He sent Gabriel to the court of Stephen Bocskai in Kassa in Royal Hungary in early 1605. Gabriel inherited Stephen's estates, which made him one of the wealthiest noblemen in Bocskai's realm. Bocskai allegedly regarded Gabriel as his successor, but Bálint Drugeth was named as his heir in his December 1606 will.
Stephen Bocskai or Bocskay was Prince of Transylvania and Hungary from 1605 to 1606. He was born to a Hungarian noble family. His father's estates were located in the eastern regions of the medieval Kingdom of Hungary, which developed into the Principality of Transylvania in the 1570s. He spent his youth in the court of the Holy Roman Emperor, Maximilian, who was also the ruler of Royal Hungary.
Sigismund Rákóczi was Prince of Transylvania from 1607 to 1608. He was the son of János Rákóczi, a lesser nobleman with estates in Upper Hungary. Sigismund began a military career as the sword-bearer of the wealthy Gábor Perényi in Sárospatak. After Perényi died in 1567, Sigismund served in the royal fortresses of Eger and Szendrő. The royal chamber mortgaged him several estates to compensate him for unpaid salaries. He received Szerencs in 1580, which enabled him to engage in the lucrative Tokaji wine trade. He took possession of the large estates of András Mágóchy's minor sons as their guardian, and the second husband of their mother Judit Alaghy, in 1587.
The Principality of Transylvania was a semi-independent state, ruled primarily by Hungarian princes. Its territory, in addition to the traditional Transylvanian lands, also included eastern regions of Hungary, called Partium. The establishment of the principality was connected with Treaty of Speyer. However Stephen Báthory's status as king of Poland also helped to phase in the name Principality of Transylvania. It was usually under the suzerainty of the Ottoman Empire; however, the principality often had dual vassalage in the 16th and 17th centuries.
George I Rákóczi was Prince of Transylvania from 1630 until his death in 1648.
The Eastern Hungarian Kingdom is a modern term used by historians to designate the realm of John Zápolya and his son John Sigismund Zápolya, who contested the claims of the House of Habsburg to rule the Kingdom of Hungary from 1526 to 1570. The Zápolyas ruled over an eastern part of Hungary, while the Habsburg kings ruled the west. The Habsburgs tried several times to unite all Hungary under their rule, but the Ottoman Empire prevented this by supporting the Eastern Hungarian Kingdom.
Andrew Báthory was the Cardinal-deacon of Sant'Adriano al Foro from 1584 to 1599, Prince-Bishop of Warmia from 1589 to 1599, and Prince of Transylvania in 1599. His father was a brother of Stephen Báthory, who ruled the Polish–Lithuanian Commonwealth from 1575. He was the childless Stephen Báthory's favorite nephew. He went to Poland at his uncle's invitation in 1578 and studied at the Jesuit college in Pułtusk. He became canon in the Chapter of the Roman Catholic Diocese of Warmia in 1581, and provost of the Monastery of Miechów in 1583.
The Bocskai uprising was a great revolt in Hungary, Transylvania and modern Slovakia between 1604 and 1606 against Rudolf II, Holy Roman Emperor, during the Long Turkish War. The leader of the rebels was István Bocskai, a significant Protestant Hungarian nobleman. The great Ottoman war burdened the Hungarian Kingdom and led to famine and epidemics. The armies of the Christian states caused as much destruction as the Ottoman and Tatar forces. |
0.99955 | Now it's time to put my store to a final test. I would really appreciate any suggestions. Thank you very much!
Your store looks very inviting - I love the theme colors and the fancy title fonts. I understand why you'd want to combine the cat products and music products, however you'd earn more authority and trust by specializing in one niche. People will feel more confident buying from you when they see you focus and specialize in one niche. They'd feel they can trust your expertise more.
Also, make sure to be consistent with your collections - for example you have a puppy collar https://www.bflatcat.com/collections/for-cat-lovers/products/piano-puppy-collar in your cat's collection.
Your About us page is excellent - you build trust by telling what inspired you to open this store; perfect. Consider also adding a photo of yourself [or anyone involved with making the store] to build a rapport and trust further.
It will help you build trust, tailor the content to your own store and avoid any duplicate content issues, which might affect your Google rank.
Shipping policy - consider adding this page with a table with the delivery rates at the top, so people know right away how much you are charging, and whether you ship internationally or not. Many online shoppers often look for this info before placing an order, to find out the shipping cost, so having a detailed policy can also help you boost your conversion rate.
Contact us - consider adding this page as well. It will be much more convenient for your customers to get in touch with you than to copy the email address from the footer, go to their mailbox, paste it and then email you - it's quite a drag for online users - which might affect your conversion rate. Make sure it's easy to contact you and people will be happy to get in touch to discuss their custom orders.
Consider using page.contact as a template for this page in your Shopify admin, so the contact form is displayed automatically. This way people can contact you directly from your website.
Rockpapercopy Hi Maggie, thank you for your very long critique!
I will implement all of your advice.
At the top of the website, and below the prices on every product page, it says FREE Shipping. (On the product page: Tax and shipping included.) Is that not enough? Should I write a unique Shipping page saying: Yes, everything is free?
There is a Contact page in the menu. Should I write "Us" there too?
And again: thank you for your helpful thoughts!
Hi Spesius -- Great questions and the store is looking great. Just to follow up on your previous questions.
Yes, definitely create a shipping and refunds page where you clearly communicate the whole process for both shipping or a refund. This is great for building trust and transparency.
Contact Us, Contact, Reach Out, Say Hi. It doesn't really matter. So long as you come across as human and that you appear to be at the service of your customers, that's what matters. Hope that helps!
I think everyone has covered all the essentials; just wanted to say I love the favicon!! |
0.999999 | Hey guys, need help on solving question (b) so I know how to approach a similar question in the exam.
I worked this out by using the formula N!/((N-K)!K!) for each option and multiplied the outcomes together: 2 × 3 × 3 × 6 = 108 different combinations.
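To sanity-check that product-rule calculation, here is a minimal Python sketch (the per-option counts 2, 3, 3 and 6 are taken from the working above; everything else is illustrative):

```python
# Product rule: independent menu choices multiply.
option_counts = [2, 3, 3, 6]  # counts computed above via N!/((N-K)!K!)

total = 1
for n in option_counts:
    total *= n

print(total)  # 108 different coffee combinations
```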
(b) If a customer buys three coffees, all different, how many possible orders are there? Explain the reasoning behind your answer as well as giving a method and a numerical result.
How would you guys go about this?
(108 × 107 × 106)/(1 × 2 × 3). I'm sure you can do the arithmetic.
I didn't think the number of combinations would be more than the number of options available, though.
For the first coffee there are 108 choices, for the second (different) there are 107, and for the third 106. Since the choice order doesn't matter divide by 3!.
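Since the three coffees must all be different and the order of choosing doesn't matter, this is "108 choose 3". A minimal Python check (math.comb is in the standard library from Python 3.8):

```python
from math import comb

# Ordered choices of 3 distinct coffees, then divide out the 3! orderings.
n_orders = (108 * 107 * 106) // (1 * 2 * 3)

assert n_orders == comb(108, 3)  # same result via the built-in binomial
print(n_orders)  # 204156
```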
2 appetizers, 5 entrees, 3 desserts.
There are: 2 × 5 × 3 = 30 possible dinners.
There are only 52 cards in a standard deck of cards.
But there are 2,598,960 possible 5-card poker hands.
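The figure quoted above follows from the same binomial coefficient; a one-line check in Python (math.comb requires Python 3.8+):

```python
from math import comb

print(comb(52, 5))  # 2598960 possible 5-card poker hands
```
|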
0.994344 | Do the bottles come with spray nozzles?
No, this product is a professional use product and has to be applied through a hand pump sprayer. It does not come with a pump sprayer. |
0.99997 | The article gives a good overview of the structure, organization and history of GANIL and indicates the major science subjects to which GANIL has contributed and continues to contribute. It includes a thorough account of the SPIRAL2 project: its science aims, the infrastructure to be constructed and the organization.
The date of the award of European Large Scale Facility status is quoted as 1993 in the text of the first section, but as 1995 in the list later in the same section.
In the subsection "The Science Pillars of GANIL" there is a possible ambiguity in the halo-part. The experiment that started the field of halo physics was published in 1985, so in a sense the halo was discovered in 1985 whereas the nuclei 6He, 11Li and 11Be having halos were known much earlier.
Section 3.1, towards the end, in the discussion of the "NEUTRON Detector", mentions neutron-deficient nuclei explicitly. Maybe neutron-rich nuclei instead (or on top)?
Section 3.2, around the middle, says "After being selected in the ESFRI List(5)". Is "(5)" a missing reference?
Section 2, 5th sentence "..of the accelerators are as well as a review.."
Section 2, following sentence "Plans for future plans.."
Section 2, at the end, last sentence in subsection "Charge breeding for SPIRAL1": "Such a solution due its.."
Please re-read the paragraph "SPIRAL2 has also remarkable potential.." in section 3 directly following the bullet list. Some phrasings may be improved.
Section 3.1, first paragraph: the first building out of the 5 mentioned is not in the bullet list.
Section 3.1, subsubsection "The SPIRAL2 LINAC": "The first 12 cryomodules...to accelerate contain cavities..", "containing" in the following sentence should be "contain". |
0.980212 |
Dating back to 1885, when the modern "safety bicycle" was created, the bicycle has been a main type of transportation in many regions of the world and still is today. Not only is a bicycle a greener mode of transportation, it is also great exercise. Like any of life's accessories, if you know how to shop, your bicycle can be your most fabulous accouterment and your greatest tool outside of your gym bag. With many varieties to explore, I will take you on a tour of some of the options when it comes to finding the right bicycle for not only your personal style, but your region's terrain as well.
For the general, casual biker, the perfect fit is usually the road bike, the most popular type when shopping. The road bike, built for traveling at medium to high speeds on paved roads, is designed less for fast bursts of speed and more for endurance and long distances when commuting or exercising. The road bike typically has more gear combinations, similar to a mountain bike, and fewer high-tech features, making it user-friendly. Tires are usually narrower and high-pressure, so rocky or uneven terrain is usually inadvisable.
Mountain bikes can be classified into four categories based on suspension: rigid, hard tail, soft tail and full. This is important to inquire about when shopping because it will give the consumer insight on how the suspension handles and how the wheels move. The mountain bike is made for the unpaved environment and is therefore used mostly for sport and exercise rather than general transportation. Mountain bikes differ from road bikes in many ways. The first is larger tires made for gripping rocks and gravel. Second, their brakes are more powerful than a general street bike's, and they have lower gear ratios for steep pathways with poor traction.
The racing bike is the standout star athlete in the group. Sleek, lightweight and lightning fast, these beautiful bikes are built for speed. Competitive bike racing is a widely popular sport, and athletes look for the best of the best when purchasing their bicycles. Unlike the bells and whistles of the mountain and road bikes, these bikes have minimal accessories, drop handlebars and narrow tires. The handlebar design allows for an aerodynamic riding position, and the gear range is minimal. This allows the bike to operate at optimum efficiency. Carbon fiber is used in the higher-end models of the racing category and makes for a very lightweight bike.
The beach cruiser is built for style, and it is my personal favorite. Unlike their lightweight counterparts, beach cruisers are built with a heavy frame and designed for comfort over performance. Padded seats and curved-back handlebars allow the rider to sit comfortably and ride with ease. The cruiser model was the bike standard throughout America in the '40s and '50s. Traditionally, beach cruisers operate on a single speed and have easy brakes. Today you can find models with up to seven speeds, and the introduction of aluminum frames has made them lighter than they once were. Most popularly seen near the beaches, riding along the boardwalks, beach cruisers are available worldwide.
The tandem bicycle, introduced in 1898, was developed to allow multiple riders to use one bicycle together as transportation. Now used not only in casual biking but in racing as well, the tandem bicycle is still popular and relevant today. A popular vacation rental, it seats two (or more) individuals, one in front of the other, for a cruise. In comparison to a traditional one-seat bicycle, the tandem bike has double the pedaling power, as two individuals work together, yet it has roughly the same wind resistance and weighs less than twice as much as a single. With those factors taken into consideration, a tandem cycle can reach higher speeds than a single-rider bike, making it a good choice for someone who wants to get around quicker and who loves company.
When you mix a passion for fine automobiles with the love for luxury, beautiful things can materialize. And, when stars have the means to drive lavish transportation, we see that sometimes they let their imaginations run wild. Others, however, opt for more sensible and environmentally friendly options. Below we have chosen our favorite car enthusiasts in fashion, movies and music.
Supermodel Gisele Bundchen opts for sleekness and sophistication in her vehicles. As an owner of a Rolls Royce Ghost, BMW X5, Audi S8 and a black Audi A8, she has choices when it comes time to tote little Vivian and Benjamin around Brentwood or head to a fitting. Most often seen in her Audi A8, we've chosen it in particular to spotlight. The car is equipped with a 3.0 V6 engine and exhibits seamless performance on the road. Its plush interior makes it a great choice for a family car that is still luxurious and stylishly classic. Would you expect anything less from one of the world's most successful supermodels?
"Wolf of Wall Street" star Leonardo DiCaprio drives the king of hybrids. The owner of a Fisker Karma hybrid sports car, a highly impressive and rare vehicle, DiCaprio has been spotted driving it around Los Angeles. Reaching 60 miles per hour in just under six seconds, with a max speed of 125 miles per hour, this car has the ability to put on a good show and still be a better choice for the environment than your average sports car. Also keeping with the eco-friendly theme, no animal products were used in the interior design of the car. So, if you are looking for not only your lunch to be vegan, but your vehicle as well, talk to Leo. For a cool $100,000, he can hook you up.
George Clooney is not only one of the hottest bachelors to reside in Hollywood; he is also an avid classic car fan and car-racing aficionado. On sunny California days, the Studio City resident can be seen riding casually around town in his 1958 Chevrolet Corvette V8 C1 convertible. This style in particular was considered a style icon in the 50’s with its chrome finished, trendy appearance and double headlamps. A seasonal resident of Italy, Clooney also had the interior refinished in fine Italian leather. Fantastico!
Actress, philanthropist and owner of health conscious cleaning and product company, Honest, Alba tops our list as the most sensible car owner. Her silver Prius shows us that she cares for her environment by not only using clean, earth friendly products in her home, but by driving a car that is cleaner for Mother Earth. We respect the down-to-earth actress, and fashion week front-row spectator, for her choices for not only herself and her family, but for those around her.
The Paper Crown designer shows us, through her ride of choice, that she has moved on from her "The Hills" days and onto the finer things in life. A fashionable statement piece in itself, her Bentley Continental Flying Spur is both luxurious and pricey. With a price tag starting at $186,000, Conrad cruises around L.A. in the metallic black Bentley, which can go from 0 to 62 MPH in just 4.9 seconds. The car, maxing out at an impressive 200 MPH, is not only beautiful but high-performance in its own right. An added bonus: its four-zone climate control system for ultimate comfort.
Though Ms. Jenner owns a blacked-out Range Rover that she received for her birthday, she is also a lover of dirt bikes and racecars. So we thought we would feature her love of fast rides and spotlight her experience driving a Bugatti Veyron Vitesse at the North American Lamborghini Blancpain Super Trofeo races. Developed by the Volkswagen Group, it is currently one of the fastest street-legal vehicles on the market, reaching a top speed of just over 267 miles per hour. Kendall drove it at modest speeds, with her father in the passenger seat, but said it was "still a cool experience." She has had better luck with automobiles than her younger sister Kylie, who crashed her $125,000 Mercedes just two short weeks after receiving her driver's license. Ouch! That's got to be a hard thing to live down with all those siblings.
It is no surprise that the controversial starlet owns a car that practically screams "living on the edge." Cyrus' metallic gray McLaren MP4 is a super sports car that starts at a steep $200,000. Reaching speeds of over 120 MPH in just under 10 seconds, it is practically a racecar itself. The aerodynamic design is sleek, and the spacious interior features the highest-quality leather throughout.
The gorgeous and talented Mila Kunis opted for safety and security when she chose her Range Rover Overfinch. Though a trendy option in Hollywood, it is a great choice for a star as visible as Kunis. Equipped with a gun cabinet and ammunition storage, as well as a champagne fridge and a console that holds tumblers and champagne flutes, it is not only a safe vehicle but a decadent and luxurious one. Powered by a V8 Corvette engine, the Overfinch is a powerful vehicle that can handle most driving situations with ease. An added bonus: plush carpeting throughout the interior!
The Barbados native drives a Porsche 997 turbo with an enhanced performance engine. Rihanna shows she can handle a stick, as this car has a six-speed manual transmission and a top speed of over 120 MPH. The Porsche 997 is a fitting ride for a girl who likes to live a little dangerously.
Fashion great Ralph Lauren may be best known for his exquisite designs in men's and women's clothing, among many other branches including his fragrances and accessories, but few know that he is an avid car collector and tops our list of car aficionados. Owner of his own Ferrari museum and car garage, his automobile knowledge and love for all things auto are endless. His vast collection includes a 1929 Bentley Blower, a 2010 Lamborghini Murcielago Super Veloce, a 2006 Bugatti Veyron, a 1996 McLaren F1 LM, a 1965 Ferrari P2/3 and a Ferrari 250 GTO, to name a few. Lauren also owns one of the most valuable and rare Ferraris ever produced.
0.986437 | One of the hottest topics in artificial intelligence is neural networks. Neural networks are computational models based on the structure of the brain. They are information-processing structures whose most significant property is their ability to learn from data. These techniques have achieved great success in domains ranging from marketing to engineering.
There are many different types of neural networks, of which the multilayer perceptron is the most important one. The characteristic neuron model in the multilayer perceptron is the so-called perceptron. In this article we explain the mathematics behind this neuron model.
As we have said, the neuron is the main component of a neural network, and the perceptron is the most commonly used neuron model. The following figure is a graphical representation of a perceptron.
Its parameters are the bias b and the synaptic weights (w1,...,wn).
As an example, consider the neuron in the next figure, with three inputs. It transforms the inputs x=(x1, x2, x3) into a single output y. The elements of this neuron are:
The inputs (x1, x2, x3).
The neuron parameters, which are the bias b=-0.5 and the synaptic weights w=(1.0,-0.75,0.25).
The combination function, c(·), which merges the inputs with the bias and the synaptic weights.
The activation function, which is set to be the hyperbolic tangent, tanh(·), and takes that combination to produce the output from the neuron.
The parameters of the neuron consist of a bias and a set of synaptic weights.
The bias b is a real number.
The synaptic weights w=(w1,...,wn) form a vector whose size equals the number of inputs.
Therefore, the total number of parameters in this neuron model is 1+n, where n is the number of inputs to the neuron.
The bias is b = -0.5.
The synaptic weight vector is w=(1.0,-0.75,0.25).
The number of parameters in this neuron is 1+3=4.
c = b + ∑i=1,...,n wi·xi.
Note that the bias increases or reduces the net input to the activation function, depending on whether it is positive or negative, respectively. The bias is sometimes represented as a synaptic weight connected to an input fixed to +1.
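To make the combination function concrete, here is a minimal Python sketch; the function name and the input values are illustrative assumptions, not from the original article, but the bias and weights are those of the example neuron above:

```python
def combination(b, w, x):
    # Combination function: c = b + sum of w_i * x_i over all inputs
    return b + sum(wi * xi for wi, xi in zip(w, x))

# Example neuron from the text: b = -0.5, w = (1.0, -0.75, 0.25)
b = -0.5
w = [1.0, -0.75, 0.25]
x = [1.0, 2.0, 3.0]       # an arbitrary input vector, chosen for illustration
c = combination(b, w, x)  # -0.5 + 1.0 - 1.5 + 0.75 = -0.25
print(c)                  # -0.25
```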
The activation function defines the output of the neuron in terms of its combination. In practice, we can consider many useful activation functions. Three of the most widely used are the logistic, hyperbolic tangent and linear functions. Other activation functions that are not differentiable, such as the threshold function, are not considered here.
The logistic function is represented in the next figure.
As we can see, the image of the logistic function is (0,1). This is a good property for classification applications, because the outputs here can be interpreted in terms of probabilities.
The hyperbolic tangent is represented in the next figure.
The hyperbolic tangent function is widely used in approximation applications.
Thus, the output of a neuron with a linear activation function is equal to its combination. The linear activation function is plotted in the following figure.
The linear activation function is also widely used in approximation applications.
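As a sketch of the three activation functions discussed above, using only Python's standard library (the function names are our own):

```python
import math

def logistic(c):
    # Maps any real number into (0, 1); useful for probabilities
    return 1.0 / (1.0 + math.exp(-c))

def hyperbolic_tangent(c):
    # Maps any real number into (-1, 1)
    return math.tanh(c)

def linear(c):
    # The output equals the combination itself
    return c

for f in (logistic, hyperbolic_tangent, linear):
    print(f.__name__, f(-0.25))
```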
The output calculation is the most important function in the perceptron. Given a set of input signals to the neuron, it computes the output signal. The output function is the composition of the combination and activation functions. The next figure is an activity diagram of how information is propagated in the perceptron.
As we can see, the output function merges the combination and the activation functions.
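Composing the two stages gives the output function. Below is a minimal sketch, evaluated on the example neuron from earlier (b = -0.5, w = (1.0, -0.75, 0.25), hyperbolic tangent activation); the helper name and test input are assumptions for illustration:

```python
import math

def perceptron_output(b, w, x, activation=math.tanh):
    # y = activation(combination): first merge inputs, bias and weights,
    # then pass the result through the activation function
    c = b + sum(wi * xi for wi, xi in zip(w, x))
    return activation(c)

y = perceptron_output(-0.5, [1.0, -0.75, 0.25], [1.0, 2.0, 3.0])
print(y)  # tanh(-0.25) ≈ -0.2449
```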
An artificial neuron is a mathematical model of the behavior of a single neuron in a biological nervous system.
A single neuron can solve some very simple learning tasks, but the power of neural networks comes when many of them are connected in a network architecture. The architecture of an artificial neural network refers to the number of neurons and the connections between them. The following figure shows a feed-forward network architecture of neurons.
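To hint at how single perceptrons compose into a network, here is a small forward-pass sketch with one hidden layer; the layer sizes, weights and input are arbitrary illustrative values, not taken from the figure:

```python
import math

def layer_forward(biases, weights, x, activation=math.tanh):
    # Apply every perceptron in a layer to the same input vector x
    return [activation(b + sum(wi * xi for wi, xi in zip(w, x)))
            for b, w in zip(biases, weights)]

x = [0.5, -1.0, 2.0]                          # network input
hidden = layer_forward([0.1, -0.2],           # two hidden neurons
                       [[1.0, 0.5, -0.3],
                        [-0.7, 0.2, 0.9]], x)
output = layer_forward([0.0], [[0.4, -0.6]],  # one linear output neuron
                       hidden, activation=lambda c: c)
print(output)
```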
Although in this post we have seen the functioning of the perceptron, there are other neuron models which have different characteristics and are used for different purposes. Some of them are the scaling neuron, the principal components neuron, the unscaling neuron or the probabilistic neuron. In the above picture, scaling neurons are depicted in yellow and unscaling neurons in red.
0.999988 | List of famous people named Gene, along with photos. How many celebrities named Gene can you think of? The famous Genes below have many different professions, as this list includes notable actors named Gene, athletes named Gene, and even political figures named Gene. Did we forget your favorite famous person whose name is Gene? Add them to the list.
Gene Wilder is a famous actor known for his portrayal of Willy Wonka in the classic Willy Wonka and the Chocolate Factory. He is also known for his film collaborations with director Mel Brooks, including Young Frankenstein and Blazing Saddles, and he received several Golden Globe and Academy Award nominations for his acting, and even one for his screenwriting.
Gene Hackman has received a plethora of awards and nominations, including two BAFTAs and five Academy Award nominations. Some of his work includes The Royal Tenenbaums and The Birdcage. After retiring from acting, he even took on the world of historical fiction: he has written several novels, one of which is entitled Justice for None.
Gene Fullmer is an American former middleweight boxer and world champion.
Eugene Bertram "Gene" Krupa was an American jazz and big band drummer, band leader, actor, and composer, known for his highly energetic and flamboyant style.
Eugene "Gene" Lockhart was a Canadian character actor, singer, and playwright. He also wrote the lyrics to a number of popular songs. |
0.999997 | From a climate perspective, 2016 was a fairly wet year in California. Northern California reservoirs filled, and some went into flood-control releases. That should be good news for California's cities and farms as they start to dig out of the deep drought hole of the past several years; overall, the past 10 years have been the driest on record. Yet despite full reservoirs in Northern California, large areas south of Sacramento, in the San Joaquin Valley, are still severely short on water supplies. While many cities have been able to end their water-rationing programs, hundreds of thousands of acres of farmland have been allocated a 5-percent water supply by the federal government. To make matters worse, for several farming areas that are customers of the federal Central Valley Project, the Bureau of Reclamation has "borrowed" hundreds of millions of dollars' worth of water with no apparent ability to repay it.
Tensions are running high in the farming and water-management communities. The reservoir that supplies water to the farmers, and also provides water to Silicon Valley, is essentially empty. How does this happen in a year when the reservoirs that typically provide the water are brimful? Enter the Endangered Species Act and politics.
In 2014 and 2015, water managers for the state and federal governments that operate the biggest reservoirs in California were flying by the seat of their pants. The state had not experienced this type of water shortage before, and rules were being made up as the drought progressed. According to federal biologists, winter-run salmon populations were decimated by warm temperatures. Other Endangered Species Act-protected species, such as the spring-run Chinook salmon and the tiny Delta smelt, also allegedly saw their population numbers severely dwindle. There is much speculation that the Delta smelt may be extinct. So, to avoid fish dying in warm river temperatures, the state and federal fishery managers ordered that water be held in the upstream reservoirs to provide "cold water" to be released later in the year to lower river temperatures and help save fish. At the same time, other fishery managers were demanding water releases to protect the Delta smelt, which lives far downstream from these reservoirs in the Sacramento-San Joaquin River Delta. Meanwhile, water normally available for human uses was allowed to flow out to the ocean at times when it could not serve the double duty of meeting both fishery needs and human uses. As a result, the problems with California's water management continue to unfold. At this point, what it means for fisheries and farmers is that the system, as currently managed, is unsustainable.
0.985298 | You have $N$ objects, each with $M$ copies. How many ways are there to take exactly $K$ of them?
The first line of input contains three integers, $N$, $M$ and $K$ respectively, subject to $1 \leq N, M, K \leq 10^5$.
Output the number of ways. As the number of ways could be large, output it modulo $10^6 + 7$.
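One way to read the problem is as counting the integer solutions of x1 + ... + xN = K with 0 ≤ xi ≤ M. The sketch below is an assumed dynamic-programming approach with prefix sums; it is correct but runs in O(N·K), which is too slow at the largest stated limits (where an inclusion-exclusion formula over binomial coefficients is the usual faster route), so treat it as an illustration of the recurrence rather than the intended solution:

```python
MOD = 10**6 + 7

def count_ways(N, M, K):
    # dp[k] = ways to take k items from the objects processed so far,
    # using between 0 and M copies of each
    dp = [0] * (K + 1)
    dp[0] = 1
    for _ in range(N):
        # prefix[k+1] = dp[0] + ... + dp[k]; adding one more object with
        # 0..M copies becomes a sliding-window sum over dp
        prefix = [0] * (K + 2)
        for k in range(K + 1):
            prefix[k + 1] = (prefix[k] + dp[k]) % MOD
        dp = [(prefix[k + 1] - prefix[max(0, k - M)]) % MOD
              for k in range(K + 1)]
    return dp[K]

print(count_ways(2, 2, 2))  # x1 + x2 = 2 with 0 <= xi <= 2: three ways
```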
0.999506 | I was playing netball and it started pouring rain out of nowhere, and then everyone started spewing. I was so confused. There was lightning and thunder. We all ran into the sheds.
"What? Is what I said.
The silence was deafening. I looked over the road; houses were falling down. I started crying. My sister ran out and got shocked by the lightning. Then everyone went out and got shocked by lightning. I cried even more!
"What is happening?" I cried.
Then Rosie walked over, munching on an apple. I fell, fell to the ground. I woke up. Wow!
Your story was amazing, I felt like I was there, OMG. It's like your sister got shocked by lightning! It was really funny in my mind to see someone getting shocked by lightning.